The Chugach National Forest was established in 1907 and is the second largest forest in the National Forest System. The forest extends for over 200 miles along the Alaska coastline southeast of Anchorage, encompasses over 5 million acres, and is bordered by two national parks, a state park, and a national wildlife refuge. A forest supervisor located in Anchorage manages the Chugach, and a regional forester located in Juneau has overall responsibility for the Chugach and the other national forest in Alaska. As shown in figure 1, the Chugach covers four major areas, each of which presents agency planners and the public with significantly different issues. The westernmost major area of the Chugach is the Kenai Peninsula area. This area—the nearest to Alaska’s largest urban population center, Anchorage—is a growing recreation area, especially for motorized recreation such as snowmobiles and all-terrain vehicles. It also has a significant population of Alaska brown bears that the State of Alaska has designated as being of special concern because of their vulnerability to human impacts. The Nellie Juan-College Fiord Wilderness Study Area is a heavily glaciated, remote area lacking roads. The Forest Service had recommended that the Congress designate much of this area as wilderness, where human activities would be significantly limited; pending congressional action on this recommendation, the area is being managed as wilderness. The Prince William Sound area contains large offshore islands, such as Montague Island, where timber has been harvested in the past. The city of Valdez, located on the Sound, is the southern terminus of the Trans-Alaska Pipeline. In 1989, the oil tanker Exxon Valdez spilled millions of gallons of oil into the sound, significantly damaging marine and shore animals and their habitat and requiring a multiyear, multibillion-dollar federal clean-up effort. 
The Copper River Delta area, located at the forest’s eastern edge, is dominated by one of North America’s most significant rivers for salmon production. The city of Cordova contains major fish-processing facilities. At around 2 million acres, the Delta is the most extensive wetlands complex on the Pacific coast of North America and is a highly productive ecosystem for shorebirds, waterfowl, and fish. Portions of the Delta may contain commercial oil and gas. The original Chugach forest plan was adopted in July 1984, but several environmental groups appealed the plan through the Forest Service’s appeal procedures, arguing that the plan was based on an analysis that had been done at too large a geographical scale to identify many actual effects in specific areas on the ground. As a result, they contended, the plan allowed too much development and was likely to damage fish and wildlife resources. The Forest Service and the appellants negotiated a settlement agreement in 1985. In January 1986, Forest Service officials amended the 1984 plan to incorporate the requirements of the agreement. Among other things, the 1986 amendment limited timber sales on the Chugach to an average of about 8 million board feet per year over the next decade, or about half the originally approved plan’s level of over 16 million board feet. The amendment also committed the Chugach to conduct further studies at smaller geographic scales, which might result in additional amendments to the plan. The National Forest Management Act (NFMA) of 1976 requires the Forest Service to, among other things, (1) develop a plan to manage the lands and resources of each national forest in coordination with the land management planning processes of other federal agencies, states, and localities and (2) revise each plan at least every 15 years. The Forest Service’s planning regulations in effect during the Chugach revision process established detailed procedures for developing a forest plan. 
These procedures required the agency to develop several alternatives for managing a forest and to make these alternatives available for public comment. Furthermore, the regulations required the agency to develop an environmental impact statement in accordance with the National Environmental Policy Act of 1969 (NEPA) to accompany each forest plan. An environmental impact statement assesses the effects of a major federal action that may significantly affect the quality of the human environment. In accordance with the process specified in the agency’s then-existing planning regulations, the Chugach Forest Supervisor appointed an interdisciplinary team that began revising the forest plan in April 1997. Agency regulations encouraged the public to participate throughout the planning process in order to (1) broaden the agency’s information base; (2) ensure that the agency understands the needs, concerns, and values of the public; (3) inform the public of the agency’s planning activities; and (4) provide the public with an understanding of the agency’s programs and proposed actions. The interdisciplinary team, working with the public, developed 30 alternative plans. Each alternative proposed a different combination of recreation, wildlife, mining, timber, subsistence, wilderness, and other uses in the Chugach. The team then combined similar alternatives to reduce the number of alternatives to six. It then conducted a detailed comparative analysis of these six alternatives together with a required “no action” (i.e., no change) alternative and a preferred agency alternative developed by the Forest Supervisor that combined features from all six alternative plans. In September 2000, the Supervisor of the Chugach National Forest issued for public comment the agency’s draft preferred alternative together with the detailed comparative analysis of all these alternatives. 
Forest Service officials reviewed public comments received on the preferred alternative and, in response to these comments as well as further study, made changes to the preferred alternative for the Forest Supervisor’s approval. Upon approval, the Forest Supervisor forwarded the proposed final plan and an accompanying environmental impact statement to the Regional Forester for approval. In May 2002, the Regional Forester approved the final revised plan. According to Forest Service officials, the vast majority of forest plans, since the first one adopted in 1982, have been administratively appealed, and many have subsequently been litigated in federal courts. The Forest Service undertook sustained actions to solicit and respond to key public concerns about the revision to the Chugach forest plan. These actions included (1) distributing frequent newsletters on the planning process and its progress, (2) maintaining a Web site on the Internet with links to key planning documents and making available compact discs containing these documents, and (3) holding over 100 meetings in which the public was invited to define key issues and formulate alternatives. These extensive actions went beyond those required under the agency’s planning process and those used in previous forest planning exercises. Also unique to this plan was the intensive work of the Chugach’s interdisciplinary team in what the agency termed a “collaborative learning process” to help members of the public fashion their own varied alternatives. As a result of this outreach, the agency received thousands of comments on its draft preferred alternative. These comments generally reflected differing viewpoints about desirable trade-offs among competing uses of the Chugach. 
For example, many interested parties expressed concern that too much land was being allocated for motorized versus nonmotorized recreation, while others believed that too much land was being proposed for wilderness designations versus more intensive uses. Forest Service officials told us that, in general, nearly all parties agreed that they did not want the Chugach to change from its generally undisturbed character and existing usages, but that they disagreed over what posed the biggest threat to existing conditions and uses. Some thought the greatest threat came from increased development while others felt it came from increased restrictions on existing uses. Most suggested that the revised plan should place emphasis on preserving, rather than developing, the forest’s lands and resources. To respond to these concerns, the agency (1) obtained additional information, (2) held additional meetings with the public, and (3) considered specific changes to its preferred alternative. Throughout the planning process all parties were provided numerous opportunities to place on the record their concerns about forest issues. Some members of the public told us that they felt there were times during the planning process that some Forest Service staff inappropriately expressed personal views on issues but that, during the long planning process, those staff transferred to other agency assignments and their replacements did not seem to share those views. Although some interested and affected parties still had concerns about the results of the revision process and the agency’s draft preferred alternative, virtually all parties told us they believed the Forest Service had included important elements of their views in the draft revised plan. In developing the draft revised plan, the Forest Service obtained and analyzed a vast amount of data on timber harvesting, mineral mining, commercial fishing, recreation and tourism, forest vegetation, and fish and wildlife habitats. 
These data and analyses were used to make various decisions on difficult and sometimes controversial trade-offs among competing forest uses. In three areas, we found that the data and analyses used for making some decisions had limitations that were not disclosed in the draft revised plan. These decisions involved (1) possible commercial harvesting of timber, (2) potential mining for minerals, and (3) protection of a potentially at-risk brown bear population. Possible commercial harvesting of timber. The Forest Service did not calculate the maximum quantity of timber that might be sold over a decade from the area of suitable land covered by the forest plan. The agency did not perform such an analysis because officials believed (1) they were not legally required to do so, (2) commercial timber harvesting was not economically feasible in the Chugach because of the generally low quality of timber in the forest, low market prices, and the lack of nearby large markets, and (3) gathering and analyzing the data needed to calculate the maximum quantity was not worth the time, expense, and difficulty of doing so. However, the agency’s draft revision did not discuss the limitations of the analysis on which this decision was based. Potential mining for minerals. In order to make decisions about areas in the Chugach where mining would be permitted, the Forest Service analyzed a substantial body of data that it had gathered on past mining activities in the forest. However, such past activities were conducted in only a small portion of the forest, typically in areas accessible from existing forest roads. Forest Service officials told us that the Department of the Interior had estimated that it would cost approximately $8 million to survey the entire forest to determine the full potential for mineral mining activities. 
They believed that such costs were not warranted in view of other priorities of the forest competing for limited funds and that they were justified in basing decisions on the information that they had gathered on the past mining activities. However, the agency’s draft revised plan did not discuss the limitations of the analysis on which it based its decisions on mining activities within the forest. Protection of a potentially at-risk brown bear population. An interagency study on the size and trends of the brown bear population on the Kenai Peninsula of the Chugach, which may be at risk from human activities, was produced while the Forest Service was reviewing public comments on its draft revised plan. This study, in which the Forest Service participated, reported that data are not available to determine whether a stable brown bear population currently exists in the peninsula and whether additional measures are needed to maintain the population’s viability in the presence of all types of human uses of the peninsula. The study calls for additional research to help answer these questions. The Forest Service’s draft revised plan did not disclose the findings of this interagency study, nor did it identify steps that the agency would take to obtain additional data. Nor had the agency referenced the study in changes made to the draft during the comment period while our review was being conducted. Should evidence suggesting serious problems with the population trends of the Kenai brown bear become available during the time frame covered by the Chugach’s revised forest plan, it may be necessary to make changes to the plan that could unexpectedly alter planned human uses in some areas of the forest. 
In March 2002, we met with the Forest Supervisor and other Forest Service officials and told them that our review indicated that limitations existed in some of the agency’s data and analyses and that the draft plan neither disclosed such limitations nor identified planned actions to address them. In May 2002, the agency issued a final revised Chugach forest plan. Our review of the final plan and discussions with agency officials indicate that the agency has addressed our concerns by (1) agreeing to augment its analysis of data regarding timber harvesting in the forest, (2) explaining the limitations of data on potential mineral deposits in the forest and the agency’s decision not to incur the costs associated with performing a comprehensive survey to determine the potential for mining minerals throughout the forest, and (3) referring to the findings of the interagency brown bear study and the agency’s planned monitoring of the brown bear population. In addressing these concerns, the agency also completed an internal science consistency evaluation that considered the data and their limitations. We provided a draft of this report to the Supervisor of the Chugach National Forest for review and comment. He generally concurred with our findings and made certain technical suggestions that we incorporated as appropriate. We conducted our work from August 2001 through July 2002 in accordance with generally accepted government auditing standards. We visited the Chugach National Forest and obtained the views of and related documentation from Forest Service, state, industry, and environmental group officials located in Alaska and Forest Service headquarters officials located in Washington, D.C. We are sending copies of this report to the Secretary of Agriculture and the Chief of the Forest Service. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please call me at 202-512-3841. 
Key contributors to this report were Charles S. Cotton, Richard P. Johnson, Chester M. Joy, and Edward A. Kratzer. In 1980, after several years of consideration, the Congress passed the Alaska National Interest Lands Conservation Act (ANILCA), which, among other things, addressed ongoing land disputes between the federal government and the state. Specifically, the act set aside millions of acres in “conservation system units,” a statutory term including national parks, wildlife refuges, wilderness areas, and wild and scenic rivers. The law also rescinded numerous public land withdrawals within Alaska by the President and the Department of the Interior that had occurred in the 1970s. Section 1326(a) of ANILCA limited future executive branch land withdrawals of more than 5,000 acres in Alaska. In addition, section 1326(b) of ANILCA prohibited “further studies of federal lands for the single purpose of considering the establishment of a conservation system unit” or other similar units unless authorized by ANILCA or future legislation. The Forest Service and other interested parties hold differing views on how this section should be interpreted with regard to the Forest Service’s legal authority to recommend to the Congress that portions of the Chugach be managed as wilderness areas. Some supporters of greater development maintain that section 1326(b) prohibits the Forest Service from making any such recommendations outside the existing Wilderness Study Area in the forest. On the other hand, the Forest Service maintains that it may do so without violating ANILCA, as long as the recommendation is not based upon a study performed for the single purpose of designating an area as wilderness. Stakeholders also disagree over the proper interpretation of Section 501(b) of ANILCA. 
Some supporters of greater development believe that this section requires that the Copper River Delta within the Chugach be managed for fish and wildlife conservation and prohibits the Forest Service from recommending to the Congress that any portion of the delta be designated as a wilderness area. The Forest Service believes that the delta can contain wilderness designations so long as the applicable management direction for the delta provides for the primacy of fish and wildlife conservation. In Sierra Club v. Lyons, the court stated: “A review of the record before the Court reveals that the Forest Service did not study rivers in Alaska for the single purpose of considering the establishment of a conservation system unit. Rather the Forest Service conducted a study of the rivers for their eligibility as wild and scenic for the purposes of a general land management plan. Thus, no ANILCA violation occurred.” Sierra Club v. Lyons, J00-0009 CV (JKS), Slip Op. at 31 (March 30, 2001) (citations omitted). The court in Lyons also held that the Forest Service had violated its planning regulations by failing to evaluate roadless areas within the Tongass to determine whether any of these should be recommended for inclusion in the National Wilderness Preservation System. The court ordered the Forest Service to carry out such an evaluation, which is currently being performed. Supporters of greater development have asserted that section 1326(b) of ANILCA prohibits the Forest Service from studying national forest lands in Alaska for the purpose of considering additions to the Wild and Scenic River and Wilderness systems. Although the court in Lyons rejected this argument with respect to Wild and Scenic Rivers, these stakeholders contend that the government did not adequately explain the provisions of ANILCA to the court, thus leading the court to an erroneous conclusion. 
The Record of Decision states that the revised forest plan does not violate section 1326(b) because the plan is a general land management plan rather than a single purpose study. Stakeholders also disagree over the proper management of the Copper River Delta. The Copper River Delta is a highly productive ecosystem that may contain commercial oil and gas deposits. A House version of ANILCA would have designated the area as a National Wildlife Refuge. As ultimately enacted, section 501(b) retained the Copper River Delta under the Forest Service’s jurisdiction, but provided that the primary purpose of the area was to further fish and wildlife conservation. Some stakeholders have asserted that section 501(b) of ANILCA prohibits the Forest Service from recommending any areas within the Copper River Delta for designation as wilderness because such a designation would prohibit the Forest Service from undertaking certain actions to conserve fish and wildlife. These stakeholders have also asserted that proposed wilderness designation would hinder future oil and gas development near the town of Katalla. However, others who support preserving the forest’s lands and resources argued that some of the land management prescriptions in the draft plan would violate section 501(b) by failing to prohibit activities that conflict with the area’s purpose of conserving fish and wildlife, such as mining, road construction, and off-road vehicle use, among others. The revised plan does not recommend any areas for wilderness designation within the Copper River Delta. The Record of Decision states that each of the three management prescriptions applied to the Delta has fish and wildlife conservation as its primary goal. According to the ROD, each prescription provides for a different mix of multiple use activities consistent with the conservation of fish and wildlife and their habitat.
The Chugach National Forest in Alaska is the second largest of the 155 forests in the National Forest System and stretches across an immensely varied and scenic area. The Forest Service revised the Chugach National Forest Plan in accordance with planning regulations that require the Forest Service to solicit and respond to public concerns in (1) identifying issues to be considered in revising the plans, (2) developing alternative plans for evaluation, (3) selecting a draft preferred alternative plan, and (4) adopting a final revised plan. Forest Service officials actively solicited key public concerns about revising the Chugach forest plan by distributing frequent newsletters; maintaining a Web site to allow the public access to key documents; and holding over 100 public meetings on the plan, including ones to solicit potential alternatives, and later, to discuss its draft preferred alternative plan. In developing its draft revised plan, issued in September 2000, the Forest Service obtained and analyzed a vast amount of data on various potential uses of the lands and resources within the Chugach. These data and analyses provided information for decisions on difficult and sometimes controversial trade-offs among competing forest uses, such as timber harvesting, mineral mining, commercial fishing, recreation and tourism, forest vegetation, and fish and wildlife habitats. GAO's review showed that the data and analyses that the Forest Service used to make some decisions had limitations that were not disclosed in the draft revised plan.
Automating mail sorting with state-of-the-art technology is at the core of Postal Service initiatives to provide efficient, economically priced mail service. Mail addressed accurately and in the Postal Service’s standardized format is more compatible with these automated processes. However, the Manager of Address Management at Postal Service headquarters said that the single greatest barrier to the Postal Service’s effort to automate mail processing is “the poor quality of the address on the mail piece.” Mail addressed incorrectly or inadequately cannot be processed and delivered as quickly and efficiently as properly addressed mail. When mail is misaddressed, the Postal Service incurs added costs for sorting, transporting, delivering, and, in some cases, disposing of that mail. Of the 177 billion pieces of mail the Postal Service handled in 1994, nearly 5 billion pieces were addressed incorrectly. The Postal Service estimated that it incurred a cost of about $1.5 billion a year in compensating for poor address quality. However, the Postal Service had no information on the portion of this cost associated with a change of address. Because accurate addressing is essential for efficient mail service, the Postal Service and its predecessor, the Post Office Department, have provided address-correction services since 1924. These services, among other things, assist mailers in obtaining and using accurate, properly formatted addresses that are automation compatible. In 1986, the Postal Service implemented the NCOA program, which extends the Postal Service’s use of mail forwarding information to update business mailers’ address lists. The NCOA program is administered by the NCOA program office within the National Customer Support Center, which is located in Memphis, TN. The Center’s Director reports to the Manager, Address Management, under the Vice President for Operations Support. 
Before introducing this program, the Postal Service notified business mailers of changed addresses after their mail had been sent out and forwarded, returned, or discarded. The NCOA program, however, confronts this problem before the mail piece enters the mail stream by using contractors licensed by the Postal Service to provide business mailers updated change-of-address information. Section 412 of the Postal Reorganization Act of 1970 restricts the disclosure of postal customer information: “Except as specifically provided by law, no officer or employee of the Postal Service shall make available to the public by any means or for any purpose any mailing or other list of names or addresses (past or present) of postal patrons or other persons.” Subsequently, in 1974, Congress passed the Privacy Act (5 U.S.C. 552a) to more broadly protect individuals from the unauthorized use of records that federal agencies maintain about them and to give them a right of access to those records. Subsection (n) of this act also applies to address correction but, in contrast to related provisions of the 1970 Act, restricts certain uses of a name and address as follows: “An individual’s name and address may not be sold or rented by an agency unless such action is specifically authorized by law.” In 1991 and again in 1992, Congress held hearings addressing the privacy implications of the Postal Service’s address-correction services. These hearings focused on public concerns about the increasing volume of mail generated through the use of mailing lists, and raised questions about (1) the legality of certain Postal Service address-correction processes and (2) the adequacy of Postal Service oversight of the NCOA program to ensure compliance with the privacy provisions in federal law. A bill (H.R. 434) introduced in January 1995 would, among other provisions, allow any person notifying the Postal Service of a change of address to deny it permission to disclose such information. 
The objectives of our review were to determine (1) how the Postal Service collects, disseminates, and uses NCOA program data to provide mailers with accurate change-of-address information and (2) whether the Postal Service adequately oversees the release of NCOA data in accordance with privacy provisions of relevant federal laws. Because we were asked to review only the NCOA program, we did not review other Postal Service address-correction programs. To meet our first objective, we interviewed Postal Service headquarters officials in the Office of Address Management Systems, Operations Support Division, and officials and technical support staff at the National Customer Support Center and the NCOA program office in Memphis, TN. We also reviewed relevant records provided by these officials on the NCOA data gathering and dissemination process, including some correspondence from licensees on how they used NCOA data. To meet our second objective, we obtained and reviewed federal laws, legislative histories, congressional hearings, and other pertinent literature on privacy issues to better understand Congress’ concerns about U.S. citizens’ privacy rights and their relation to the name and address records the Postal Service uses to provide address-correction services. As we did in responding to objective one, we met with Postal Service representatives in Memphis to discuss and document how NCOA program oversight is maintained and what controls the Postal Service uses to ensure that the release of NCOA address information complies with applicable statutory constraints. Additionally, we reviewed files and other records of Postal Service NCOA program oversight activities; however, finding them to be incomplete, we relied more on information obtained from our interviews of Postal Service officials. Finally, we obtained written explanations from the Postal Service’s Chief Counsel for Ethics and Information Law regarding privacy issues pertinent to our second objective. 
To meet both objectives, we met with representatives of TRW Target Marketing Services, located in Allen, TX—which in 1994 was one of the NCOA program’s largest licensees in terms of volume of client address records processed. In this meeting, we obtained information and company views on how the NCOA program works, as well as on the Postal Service’s oversight of the program. On May 30, 1996, the Postmaster General provided written comments on a draft of this report, which are discussed beginning at page 20 and reprinted as appendix II. Our review was conducted from August 1994 through October 1995 in accordance with generally accepted government auditing standards. Since its implementation, the NCOA program has effectively reduced the volume of misaddressed mail processed through the Postal Service’s Computerized Forwarding System, according to the program manager. Before 1986, the volume of such mail was increasing annually, along with the overall volume of all mail. However, during the period in which the NCOA program has been operational, the volume of mail processed through the forwarding system has remained relatively constant, averaging about 2.4 billion pieces annually, while the total mail volume has continued to increase—by about 27 percent from late 1985 to 1995. The address-correction process begins when a postal customer submits a signed Change of Address Order (Postal Service Form 3575) to a local post office to have mail forwarded. (See app. I for a copy of the July 1995 form). Post office employees are to verify that the form is complete and then pass it on to one of 212 Computerized Forwarding System units located in the United States and Puerto Rico. These units are to convert the data to electronic form for use in the mail forwarding process and in the NCOA program. Using the completed change-of-address form, the Postal Service follows a policy of forwarding first-class mail to new addresses for 1 year. 
Although filing a change-of-address order is voluntary, customers who want their mail forwarded after moving must submit the form and must accept that the Postal Service will further disseminate the new addresses to commercial mailing list holders through the NCOA program. Each workday, the National Customer Support Center collects change-of-address data from the forwarding units. These data are then to be standardized into the Postal Service’s “preferred address with ZIP + 4 code” format and used to update a centralized database of change of address records—i.e., the master NCOA file. This file contains more than 110 million permanent change-of-address records. It covers the most recent 36-month period based on the move dates that customers report. Newly reported moves are to be added and those dated over 36 months are to be deleted biweekly. The computer programs used to maintain the master NCOA file and all data released from it are to be controlled by the National Customer Support Center. The Postal Service has licensed, for a fee paid by the licensees, the master NCOA file to a limited number of companies, which in turn use the file to correct addresses on their mailing lists and sell address-correction services to other businesses. As of December 1995, 24 companies were licensed, including some of the nation’s largest firms in the direct marketing and credit reporting industries, such as Donnelly Marketing, Inc., a leading direct mail marketing company; TRW Target Marketing Services, which operates primarily in the direct marketing industry; and Metromail Corporation, which primarily provides address services for direct marketing purposes. In 1995, 22 companies each paid $80,000, and the remaining 2 each paid $120,000, to the Postal Service under the licensing agreements. Each licensee is responsible for maintaining a complete and current NCOA file. 
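The biweekly maintenance cycle described above—adding newly reported moves and deleting records whose reported move dates fall outside the rolling 36-month window—can be sketched as a simple filter over move dates. This is a minimal illustration only; the record fields and the day-based cutoff approximation are assumptions, not the Postal Service's actual implementation.

```python
from datetime import date, timedelta

RETENTION_MONTHS = 36  # records dated over 36 months are deleted

def refresh_master_file(master, newly_reported, today):
    """Biweekly update of the master NCOA file: keep only records whose
    reported move date falls within the 36-month window, then append
    newly reported moves. Field names here are hypothetical."""
    # Approximate the 36-month window using 30-day months for illustration.
    cutoff = today - timedelta(days=RETENTION_MONTHS * 30)
    kept = [rec for rec in master if rec["move_date"] >= cutoff]
    return kept + list(newly_reported)

master = [
    {"name": "A. Mover", "old": "1 Elm St", "new": "2 Oak St", "move_date": date(1992, 1, 15)},
    {"name": "B. Stayer", "old": "3 Ash St", "new": "4 Fir St", "move_date": date(1995, 6, 1)},
]
updated = refresh_master_file(master, [], today=date(1995, 12, 1))
# only the record dated within the 36-month window survives
```

The same rolling-window logic applies regardless of how records are stored; the essential point is that deletion is driven by the move date the customer reports, not by when the record entered the file.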
Every 2 weeks, the NCOA program office within the National Customer Support Center is to provide licensees with a copy of the NCOA file update tapes, which on average contain about 1.1 to 1.5 million change-of-address records. Licensees are to use these tapes, which include address deletions, additions, and changes, to update the NCOA files they maintain. Licensees then use the NCOA files and address-matching logic designed into their computer software to update addresses on their customers’ mailing lists as well as their own mailing lists. Since all records in the NCOA file are to be in the Postal Service’s standardized address format, licensees must convert customers’ mailing lists, to the extent possible, into the same standardized address format before any matching occurs. This initial step may also identify and correct incomplete or inaccurate addresses on the licensee’s list. The resulting standardized lists are to be matched with the NCOA file by the licensees using address-matching computer software tested and approved by the NCOA program office, as required by the Postal Service. Each licensee’s software must meet the performance standards specified in the licensing agreement, and only approved software may be used to provide NCOA services. Under these procedures and conditions, each licensee is to update an address on a mailing list only when a name and address on that list matches a name and old address in the NCOA file. Licensees are to provide their customers the original address as it was presented on each customer’s list; the standardized address, including the correct ZIP + 4 code; and a new address where a match was found. When a match is found and a new address is disclosed, licensees may also disclose other information, such as whether the address is for a family, individual, or business and when the move became effective. 
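The core matching rule above—an address is updated only when a name and old address on the customer's list match a name and old address in the NCOA file, with unmatched entries passing through unchanged—can be sketched as follows. This is an illustrative sketch, not the licensees' actual matching software: it assumes both lists have already been converted to the standardized address format, and the field names are hypothetical.

```python
def ncoa_correct(mailing_list, ncoa_file):
    """Return corrected entries: update an address only when the name
    and old address on the customer's list match an NCOA record;
    entries without a match are returned unchanged."""
    # Index NCOA records by the match key described in the text:
    # (name, standardized old address).
    index = {(rec["name"], rec["old"]): rec for rec in ncoa_file}
    corrected = []
    for entry in mailing_list:
        match = index.get((entry["name"], entry["address"]))
        if match is not None:
            corrected.append({"name": entry["name"], "address": match["new"]})
        else:
            corrected.append(dict(entry))
    return corrected

ncoa = [{"name": "A. Mover", "old": "1 Elm St", "new": "2 Oak St"}]
customer_list = [
    {"name": "A. Mover", "address": "1 Elm St"},   # matches: address is updated
    {"name": "C. Other", "address": "9 Pine St"},  # no match: passes through unchanged
]
corrected = ncoa_correct(customer_list, ncoa)
```

The design point the sketch illustrates is that the NCOA file never contributes new names to a list: a new address is disclosed only for a name already present on the customer's list.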
Postal Service officials said they believe that the design and implementation of the NCOA program fully comply with applicable federal privacy laws. These officials said that they analyzed federal privacy laws and that releasing the NCOA file to licensees to provide address-correction services and licensees’ subsequent release of new addresses of postal customers—whose names and old addresses are already on a licensee’s or its customer’s lists—are lawful when done in accordance with the provisions and conditions of the licensing agreement. In a July 12, 1995, letter to us, the Postal Service’s Chief Counsel for Ethics and Information Law said that disclosure of the NCOA file to the licensee is supported by subsection (m)(1) of the Privacy Act. He said that because licensees act as representatives of the Postal Service when performing the list correction function, disclosure to the licensees does not constitute disclosure to the “public” within the meaning of section 412 of the Postal Reorganization Act of 1970. Furthermore, the Chief Counsel said that release of the information by a licensee to its customer for the limited purpose of list correction is a permissible routine use. Postal Service officials emphasized that the Postal Service does not provide names to be included on any lists, whether held by its licensees or by their customers. Postal Service officials said that the information provided to licensees, and by licensees to their customers, under the NCOA program is limited to the new addresses of persons whose names and addresses are already on the licensee’s or the customer’s list. 
Thus, Postal Service officials said they believe that the NCOA program does not violate the prohibition in the Privacy Act against the unauthorized disclosure of an individual’s “name and address.” Postal Service officials said they believe that the NCOA licensing agreement, with its conditions and performance provisions, helps to ensure that federal privacy guarantees are not compromised through the operation of the NCOA program. The licensing agreement requires licensees to provide mailing-list correction services according to standards set by the Postal Service, and specifies licensees’ obligations under the Privacy Act. Postal Service officials said they believe that the prescribed standards for licensee performance provide the Postal Service a basis for monitoring performance to ensure the quality of the service provided and compliance with the privacy restrictions of federal law. For example, the agreement sets minimum standards for the performance of the computer software that licensees use to provide the NCOA service. It also establishes requirements for maintaining a current NCOA file, for timeliness of the service, and for safeguarding the NCOA file and the lists that customers submit for the address-correction service. The licensing agreement specifies the Privacy Act restrictions that Postal Service officials said they believe apply to the release and use of NCOA address information. The agreement states that the NCOA file is a system of records, as defined in subsection (a)(5) of the Privacy Act, and is subject to its provisions. It states that if, at any time during the term of the agreement, the licensee fails to comply with or fulfill any of the terms or conditions of the agreement, the Postal Service may, at its discretion, terminate the agreement. The agreement prohibits licensees from disclosing or using the information in the NCOA file for any purpose other than correcting addresses on preexisting lists. 
Licensees are required to institute procedural and physical safeguards to ensure the security of the information in the NCOA file, as well as to maintain an accurate accounting of all disclosures of information in the file in accordance with subsection (c) of the Privacy Act. The agreement points out that the Postal Service may conduct impromptu audits to evaluate the potential for unauthorized access, disclosure, or misuse of the NCOA file, as well as to ensure that all performance requirements are met. The agreement also points out that the licensee and its employees are subject to the criminal penalties set out in subsection (i)(1) of the Privacy Act for any willful disclosure prohibited by the act. After the 1991 and 1992 congressional hearings mentioned previously, the Postal Service modified certain provisions of the licensing agreement in May 1994. The Service took steps to clarify the licensing agreement restrictions on the use of NCOA data, strengthen its oversight of licensee performance, and provide for the suspension of any licensee that fails to comply with the terms and conditions of the agreement. As modified, the agreement specifies certain practices that are prohibited, such as the creation of new-movers lists. The use of new-movers lists is reportedly an important and common practice in the mail marketing industry. New-movers lists can be created by updating an existing list of names and addresses using NCOA data or other sources of current-address data. Individuals on the existing list whose addresses have changed are considered to have “moved,” and the names and new addresses of these individuals can be used to create or supplement a new-movers list. These lists can be used by list holders for their marketing purposes—e.g., to offer products or services to anyone who moves into a new home, or they can be sold to others. “Licensees, as well as their customers, hold mailing lists which are their intellectual property. 
We believe that by availing themselves of the NCOA and other services, those lists are legally and properly updated and that our management of these services fully comports with all of the laws which you have listed, as well as any others which may exist. “The simple fact of the matter is that once a list holder has acquired a corrected address through address correction service, we do not believe it is the intent of the law, nor do we believe it is the role of the Postal Service to attempt to police how the private sector uses their own intellectual property for their business reasons.” This position, however, is contrary to the conclusion reached by the Committee on Government Operations in its November 24, 1992, report (House Report 102-1067) following the 1992 congressional hearings. The Committee found that the NCOA program contravenes section 412 of the Postal Reorganization Act and subsection (n) of the Privacy Act. Among its other reasons for this conclusion, the Committee focused on the creation and sale of NCOA-linked new-mover lists by licensees as violating the restrictions imposed by the Privacy Act. “The sole purpose of this license and of the standardized name and address matching services is to provide a mailing list correction service for lists that will be used for the preparation of mailings. Information obtained or derived from the NCOA file or service shall NOT be used by the Licensee, either on its own behalf or knowingly for its customers, for the purpose of creating or maintaining “new-movers” lists. 
“As with the NCOA file itself, no proprietary Licensee list, which contains both old and corresponding new address records, if it is updated by use of the NCOA file, shall be rented or sold or otherwise provided, in whole or in part, to Licensee customers or anyone else.” Postal Service officials said that the above prohibition was not new but, rather, that the above language clarified restrictions that were already stated in broader terms in the original licensing agreement. However, the statement that the prohibition was not new appears to be contrary to the testimony quoted previously from the May 1992 congressional hearings. Further, in the May 1994 modification, the Postal Service imposed a new requirement to limit the use of NCOA-linked data by the customers of licensees. However, in contrast to the modification provision applicable to licensees, this new requirement does not state explicitly that the prohibition on the use of NCOA-linked data to create new-movers lists applies to the licensees’ customers. Specifically, the Postal Service added a requirement to the licensing agreement that, at least once each year, licensees are to have customers sign an “NCOA Processing Acknowledgement Form.” By signing the form, customers acknowledge their understanding that “the sole purpose of the NCOA service is to provide a mailing list correction service for lists that will be used for the preparation of mailings.” However, the form is not clear as to any specific prohibitions on the use of NCOA services by licensees’ customers because the form does not explicitly state that NCOA data are not to be used to create or maintain new-movers lists. Postal Service officials said they continue to believe that neither the Privacy Act nor the Postal Reorganization Act of 1970 limits in any way licensees’ and customers’ use of address data that have been properly updated or corrected through the NCOA service. 
The Manager, Address Management said that the change to the licensing agreement cited above was made as a “good business practice” to address concerns raised by Congress and the public in the 1992 congressional hearings. We do not question the Postal Service’s view that the disclosure of NCOA data to licensees for the specific and limited purpose of address list correction is permitted under the Privacy Act and the 1970 Act. However, we do not agree that the Privacy Act allows licensees to use NCOA-linked data to create new-movers lists, which may then be sold to their customers. As the Postal Service acknowledges, under the Privacy Act (5 U.S.C. 552a (m)(1)), the NCOA licensees operate on behalf of the Postal Service. As such, they are subject to provisions of the Privacy Act that allow an agency record to be disclosed provided that it is used for a purpose compatible with the purpose for which it was collected. Like the Postal Service, licensees may use the information disclosed only for the limited purpose of address-list correction, which is the routine use and purpose for which the Postal Service collected such information. Thus, in our view, use of NCOA-linked data by a licensee for the purpose of creating a new-movers list would not be consistent with the limitations imposed by the Privacy Act. In addition to the above changes, the May 1994 modification called for increasing the frequency of Postal Service audits that licensees must pass, from one to at least three each contract year. The modification further added the alternative of suspending a licensee for failure to comply with the terms and conditions of the agreement pending verification that the deficiencies have been corrected. Previously, the agreement provided only for the outright termination—at the Service’s discretion—of a licensee who failed to comply with provisions of the agreement. 
The Postal Service’s oversight fell short of ensuring that licensees met the provisions and conditions of the licensing agreement and, thus, did not ensure that the NCOA program was operating in compliance with federal privacy laws. The Postal Service’s oversight procedures and processes have been weak with regard to (1) “seeding” the NCOA files with fictitious records to discourage unauthorized name and address disclosure by licensees; (2) auditing the performance of licensees’ NCOA software and conducting impromptu site visits to monitor whether licensees are complying with various licensing agreement requirements; (3) reviewing licensees’ proposed advertisements for NCOA services they sell; and (4) investigating NCOA program-related complaints. Our review of files and discussion of the seeding process with NCOA program officials disclosed certain management practices and inattention to procedure that, we believe, have limited the value of seeding as a control to guard against the improper disclosure of NCOA data. Seeding is commonly used in the mailing industry to control proprietary records. The Postal Service periodically plants “seed” records when updating licensees’ NCOA files. A seed record is any nonmatch data placed in the NCOA file by the Postal Service and so designed that it will be released to mailing list holders only through improper use of the NCOA file. Licensees are aware that the NCOA files are seeded by the Postal Service, but according to NCOA program officials, specific seeding data are guarded against disclosure to licensees and the public. Postal Service officials said they believe that, if a licensee disclosed information from the NCOA file by any means other than through the approved computer software, fictitious seed address records would also be disclosed. Mail sent to seed record addresses would then be retrieved by the NCOA program office, alerting it to a possible improper disclosure of NCOA information. 
The NCOA program office would then trace the seed record back to the licensee who released it, and the Postal Service would take disciplinary and/or corrective action. The NCOA program manager reported that he was not aware that any seed mail had ever been received. Program officials told us that they had seeded NCOA files since the program began but had not retained historical records of seeding for the complete period. Available documentation of seeding activities began with the NCOA file update in July 1990. Our review of this documentation and information provided by program officials disclosed several weaknesses in the seeding process and documentation of the process as an NCOA program control measure. From July 1993 to April 1994, the NCOA files contained no seed records because the program office neglected to replace those records when they became 36 months old and were deleted. Seed records loaded in July 1990 were deleted in July 1993, some 36 months later. Seed records were not replaced in licensees’ NCOA files until April 1994. Program officials were not aware of this gap in seed coverage until our review. They said the gap was a “technical” error that was not particularly serious because the main value of seeding as a control comes from the licensees’ awareness that the Postal Service seeds the NCOA files. Program officials said they did not believe that licensees were aware that the gap in seed coverage had occurred. Program officials told us that before November 1994, the program office used only seed records unique to each licensee. All name and address updates to the licensees’ NCOA files by the Postal Service were identical, except for seed record names and addresses unique to each licensee. Program officials said they believed that this feature would enable them to trace any mail received at seed addresses to the licensee who released the record. 
However, it is possible that seed records could be identified and neutralized by two or more licensees that agree to compare their NCOA files. After we discussed our concerns with NCOA program officials, in November 1994, the program office began using some “common” seed records. Under this new feature, identical seed records are introduced into the NCOA files of all licensees, along with some seed records that are unique to each licensee. Although this procedure may help to identify any improper disclosure of addresses by licensees, it will not allow the Postal Service to identify which licensee was responsible for the impropriety if licensees compare their NCOA files and neutralize the unique seed records, because every licensee has access to the common seed records. The Postal Service process for seeding, identifying, and responding to mail that might be sent to a seed address was informal. There were no written procedures on the seeding process, the process for retrieving mail sent to seed addresses, or the process for investigating mail sent to seed addresses and then reporting the results of the investigation internally. The National Customer Support Center manager stated that the informal mail retrieval process was tested in 1990 and again in 1992. He said that the test results showed that this process worked in that test mail sent to the seed addresses through the regular mail stream was properly forwarded back to the NCOA program office. However, the manager told us that there was no record of these tests and that the results were not reported within the Postal Service. He said that procedures were revised in January 1995 to specifically cover what postal field personnel are required to do when they identify mail to be delivered to seed addresses. 
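The unique-plus-common seeding scheme can be illustrated with a toy attribution function. All of the names and addresses below are fabricated (real seeding data are, as noted, guarded from disclosure), and the data layout is an assumption for illustration only.

```python
def build_update(base_records, common_seeds, unique_seeds, licensee):
    """Compose one licensee's biweekly update tape: real records,
    common seeds shared by all licensees, and seeds unique to this one."""
    return base_records + common_seeds + unique_seeds[licensee]

def trace_leak(leaked_address, common_seeds, unique_seeds):
    """Attribute mail received at a seed address.  A unique seed
    identifies the responsible licensee; a common seed shows only
    that *some* licensee disclosed improperly."""
    for licensee, seeds in unique_seeds.items():
        if leaked_address in {s["address"] for s in seeds}:
            return licensee
    if leaked_address in {s["address"] for s in common_seeds}:
        return "unknown licensee (common seed)"
    return None

common = [{"name": "Q FAKE", "address": "1 NOWHERE LN"}]
unique = {"Licensee A": [{"name": "Z DECOY", "address": "9 PHANTOM RD"}],
          "Licensee B": [{"name": "Y DECOY", "address": "7 GHOST CT"}]}
print(trace_leak("9 PHANTOM RD", common, unique))  # Licensee A
print(trace_leak("1 NOWHERE LN", common, unique))  # common seed: not attributable
```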
On the basis of our examination of poorly maintained audit files and subsequent discussions with NCOA program officials, we were unable to (1) confirm that we had identified all Postal Service audits of licensees or (2) fully assess the Postal Service’s management of audits. However, on the basis of our review of the records available and on interviews with program officials and staff, we question whether the licensee audits, as administered by the program office, provided a meaningful oversight measure of compliance with the applicable privacy provisions of federal law. During most of the program’s history, unannounced on-site audits were to be conducted annually at the licensees’ facilities. These audits were to include tests of licensees’ NCOA software accuracy and verification of licensees’ compliance with other licensing agreement provisions, such as the provision to prevent unauthorized access to the NCOA file. Under the licensing agreement, the Postal Service allows a licensee that fails an audit 30 days to correct the problem and be retested. This period is to begin when the Postal Service’s contracting officer notifies the licensee of the audit results. In 1992, the program office introduced an “automated” audit administered through a test tape mailed to each licensee. According to the program manager, the automated audit focused on a more comprehensive assessment of the accuracy of the licensees’ NCOA software. The audits are designed to detect both the failure of licensees’ NCOA software to make appropriate matches and instances of incorrect matches. Matching of names and addresses results in the release of new addresses to the mailing list holders and, eventually, into the mail stream. Incorrect matches, therefore, are more serious because they can result in the release of new addresses in violation of federal privacy laws. The Postal Service has set a high standard for the performance of licensees’ address-matching software. 
The licensing agreement specifies that a licensee’s address-matching software must achieve a 99-percent matching accuracy rate. That is, the software may produce no more than one error per 100 name and address matches as analyzed and scored by the program office. In May 1994, the Postal Service significantly modified the licensing agreement to, among other things, strengthen the Service’s oversight of licensees through audits. NCOA program officials said that, before this modification, licensees were audited at least once a year and that the only option available to the Postal Service under the licensing agreement was to terminate the license of a licensee that failed successive audits. The modification requires licensees to pass at least three audits each contract year and provides the option of either suspending or terminating licensees that fail two consecutive process audits or that fail to comply with other terms or conditions of the licensing agreement. Further, the modification requires the Postal Service to terminate the license of any licensee that fails three consecutive audits. Since 1992, the NCOA program office has maintained a separate file on each licensee containing various items of correspondence, internal memorandums, notes, and other information relating to process audits performed. We reviewed the files for details of process audits conducted during 1992 and 1993. The files we reviewed, however, generally did not contain complete records of the audits performed, audit results, or resolution of audit findings. We were able to ascertain from the files, however, that in 1992 at least 65 automated audits were made of the 25 firms licensed at that time to provide NCOA services. All but one licensee failed the initial audit. Seven licensees passed the first follow-up audit. Another seven licensees failed the first follow-up audit but passed a second follow-up audit. However, 10 licensees failed all automated process audits performed that year. 
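The 99-percent standard amounts to no more than one scored error per 100 name and address matches. A simplified sketch of that scoring arithmetic follows; how the program office actually analyzes and scores a test tape is not detailed here, so the pass/fail logic below is an assumption for illustration.

```python
def audit_score(results):
    """Score an automated audit test tape.

    `results` is a list of (expected, produced) pairs from the
    licensee's matching software; both missed matches and incorrect
    matches count as errors against the 99-percent standard.
    """
    errors = sum(1 for expected, produced in results if expected != produced)
    accuracy = 1 - errors / len(results)
    return accuracy, accuracy >= 0.99

# 200 test cases with 3 errors -> 98.5 percent: the licensee fails.
results = [("A", "A")] * 197 + [("A", None)] * 2 + [("A", "B")]
accuracy, passed = audit_score(results)
print(f"{accuracy:.3f}", "pass" if passed else "fail")  # 0.985 fail
```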
The Postal Service did not terminate the license of any of the 10 licensees who failed successive process audits during 1992. In fact, these licensees continued to provide NCOA services with address-matching software that had failed repeatedly to meet the performance standards for accuracy required by the licensing agreement. For example, four licensees failed an initial audit in May 1992, and then failed two follow-up audits, before finally passing an audit conducted in March 1993. However, these same licensees were allowed to continue providing NCOA services during the 10-month period in which their software failed to meet the Service’s minimum standard for accuracy. The NCOA program manager explained that the pattern of repeated audit failures resulted from the increased thoroughness, coverage, and focus on software accuracy of the new automated process audit as compared with earlier process audits. He acknowledged that program oversight had not been carried out as strictly as it could have been because program officials did not want to terminate licensees from the program, which was the only option available under the licensing agreement at that time. The program manager believed that the Postal Service correctly opted to work with the licensees to resolve the software deficiencies identified in the 1992 audits. He indicated that, among other things, most of the software performance errors involved failures to make any matches rather than making inappropriate matches. He also said that the program office staff responded promptly to ensure that licensees corrected software weaknesses identified in the audits, which may have affected compliance with federal privacy laws. During 1993, the Postal Service audited the 10 licensees who failed all audits conducted during 1992. Each of these 10 licensees passed the 1993 audit. 
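The audit-failure consequences introduced by the May 1994 modification reduce to a consecutive-failure rule: two consecutive failures permit suspension or termination, and three consecutive failures require termination. A hypothetical sketch of that rule (the decision labels are illustrative):

```python
def disposition(audit_history):
    """Apply the modified agreement's rule to a licensee's audit
    history (True = pass, False = fail), most recent result last."""
    streak = 0
    for passed in reversed(audit_history):
        if passed:
            break
        streak += 1
    if streak >= 3:
        return "terminate"               # mandatory under the modification
    if streak == 2:
        return "suspend or terminate"    # at the Postal Service's option
    return "in good standing"

print(disposition([True, False, False, False]))  # terminate
print(disposition([False, False, True]))         # in good standing
```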
The NCOA program manager explained that other licensees were not audited during 1993 because, starting in about March of that year, the entire master NCOA file was redesigned, and licensees had to change their software to accommodate this redesign. Further, the NCOA program office had a contract with one of its NCOA licensees for computer support to build and maintain the master NCOA file. The program office brought this function in-house in October 1993. Consequently, according to the program manager, all program staff who would have done the licensee audits were instead used to support this transition and maintain the NCOA file. We were unable to completely evaluate the Postal Service’s oversight of licensees’ advertising because the NCOA program office did not have historical records of any advertisements either submitted or reviewed. However, the information that we were able to obtain indicated that the program office was not effectively overseeing licensees’ advertising activities. Specifically, we found that although at least two licensees had advertised NCOA-linked new-movers lists and had submitted these advertisements to the Postal Service for review, the Postal Service had subsequently taken no action to disapprove the advertisements. The May 1994 modification stated that a licensee’s advertising will be disapproved if it includes any reference to NCOA or the Postal Service. The licensing agreement requires licensees to submit all proposed advertising and methods of selling NCOA program-related services to the NCOA program office for review and approval. The purpose of this requirement is to ensure that licensees’ customers are not misled by the advertising or sales methods used, as well as to specifically ensure that the relationship between the Postal Service and the licensee is correctly represented. 
The licensing agreement states that the Postal Service will provide the licensee with a written response on the acceptability of proposed advertising within 20 working days of receipt of the material. However, if the licensee does not receive a written response within this time, the agreement states that the licensee may consider the proposed advertisements or sales methods approved for use. The program manager told us that licensees had regularly submitted their proposed NCOA-related advertisements to the program office for review. However, our review of licensee contract files and discussions with a licensee disclosed that at least two licensees had regularly submitted advertising materials for NCOA-linked new-movers lists for Postal Service review and approval and that the program office had not responded. For example, a May 19, 1994, letter from a licensee stated that it had regularly submitted for review copies of its advertisements promoting NCOA-linked new-movers lists since inception of the NCOA program but that the Postal Service had never responded. As noted earlier, Postal Service officials said that the change to the licensing agreement that specifically prohibited the creation of NCOA-linked new-movers lists was to make more explicit the existing restrictions on uses of NCOA data. Therefore, even before the licensing agreement was modified in 1994, the exercise of effective oversight should have dictated that the Service inform licensees who proposed advertisements promoting NCOA-linked new-movers lists that such advertisements were not permitted by the licensing agreement. However, the Postal Service failed to respond to these proposed advertisements. In discussing this issue with the program manager, we were told that, notwithstanding the advertisements submitted for review, the Postal Service had not fully understood how licensees were using the NCOA file—i.e., to create NCOA-linked new-movers lists. 
When it became clear that licensees were creating such lists, the licensing agreement was modified to specifically (1) preclude licensees from creating and maintaining new-movers lists for either their own use or the use of their customers and (2) state that a licensee’s advertising will be disapproved if it includes any reference to NCOA or the Postal Service anywhere in any text or graphics that include a reference to nonmailing products and services, such as new-movers lists. Another oversight or control mechanism over licensees that the Postal Service reportedly uses is the investigation of NCOA-related complaints emanating from the public, the licensees themselves, or their customers. However, because the program office had no records of complaints received or related investigations, we could not assess the effectiveness of the complaint investigation process as a control mechanism. The NCOA program office’s complaint investigation process was informal and lacked structure. The office could provide us with no record of complaints received. Further, we found no evidence of a formal process for logging complaints, investigating complaints, and reporting the results of investigations internally or to complainants. According to the program manager, a few complaints had been received, which were mainly related to customer misunderstandings about the NCOA-related services that licensees provide. In establishing the NCOA program, the Postal Service took a positive step toward dealing with the inefficiencies of processing misaddressed mail. In setting up and using a nationwide database of postal customer names and addresses to provide this address correction service, the Postal Service has tried, primarily through changes to licensing agreements, to create controls that help ensure that the release and use of NCOA information complies with the provisions of federal privacy laws. 
The Postal Service said it believes that it has met its legal responsibilities through program design and oversight. However, at the time of our review, the NCOA program was operating without clearly delineated procedures and without sufficient management attention to ensure that the program was operating in compliance with the privacy provisions of federal laws. Specifically, the Postal Service lacked adequate written procedures and oversight processes regarding

- seeding the NCOA files with fictitious records to discourage unauthorized name and address disclosure by licensees;
- obtaining and reviewing, in a timely manner, licensees’ proposed advertisements that mention the NCOA program, taking prompt action to disapprove inappropriate advertisements, and documenting the results; and
- documenting all NCOA-related complaints received and actions taken to address the complaints.

The NCOA program office’s absence of written procedures and inattention to processes allowed seeding control features to lapse for a 9-month period before the condition was discovered and corrected. Also, several licensees had advertised NCOA-linked new-movers lists and submitted the advertisements to the Postal Service for review, yet the Postal Service had taken no action to disapprove the advertisements. Further, with regard to complaints, the NCOA program office had no records of complaints received or related investigations, although officials said that complaints had been received. The NCOA program office had not implemented and enforced some provisions of the licensing agreement, including those requiring a minimum number of licensee audits each year and the termination of licensees that failed to maintain address-matching software that meets the performance standards prescribed in the license agreements. Ten licensees failed successive audits of their software and continued to provide NCOA services in 1992. 
When licensees’ software does not perform according to the standards, the Postal Service cannot be sure that the NCOA program is operating in compliance with federal privacy laws. Finally, we found that the Postal Service had not clearly communicated, through licensees, to licensees’ customers, the restrictions on the use of NCOA data to create or maintain new-movers lists. That is, the Postal Service had not explicitly stated in the acknowledgment form—to be signed by customers of licensees—that NCOA data are not to be used to create or maintain new-movers lists, a restriction that the Service has communicated to licensees. To strengthen oversight of the NCOA program, we recommend that the Postmaster General require the NCOA program office to

- develop and implement written oversight procedures, which should include (1) the responsibilities and timetables for using seed records to help verify that licensees release new addresses only as a result of accurate name and address matching; (2) requirements to obtain and review licensees’ NCOA-related proposed advertisements, document the review, and notify licensees of the results within the time period prescribed in the licensing agreement; and (3) requirements for systematically recording all NCOA-related complaints received, including actions taken to resolve complaints; and

- enforce all provisions of the licensing agreement, including (1) conducting at least the prescribed minimum number of licensee audits, currently three per contract year; and (2) suspending or terminating, as appropriate, licensees that fail two consecutive audits or that are determined to be in noncompliance with other terms or conditions of the licensing agreement. (As provided in the agreement, licensees that fail three consecutive audits should be terminated.)
We also recommend that the Postmaster General further restrict the use of NCOA-linked data to create or maintain new-movers lists by explicitly stating this restriction on the acknowledgment form that is signed by customers of NCOA licensees. In a May 30, 1996, letter (see app. I), the Postmaster General commented on a draft of this report. He said that the Postal Service had implemented our recommendations to develop written oversight procedures for conducting NCOA seeding operations, reviewing and responding to NCOA-related advertisements, and investigating complaints about the program. He also said that the Postal Service was pleased that we did not question the lawfulness of licensing NCOA data for the purpose of address-list correction. It is important to note that, while we did not question the legality of the Postal Service’s arrangements with licensees to provide address-list correction services, we disagree with its view that the Privacy Act allows licensees to use NCOA-linked data to create new-movers lists. The Postal Service did not adopt our recommendation that restrictions on the use of NCOA-linked data to create or maintain new-movers lists be included in the acknowledgment form that is to be signed by NCOA licensees’ customers. The Postal Service primarily provided three reasons for its decision not to adopt our recommendation, which are summarized below along with our evaluation. First, the Postal Service said it does not believe that a restriction on the creation and maintenance of new-movers lists from NCOA-derived data is required by privacy law. For the reasons stated earlier in this report, we continue to believe that use of NCOA-linked data by a licensee for creating a new-movers list would not be consistent with the limitations imposed by the Privacy Act. 
The Postal Service did not provide any new evidence or rationale for its view that the Privacy Act permits licensees to use NCOA-derived data for purposes other than address-list correction, which is the routine use or purpose for which the Postal Service collected the information. Second, the Postal Service said that effective enforcement of such a restriction on customers of licensees would be impracticable. The Postal Service said that the Privacy Act does not govern the private sector and provides no basis for requiring the Service to control what the private sector does with address corrections legitimately obtained from the Postal Service. The Postal Service said it believes that it would be inappropriate to place limitations on licensees’ customers, with whom the Service has no formal relationship. Regarding this second point, we recognize that enforcing the restrictions on third parties, i.e., licensees’ customers, might be difficult because the Postal Service has no contractual relationship with them. However, we do not believe that the potential difficulty of enforcing such restrictions through arrangements made with licensees means that the Postal Service should not clearly communicate what those restrictions are. NCOA licensees operate on behalf of the Postal Service and are subject to the same provisions of the Privacy Act as the Service, which allow an agency record to be disclosed provided the record is used for a purpose compatible with that for which it was collected. These records were collected by the Postal Service for address-list corrections, not to create new-movers lists. As a practical matter, it appears that the Postal Service could, at a minimum, communicate through licensees to the licensees’ customers any restrictions on the use of NCOA data to create or maintain new-movers lists.
Acting on behalf of the Postal Service, licensees could help ensure compliance with the restrictions by explaining to their customers the limitations on the release and use of NCOA data under the Privacy Act. Unless the Postal Service implements and attempts to enforce these limitations, it cannot ensure that use of NCOA-derived data is limited to the purpose for which it was gathered. Third, the Service said that we misinterpreted the purpose of the acknowledgment form when we said that it was “to limit the use of NCOA-linked data by the customers of licensees.” The Service said that the purpose of the form is to ensure that lists presented to licensees for correction are really mailing lists. The acknowledgment form states that the sole purpose of the NCOA service is to provide a mailing-list correction service for lists that will be used to prepare mailings. We believe that this language does limit the use of NCOA-linked data. However, the Postal Service had not explicitly stated in the acknowledgment form the specific restriction that it communicated to licensees, namely, that NCOA data are not to be used to create or maintain new-movers lists. We are recommending that the Postmaster General explicitly state this restriction on the acknowledgment form. Also, the Postal Service said that it has never acknowledged that the creation of new-movers lists by customers is prohibited. We clarified in our report that the Postal Service had communicated the prohibition on the creation of new-movers lists to licensees, but not to their customers. We are sending copies of this report to the Ranking Minority Member of this Subcommittee, the Postmaster General, and other interested parties. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix III. If you have any questions about the report, please call me at (202) 512-8387. Sherrill H. Johnson, Core Group Leader; Robert T. Griffis, Evaluator-in-Charge.
Pursuant to a congressional request, GAO examined the U.S. Postal Service's oversight of the National Change of Address (NCOA) program, focusing on: (1) how the Postal Service collects, disseminates, and uses NCOA data; and (2) whether the Postal Service adequately oversees the release of NCOA data in accordance with privacy laws. GAO found that: (1) the Postal Service uses 24 licensees to collect and disseminate address-correction information; (2) the licensees provide address services to other private firms and organizations in accordance with standard licensing agreements; (3) the Postal Service has been unable to prevent, detect, or correct potential breaches in the licensing agreement; (4) the Postal Service audits the software that licensees use to match their mailing lists with NCOA files, reviews NCOA advertisements that licensees propose to use, and investigates complaints concerning the NCOA program; (5) Postal Service officials believe that the NCOA licensing agreement helps to ensure that federal privacy guarantees are not compromised through the operation of the NCOA program; (6) the Postal Service has not expressed a clear and consistent position regarding the use of NCOA data to create new-movers lists; (7) the Postal Service failed to terminate the license of any licensee that failed successive process audits in 1992; (8) the NCOA program office is not terminating licensees that fail to maintain address-matching software or enforcing the performance standards prescribed in the license agreements; and (9) the Postal Service needs to enforce these limitations to ensure that the use of NCOA-derived data is limited to the purpose for which it was intended.
The F-35 program is a joint, multinational acquisition to develop and field an affordable, highly common family of stealthy, next-generation strike fighter aircraft for the United States Air Force, Marine Corps, Navy, and eight international partners. The JSF is a single-seat, single-engine aircraft incorporating low-observable (stealth) technologies, defensive avionics, advanced sensor fusion, internal and external weapons, and advanced prognostic maintenance capability. There are three variants. The F-35A conventional takeoff and landing (CTOL) variant will provide air-to-ground attack capabilities to replace the Air Force’s F-16 Fighting Falcon and the A-10 Thunderbolt II aircraft, and will complement the F-22A Raptor. The F-35B short takeoff and vertical landing (STOVL) aircraft will be a multi-role strike fighter to replace the Marine Corps’ F/A-18C/D Hornet and AV-8B Harrier aircraft. The F-35C carrier-suitable variant (CV) will provide the Navy and Marine Corps a multi-role, stealthy strike aircraft to complement the F/A-18E/F Super Hornet. The JSF is DOD’s largest cooperative program. Our international partners are providing about $5.1 billion toward development, and foreign firms are part of the industrial base producing aircraft. DOD’s funding requirements for the JSF assume economic benefits from partner purchases in reducing unit costs for U.S. aircraft. JSF concept development began in November 1996 with a 5-year competition between contractors to determine the most capable and affordable preliminary aircraft design. Lockheed Martin won the competition and the JSF program entered system development and demonstration in October 2001. Pratt and Whitney is the primary engine manufacturer, while General Electric has been developing a potential second source for the engine. System integration efforts and a preliminary design review then revealed significant airframe weight problems impacting key performance requirements.
In March 2004, DOD rebaselined the program, adding time and money for development and delaying key milestones. The Navy and Marine Corps also reduced their planned procurement by 409 jets, reducing the total U.S. buy to the current quantity of 2,457. The program was again rebaselined in March 2007 to reflect additional cost increases and schedule slips, and the procurement period was extended by 7 years, to 2034, with reductions in annual quantities. Because of continuing problems and poor outcomes, the Secretary of Defense announced another comprehensive restructuring of the JSF program in February 2010. The restructuring followed an extensive department-wide review initiated in 2009 and considered the findings and recommendations from three independent groups chartered to assess the program: the Joint Estimating Team (JET) evaluated program execution and resource requirements; the Independent Manufacturing Review Team (IMRT) assessed contractor capabilities and plans for ramping up and sustaining production at maximum rates; and the Joint Assessment Team (JAT) reviewed engine costs and affordability initiatives. Key restructuring actions included adding $2.8 billion for development, extending flight testing by 13 months, adding flight test resources (one new test jet and the use of 3 production jets), reducing near-term procurement by 122 aircraft, and reviewing the military services’ capability need dates. The Under Secretary of Defense for Acquisition, Technology and Logistics stated that the department-wide review would continue under new program management and cited 2010 as a critical year for assessing progress against the new plans, with the expected delivery of all test aircraft, the completion of hundreds of test flights, and other key milestones. We supported these actions in our March 2010 report and subsequent testimonies. We noted the likelihood of additional cost growth and schedule extensions as the restructuring continues.
In March 2010, the Department declared that the program had breached the critical cost growth statutory thresholds. The Department subsequently certified to Congress in June 2010 that the JSF program should continue. Table 1 summarizes the evolution of JSF cost and schedule estimates at key junctures in its acquisition history through the current Nunn-McCurdy certification. Since then, in January 2011, the Secretary of Defense announced additional development cost increases and further changes resulting from the ongoing restructuring, but the Department has not yet established a new approved acquisition program baseline. Ongoing JSF restructuring continues to add more cost and time for developing, testing, and delivering aircraft to the warfighter. These actions, if effectively implemented, should result in more predictable and achievable program outcomes, but restructuring comes with consequences: higher up-front development costs, fewer aircraft received in the near term, training delays, and extended times for testing and delivering the capabilities required by the warfighter. Affordability for the U.S. and its allies is challenged because unit prices are about double what they were at program start and because new forecasts indicate that the aircraft may cost substantially more to operate and maintain over its life cycle than the legacy aircraft it replaces. Going forward, the program requires unprecedented levels of funding in a period of more austere defense budgets. Defense leaders stated that the JSF program lost its focus on affordability and that restoring that focus is paramount to improving program outcomes. Defense leadership continued to restructure the JSF program following the Nunn-McCurdy certification. In January 2011, the Secretary of Defense directed additional changes, stemming in large part from the results of a comprehensive technical baseline review under new government and contractor management.
Key program changes (1) added $4.6 billion to the development program through completion, for a total development estimate of $56.4 billion (an increase of 26 percent over the current baseline and 64 percent over the original baseline at program start); (2) extended the development test period to 2016 (a 4-year slip from the current baseline); (3) reduced near-term procurement quantities by 124 aircraft, in addition to the 122 aircraft cut announced in February 2010; and (4) lowered the annual rate of increase planned for future production. Because of the lingering technical issues on the STOVL, the most complex variant, the Secretary decoupled STOVL flight tests from the combined test plan and scaled back STOVL production to only 3 aircraft in fiscal year 2011 and to 6 per year for fiscal years 2012 and 2013. This represents a total cut of 37 STOVL aircraft during this 3-year period compared with the fiscal year 2011 budget plans. In announcing these changes, the Secretary also noted the STOVL’s significant testing problems, which include lift fan engine deficiencies and poor durability test results that could require redesigns and add weight to the aircraft’s structure and propulsion system. While the Secretary decoupled the STOVL from the flight test program, the STOVL was not further separated from the rest of the JSF program for management and reporting activities. It remains a part of the combined JSF program for milestone decisions and for cost, schedule, and performance reporting. Resolving STOVL problems and moving forward at an affordable cost is essential to the Marine Corps’ future plans, which depend upon acquiring the STOVL in quantity to directly accompany, protect, and provide firepower to its ground expeditionary forces. The recently submitted fiscal year 2012 defense budget reflects the financial impacts of restructuring actions through 2016.
Compared to estimates in the fiscal year 2010 future years defense program for the same 5-year period, the Department increased development funding by $7.7 billion and decreased procurement funding by $8.4 billion, reflecting plans to buy fewer aircraft. Table 2 summarizes the revised development and procurement funding requirements and annual quantities following the Secretary’s reductions. Even after decreasing near-term quantities and lowering the ramp rate, JSF procurement still increases rapidly: annual funding levels more than double and quantities more than triple during this period. These numbers do not include the additional orders expected from the international partners. Additional changes to cost and schedule are likely as restructuring continues. At the time of this report, the Secretary had not yet granted new milestone B approval or approved a new acquisition program baseline. Originally planned for November 2010, the new acquisition program baseline is now expected by program officials in late 2011. Also, cost analysts are still revising procurement funding requirements for the period from fiscal year 2017 through the completion of procurement in 2035. Accordingly, the net effect of reducing near-term procurement quantities and deferring these aircraft to future years is uncertain and depends upon the assumptions made about future unit prices, annual quantities, and inflation. We expect total procurement costs to be somewhat higher than the estimate submitted in the Nunn-McCurdy certification (refer to table 1). Reduced quantities and the use of production aircraft in testing will also limit near-term training activities and delay deliveries of new capabilities to the warfighters. Officials now forecast that the completion of system development, the completion of initial operational testing, and the full-rate production decision will extend into 2018.
This represents slips of about 5 years in these important milestones against the current program baseline approved in 2007. The military services are evaluating the impacts of restructuring on their initial operational capability (IOC) milestones, the critical need dates by which the warfighter must have in place the first increment of operational forces available for combat. In response to the initial set of restructuring actions, the Air Force and Navy tentatively extended these milestones to 2016, but the Marine Corps adjusted its IOC date by only 9 months, to December 2012. It is all but certain that the Marine Corps will delay its IOC date in the wake of the Secretary’s STOVL actions, and the Air Force and Navy dates may also be adjusted to reflect the newest developments. Affordability, both in terms of the investment costs to acquire the JSF and the continuing costs to operate and maintain it over the life cycle, is at risk. A key tenet of the JSF program from its inception has been to deliver an affordable, highly common fifth-generation aircraft that could be acquired by the warfighters in large numbers. Rising aircraft prices erode buying power and make it difficult for the U.S. and its allies to buy as many aircraft as planned, and quantity reductions could drive additional price increases for future aircraft. Further, while the Department is still refining cost projections for operating and supporting future JSF fleets, cost forecasts have increased as the program matures and more data become available. Current JSF life-cycle cost estimates are considerably higher than those of the legacy aircraft the JSF will replace; this has major implications for future demands on military operating and support budgets and for plans to recapitalize fighter forces. Defense leadership stated that the JSF program lost focus on affordability and that restoring and maintaining that focus is paramount to improving program outcomes.
In light of continued cost growth, the program places unprecedented demands for funding on the defense budget: an annual average of almost $11 billion for the next two decades. (This and other data in this paragraph reflect the fiscal year 2011 budget submission.) During the peak years of production, the average annual requirement is about $13 billion. The JSF will have to compete annually with other defense and nondefense priorities for the shrinking discretionary federal dollar amid continued concerns about the national debt and long-term fiscal pressures. The JSF program has received more than $56 billion through fiscal year 2010. To complete the acquisition program as currently planned, another $272 billion will be required from 2011 through 2035. Figure 1 illustrates the annual funding requirements outlined in the program’s Selected Acquisition Report released in April 2010. These funding levels do not reflect the additional funding increases in the Nunn-McCurdy certification and the Secretary’s recent actions; DOD is in the process of establishing a new acquisition program baseline, which will likely project even higher funding requirements. The JSF is the linchpin of DOD’s tactical aircraft recapitalization plans, replacing hundreds of legacy aircraft. Because of its sheer size and high priority within the Department, even relatively modest cost growth on the JSF can require the sourcing of billions of additional dollars, largely from other programs in DOD’s acquisition portfolio. On the other hand, slips in JSF schedules, cuts in annual procurement quantities, and deferred delivery of operational aircraft can require additional money to be spent on legacy aircraft, postponing planned retirements and sustaining fleets for longer periods of time.
To mitigate projected shortfalls in tactical aircraft inventories due to JSF perturbations, the Navy recently procured additional F/A-18E/F Super Hornets, and both the Navy and Air Force are funding service life extension programs and adding new capabilities to legacy aircraft. Furthermore, the international partners’ participation in the JSF program is very important to maintaining affordability for all buyers. DOD budget plans expect the partners to buy 223 aircraft costing $24.1 billion during the fiscal year 2011-2016 period. However, JSF cost increases, schedule delays, and the partners’ internal issues may result in reduced or deferred foreign buys; some partners have already signaled plans to buy fewer aircraft, buy a different mix of aircraft, or defer purchases to later years. On the positive side, other countries have expressed interest in acquiring the JSF. Decisions made by the international community and their impact on JSF affordability are largely beyond the program’s direct control. However, improving JSF program outcomes to lower costs and reassure buyers is within DOD’s and the contractors’ control. The eight international partners have important stakes in the JSF program, having provided about $5 billion in development funding, being counted upon to procure hundreds of aircraft, and expecting their industries to receive a significant portion of JSF manufacturing and supply business. DOD’s procurement cost estimates provided to the Congress have long assumed that the eight partners will buy at least 730 JSF aircraft, and unit prices for U.S. quantities assume the economic benefit of these purchases. If fewer are sold overseas, the Air Force, Navy, and Marine Corps (and the American taxpayer) may have to pay more. Unit costs can be expected to increase with smaller purchases because of diminished manufacturing economies of scale and because fixed costs have to be spread over fewer aircraft.
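The fixed-cost effect described above can be illustrated with a hedged arithmetic sketch. The buy quantities of 2,457 U.S. and 730 partner aircraft are from the program, but the dollar figures below are invented solely for illustration and are not program data.

```python
# Illustrative only: why average unit cost rises when fewer aircraft
# share the same fixed costs. All dollar figures are assumptions.

def unit_cost(fixed_cost, variable_cost_per_jet, quantity):
    """Average unit cost: recurring cost plus fixed cost spread over the buy."""
    return variable_cost_per_jet + fixed_cost / quantity

FIXED = 10_000   # assumed fixed costs, $ millions
VARIABLE = 80    # assumed recurring cost per jet, $ millions

full_buy = unit_cost(FIXED, VARIABLE, 2457 + 730)  # U.S. plus all partner buys
reduced = unit_cost(FIXED, VARIABLE, 2457 + 343)   # if partners buy fewer jets

print(f"full buy:    ${full_buy:.2f}M per jet")
print(f"reduced buy: ${reduced:.2f}M per jet")
```

With these assumed figures, every jet dropped from the combined buy pushes the fixed-cost share onto the remaining aircraft, which is the mechanism behind the statement that reduced foreign purchases would raise prices for the U.S. services.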
Maintaining a strong focus on affordability necessitates having reliable and complete cost data that provide accurate accounting reports, identify potential cost and schedule problems early, and produce sound estimates of the cost to complete work. The JSF program has been hampered in this regard because, for at least the past 3 years, the prime contractor has not had an adequate and disciplined earned value management (EVM) system in place to effectively track costs and control schedule. The prime contractor was found deficient in meeting 19 of 32 required guidelines, calling into question its ability to manage the escalating costs and complex scheduling of the JSF program. In October 2010, the Defense Contract Management Agency (DCMA) withdrew its determination of compliance for the prime contractor’s EVM system because of longstanding noncompliance with specific guidelines that underpin a sound system. To address these shortcomings, the contractor is developing new processes, tools, training, and enforcement in order to achieve a fully integrated and automated EVM system. Officials will reassess the earned value system by March 2012, more than 4 years after these problems were first discovered, to determine whether the needed modifications have been made. EVM is an important, established tool that can provide objective product status reports. DOD requires its use by major defense suppliers to facilitate good insight into and oversight of the expenditure of government dollars, thereby improving both affordability and accountability. JSF is DOD’s largest acquisition ever, so it is particularly critical to improve and certify the contractor’s EVM system as expeditiously as possible. If the system is not improved, inaccurate performance reports and late notice of cost overruns will likely continue to hinder timely decision making and corrective actions. Strong leadership and a shared vision among stakeholders are critical to implementing EVM effectively.
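As context for the EVM discussion above, the standard earned value indicators can be sketched in a few lines. The formulas are the conventional EVM ones; the dollar figures are made up for illustration and are not JSF data.

```python
# A minimal sketch of the standard earned value management indicators.
# Inputs (all hypothetical, in $ millions):
#   bcws - budgeted cost of work scheduled (planned value)
#   bcwp - budgeted cost of work performed (earned value)
#   acwp - actual cost of work performed
#   bac  - budget at completion

def evm_metrics(bcws, bcwp, acwp, bac):
    """Return the common EVM indicators a compliant system reports."""
    cpi = bcwp / acwp   # cost performance index (<1 means over cost)
    spi = bcwp / bcws   # schedule performance index (<1 means behind)
    eac = bac / cpi     # estimate at completion, assuming CPI holds
    return {"CPI": round(cpi, 2), "SPI": round(spi, 2), "EAC": round(eac, 1)}

# Example: $500M of work planned, $400M earned, $480M spent, $2,000M budget.
print(evm_metrics(bcws=500, bcwp=400, acwp=480, bac=2000))
```

A CPI and SPI below 1 signal cost overruns and schedule slippage early, and the EAC projects the total cost if current efficiency continues; when a contractor's EVM system is non-compliant, these early-warning figures cannot be trusted, which is the oversight gap DCMA's withdrawal of compliance reflects.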
The JSF program established 12 clearly stated goals in testing, contracting, and manufacturing for completion in calendar year 2010. It had mixed success, achieving 6 goals and making varying degrees of progress on the other 6. For example, the program exceeded its goal for the number of development flight tests but did not deliver as many test and production aircraft as planned. Also, the program awarded its first fixed-price contract on its fourth lot of aircraft production but did not award the fixed-price engine contract in 2010 as planned. Table 3 summarizes JSF goals and accomplishments for 2010. The development flight test program significantly ramped up operations in 2010, accomplishing three times as many test flights as in the previous 3 years combined. Table 4 summarizes actual flights, hours, and test points flown by each variant compared with the 2010 plan. Although still hampered, as in prior years, by the late delivery of test aircraft, flight testing substantially increased in volume and pace at the two main government test sites: Edwards Air Force Base, California, for CTOL tests and Patuxent River Naval Air Station for STOVL and CV testing. The CTOL variant significantly exceeded plans, while initial testing of the carrier variant was judged satisfactory, below plans for the number and hours of flights but ahead on test points flown. The STOVL, however, substantially underperformed in flight tests and experienced significant technical issues unique to this variant that could add to its weight and cost. The STOVL’s test problems were a major factor in the heightened scrutiny and the two-year probation period directed by the Secretary to engineer solutions, assess impacts, and inform a future decision as to whether and how to proceed with this variant. Evaluating annual performance against stated goals can be an effective tool that facilitates oversight by the Congress and defense leadership and can be useful for informing future budget decisions.
In our 2010 report, we suggested that Congress consider requiring DOD to establish a system maturity matrix to better measure the program’s progress in maturing the weapon system and to provide evidence to support budget decisions. The Ike Skelton National Defense Authorization Act for Fiscal Year 2011 established this requirement, and we understand that the Department is working on its implementation. We believe this tool and process will improve oversight and budgeting, holding people accountable for meeting interim objectives and, for objectives not met, providing criteria and a forum for evaluating the reasons why and what should be done. After completing 9 years of system development and 4 years of overlapping production activities, the JSF program has been slow to gain adequate knowledge that its design and manufacturing processes are fully mature and ready for greater levels of annual production. The JSF program still lags in achieving critical indicators of success expected from well-performing acquisition programs. Specifically, the program has not yet stabilized aircraft designs: engineering changes continue at higher than expected rates long after critical design reviews and well into procurement, and more changes are expected as testing accelerates. Also, the aircraft and engine manufacturing processes are not yet mature enough to support efficient production at higher annual rates, and substantial improvements in the global supply network are needed. Further, growth in aircraft reliability, which is crucial for managing life-cycle costs, has not been demonstrated to the extent planned by this time. The engineering drawings released since design reviews and the number and rate of design changes are excessive compared with plans and best practices. Critical design reviews were completed on the three aircraft variants in 2006 and 2007 and the designs were declared mature, but the program continues to experience numerous changes.
Since 2007, the program has produced 20,000 more engineering drawings, a 50-percent increase in total drawings and about 5 times more than best practices suggest. In addition, changes to drawings have not decreased and leveled off as planned. Figure 2 tracks and compares monthly design changes and future forecasts against the contractor’s 2007 plans. The monthly rate in 2009 and 2010 was higher than expected, and the program now anticipates more changes over a longer period of time: about 10,000 more changes through January 2016. We expect this number to rise given new forecasts for additional testing and the extension of system development until 2018. A key indicator of a product’s maturity is the stability of its design; the number of engineering drawings released and the subsequent changes to them are indicators of design maturity. Engineering drawings are critical because they communicate to the manufacturer and suppliers how a part functions, what it looks like, and what materials and critical processes are used to build the product. Best practices suggest that 90 percent of a product’s engineering drawings be released by the critical design review. Late engineering drawings and high levels of changes often indicate a lack of understanding of the design and can cause part shortages and inefficient manufacturing processes as work is performed out of sequence. Some level of design change is expected during the production cycle of any new and highly technical product, but excessive changes raise questions about the JSF’s design maturity and its readiness for higher rates of production. With most development testing still ahead for the JSF, the risk and impact from required design changes are significant. Acquisition programs typically encounter higher and more substantive changes as a result of discovery and rework during development flight and ground testing.
Future changes may require alterations to the manufacturing process, changes to the supply base, and costly retrofitting of aircraft already produced and fielded. A key cost driver for the program has been the higher than expected effort needed to address design-related issues; the contractor has not been able to reduce engineering staff as fast as expected. DOD’s restructuring actions recognize these issues and added time to development, increased flight testing, and reduced procurement. Additional changes are likely as development flight testing continues. Some emerging concerns may drive additional and substantive design changes:

JSF Lift System Development and Integration. Essential to STOVL operations, the lift fan continues to be a prime risk area. The program is working to mature lift fan and drive shaft technologies, with a required redesign expected in spring 2011.

Fatigue Cracks in STOVL Test Article. During a recent durability ground test, fatigue cracks were discovered in a major bulkhead of the STOVL test article. The cracks were discovered after 1,500 hours of durability testing, less than one-tenth of the hours planned for fatigue tests to certify that the STOVL airframe meets its design life requirement. Officials reported that stress data had been underestimated during initial design. Inspections of aircraft and other test articles did not identify cracks at the same site. Decisions about potential redesign and re-manufacture are still to be determined.

Wing Tip Vortex. Prime contractor officials identified wing tip vortices as a potential risk to the program. Wing tip vortices are tubes of circulating air left behind the aircraft’s wing as it generates lift. The cores of the vortices are sometimes visible because of water condensation. If they are visible during daytime flights, they could negatively affect the aircraft’s stealth capabilities.

Outer Mold Lines.
Defense Contract Management Agency officials noted difficulties in manufacturing outer mold lines, resulting from tight tolerance specifications and multiple manufacturing methodologies among the different JSF parts suppliers. The manufacturing processes are new and different from legacy practices. Inability to meet the outer mold line requirements could have major impacts on cost as well as on stealth requirements and capabilities. This problem is not expected to be resolved until the June 2015 time frame, after which a large number of aircraft will have been built and would need to be retrofitted for any design changes. Program officials stated that some redesign activities have begun and that developing the changes, estimating their costs, and planning implementation will extend into the 2013 time frame. The effects of these changes could extend into 2015 but will be prioritized to reduce performance and cost impacts. Manufacturing and delivering test jets took much more time and money than planned, and the full contingent of test aircraft is still not available at military testing sites, years later than promised. Projected costs to complete the first three production lots for aircraft and engines also exceed the amounts negotiated at contract award, and aircraft will be delivered late. The production impacts of restructuring actions that reduced quantities, lowered the ramp rate, and delayed the full-rate production decision have not been fully determined. We found that the aircraft and engine manufacturers are making good-faith efforts to implement the IMRT and JAT recommendations and to make other improvements, with performance measures indicating some success. As in prior years, lingering management inefficiencies, including substantial out-of-station work and part shortages, continued to increase the labor needed to manufacture test aircraft. Figure 3 depicts forecasted and actual labor hour requirements for building 12 production-representative test jets.
Total labor hours required to produce the test aircraft increased over time. The 2010 actual labor hours exceeded the 2007 budgeted hours by more than 1.5 million hours, a 75 percent increase. Manufacturing production aircraft is different from building test aircraft, and some gains in learning as more aircraft are built can be expected, over time, to reduce labor hour costs. However, the experience to date on the test aircraft and initial production aircraft suggests that future costs for building production aircraft may be higher than currently budgeted. The costs on the first three low-rate production contracts have increased from the amounts negotiated at contract award, and the completion dates for delivering aircraft have slipped by more than nine months on average. We are encouraged by DOD's award of a fixed-price incentive fee contract for lot 4 production and by the prospect that the cost study will inform lot 5 negotiations, but we have not examined contract specifications. DOD began procuring production jets in 2007 and has now ordered 58 aircraft in the first four low-rate initial production lots. JSF contracts anticipated the delivery of 14 production jets through 2010, but none have been delivered. Delivery of the first two production jets (both CTOLs) has been delayed several times since the contract was signed and is now expected in April 2011. In addition, DOD expects to procure 32 more aircraft in fiscal year 2011. Building a large backlog of jets on order but undelivered is not an efficient use of federal funds, tying up millions of dollars in obligations well ahead of the manufacturing process's ability to produce. We note that the Secretary used a similar line of reasoning to reduce STOVL production. DOD does not yet know the full effect that restructuring actions will have on future annual procurement funding requirements.
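The labor-hour growth figures above can be cross-checked with simple arithmetic. In this hypothetical Python sketch, the implied 2007 budget of roughly 2 million hours is an inference from the stated 75 percent increase and 1.5-million-hour overrun, not a figure reported here:

```python
# Back-of-envelope check: an overrun of more than 1.5 million labor
# hours that represents a 75 percent increase implies a 2007 budget
# of roughly 2 million hours (an inference, not a reported figure).
excess_hours = 1.5e6   # reported overrun (lower bound)
pct_increase = 0.75    # reported 75 percent increase

budgeted_2007 = excess_hours / pct_increase   # implied budget, ~2.0 million hours
actual_2010 = budgeted_2007 + excess_hours    # implied actual, ~3.5 million hours
print(f"budgeted ~{budgeted_2007 / 1e6:.1f}M hours, actual ~{actual_2010 / 1e6:.1f}M hours")
```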
Cost analysts are still calculating the impacts from deferring procurement of 246 aircraft from the near term to future years, lowering the ramp rate, and extending the full-rate production decision. Future funding requirements could be even higher than projected, and the quantities considered affordable by the United States and its allies could be reduced, further driving up unit costs. The Secretary's decisions to reduce near-term procurement quantities and adopt a less steep ramp-up in future production were based on IMRT findings. The Secretary chartered the IMRT to comprehensively review JSF manufacturing capacity and to assess the contractor's ability to achieve the planned production ramp-up and sustain the predicted maximum production rates. The IMRT's October 2009 report made 20 specific recommendations for corrective actions. As of September 2010, officials considered eight of the recommendations complete and three others on track; implementation of the remaining nine was incomplete or behind schedule. The most significant incomplete recommendation is improving global supply chain management. The JSF already has an extensive number of suppliers worldwide, and those numbers will increase as future workload is shared among numerous domestic and foreign firms. The IMRT cites the global supply chain as the critical manufacturing challenge facing the program, requiring significant improvement in delivery performance and responsiveness in order to achieve the program's eventual production rate goal of 20 aircraft per month. According to the prime contractor, the global supply chain remains on the critical path; progress has been made, but the global transportation plan and supply chain risk management plan are incomplete. Another IMRT recommendation that still needs to be addressed is the performance of a comprehensive schedule risk assessment, now expected to begin in spring 2011; we recommended such an assessment in our March 2009 report.
Schedule risk assessments can provide keen insight into critical path activities, cost and schedule interrelationships, and emerging risks. The primary F135 engine contractor faces similar challenges as it moves deeper into production. All development engines and initial production units have been delivered, but the costs to complete each of the first three engine production contracts have increased, and deliveries have slipped, since contract award. Officials said these delays have not been especially troublesome to date because aircraft deliveries were even later. The contractor achieved the initial service release for the CTOL and CV engine, meaning that the engine configuration is qualified and ready to go into production, but the STOVL engine's initial release was delayed until December 2010 because of qualification testing. The JAT reviewed F135 program performance, identified cost drivers, and made affordability projections. JAT officials said the contractor's cost reduction efforts were credible but largely dependent on receiving more government funding for affordability initiatives and alternative sourcing arrangements. Our past work on best practices found that successful product development programs, before they begin producing a system, reach a point at which they know that manufacturing processes will efficiently produce a new product conforming to cost, quality, and schedule targets. Reaching this point means more than knowing that the product can be built; it means that critical manufacturing processes are under control, such that quality, volume, and cost are proven acceptable. By these criteria, the JSF contractors' ability to ramp up to greater rates of production has not yet been demonstrated. The aircraft and engine manufacturers now have significantly more items in production flow than in prior years, but throughput capacity to complete all work and deliver end items is constrained.
We determined that the aircraft and engine contractors are making good faith efforts to implement the recommendations of the IMRT and JAT and to make other improvements to production capacity and flow. The aircraft manufacturer is reporting a decrease in out-of-station work, more efficient work stations, improved quality, increased parts availability, and reduced span times. Until improvements are fully implemented and demonstrated, the restructuring actions to reduce near-term procurement quantities and establish a more achievable ramp rate were appropriate and will provide more time to fully mature manufacturing and supply processes and catch up with aircraft backlogs. Improving factory throughput and controlling costs (driving down unit costs and delivering on time) are essential for efficient manufacturing and timely delivery to the warfighter at the increased production rates planned for the future. STOVL and CTOL aircraft are behind the reliability growth plans aimed at demonstrating that the aircraft will meet warfighter support and availability requirements. The carrier variant is in the early stages of flight testing, and sufficient reliability data were not yet available. Reliability is a function of the specific elements of a product's design; a system is reliable when it can perform over a specified period of time without failure, degradation, or need of repair. Improvements over time occur through design changes or manufacturing process improvements. A key reliability metric is mean flying hours between failures, defined as the number of flying hours achieved divided by the number of failures incurred. Reliability growth plans called for the STOVL to have achieved at least 1.9 flying hours between failures, and for the CTOL 2.9 flying hours between failures, by this point in the test program. However, the STOVL aircraft is significantly behind plan, achieving about 0.4 hours between failures, or about 20 percent of what was expected by this time.
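As an illustration of the mean-flying-hours-between-failures metric defined above, a minimal Python sketch; the 200-hour and 500-failure inputs are invented solely to reproduce the reported STOVL value of about 0.4 hours:

```python
def mfhbf(flying_hours, failures):
    """Mean flying hours between failures: hours achieved divided by failures incurred."""
    return flying_hours / failures

# Hypothetical inputs chosen only to match the reported STOVL figure (0.4).
achieved = mfhbf(200, 500)                 # 0.4 hours between failures
planned = 1.9                              # growth-plan target at this point
percent_of_plan = achieved / planned * 100
print(f"{achieved:.1f} hours achieved, {percent_of_plan:.0f}% of plan")
```

The same arithmetic applied to the CTOL figures (1.8 hours achieved against a 2.9-hour plan) yields the roughly 60 percent of plan reported for that variant.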
The CTOL variant was also behind plan, achieving 1.8 hours between failures, approximately 60 percent of what was expected. Figure 4 depicts the progress of each variant in demonstrating mean flying hours between failures, as of September 2010. Improving reliability rates is essential to controlling future operating costs and ensuring that aircraft are available when needed by the warfighter. Compared to the up-front costs of acquiring aircraft, the long-term costs of operating, maintaining, and sustaining JSF fleets over an aircraft's useful life represent the much larger portion of total ownership costs. We have reported in the past that it is important to demonstrate that system reliability is on track to meet goals before production begins, because changes after production commences can be inefficient and costly. The JSF program is still very early in demonstrating aircraft design and in testing to verify that the aircraft works as intended. As of December 2010, about 4 percent of JSF capabilities had been completely verified by flight tests, lab results, or both. Initial tests of a fully integrated aircraft to demonstrate full mission systems capabilities and weapons delivery are now not expected until 2015, three years later than planned. The program demonstrated measurable progress in development flight testing during 2010 but still lags earlier expectations, and the STOVL problems have constrained overall progress. Only 3 of the 32 ground test labs and simulation models that are critical to complement, and in some cases substitute for, flight tests are accredited to verify and ensure the fidelity of results. Software development, essential for achieving about 80 percent of JSF functionality, is significantly behind schedule as it enters its most challenging phase. Software delivery to the test program that is essential to demonstrating full system capability is now expected in late 2014, a 3-year delay.
Our work in best practices suggests that a key indicator of a product's maturity and readiness for production is when a fully integrated, capable system has been demonstrated to work in its intended environment. A fully integrated, capable system would include the integration of all the hardware, including mission avionics systems, and the software needed to provide the system its full mission capabilities. Many past DOD weapons programs have failed to demonstrate that the system works as intended before entering production, discovering costly design problems late in development when the more complex software and advanced capabilities are integrated and tested. Development flight testing was much more active in 2010 than in prior years and had some notable successes, but overall it still lagged behind expectations. The continuing effects of the late delivery of test aircraft and an inability to achieve the planned flying rates per aircraft substantially reduced the amount and pace of testing planned previously. Consequently, even though the flight test program accelerated its pace last year, the total number of flights accomplished during the first four years of the test program significantly lagged the expectations set when the program's 2007 baseline was established. Figure 5 shows that the cumulative number of flights accomplished by the end of 2010 was only about one-fifth the number forecast by this time in the 2007 test plan. Program officials reported that 13 test aircraft are now out of production. Ten test aircraft have been ferried to test sites, and others are in varying stages of final check-out. The program has accomplished first flights for all three variants. Officials had hoped aircraft could achieve a rate of 12 flights per month; however, the average flight rate for 2010 ranged from over 2 to almost 8 per month. By the end of 2010, about 10 percent of the more than 50,000 planned test points had been completed.
According to program officials, completion of a test point means that the point has been flown and that flight engineers judged that it met the need; further analysis may be necessary before the test point can be closed out. The majority of the points were earned on airworthiness tests (basic airframe handling characteristics) and in ferrying the planes to test sites. According to a senior-level DOD test official, airworthiness and ferry test points should be relatively easy to accomplish. Remaining test points include more complex and stringent requirements, such as mission systems, ship suitability, and weapons integration, that have yet to be demonstrated. As discussed earlier, STOVL flight performance lagged plans during 2010, while the CTOL variant exceeded its plan and the CV variant generally met its plan. Officials reported that design and manufacturing defects and excessive component failures caused prolonged maintenance periods that drove the low fly rates. For instance, in the July to August 2010 period, STOVL test aircraft were down for unscheduled maintenance more than half the time. Further test delays will likely cause the program to miss critical future milestones. STOVL initial at-sea testing will not start until October 2011 because of delays in clearing the vertical-landing envelope. STOVL-related delays are also causing Marine Corps leadership to reassess its requirements and will likely extend the date for achieving initial operational capability, currently set for December 2012. Concerned that STOVL testing problems were negatively affecting the other variants, the Department moved to decouple STOVL testing and placed the variant on a two-year probation period to work out problems and get back on track. The Secretary's actions will require a new test plan, since current flight test plans rely substantially on the STOVL to fly and demonstrate test points in common with the other variants.
The current plan makes the STOVL responsible for completing about 43 percent of the total test points. JSF restructuring actions are positive and support a more robust and achievable test plan. Officials added more resources for development testing, extended the flight test schedule, and reduced the overlap with initial operational testing. More recently, officials revised the test plan to increase the total number of test flights from 5,856 to 7,727, about one-third more. To increase capacity, the restructuring added one carrier variant test aircraft and an additional software integration line and allowed the program to use up to three production aircraft for development testing. Compared to the previous test plan, officials assumed more ground time for aircraft maintenance and planned modifications, as well as a more measured ramp-up in the rate of flights per test aircraft. The restructuring largely reverses the program's earlier Mid-Course Risk Reduction plan, which reduced test resources. Our March 2008 report criticized DOD's mid-course plan, particularly the cuts made in flight test assets and the number of flight tests, as well as the program's failure to address the root causes of cost growth, the very reasons officials felt the mid-course plan was needed. Since that report was issued, JSF cost and schedule continued to deteriorate, and officials recognized a need to increase test assets and add more flight testing. The JSF test program relies much more heavily than previous weapon systems on its modeling and simulation labs to test and verify aircraft design and subsystem performance. However, only 3 of 32 labs and models have been fully accredited to date; the program had planned to accredit 11 labs and models by now. Accreditation is essential to ensure the fidelity of results and to validate that the models accurately reflect aircraft performance. Accreditation is a lengthy and involved technical evaluation that uses flight test data to verify lab results.
Much work remains before the program can fully utilize the models and simulation capabilities needed to verify results and to demonstrate that ground testing can substitute for flight testing. The ability to substitute is unproven, however, and progress in reducing program risk is difficult to assess. Contracting officials told us that early results are providing good correlation between ground and flight tests. The Director of Operational Test and Evaluation reported that 50 percent of the models will be accredited during the final year of flight testing, a highly risky approach. Delays in accreditation add risks of not completing future software blocks on time and of discovering defects late. More flight testing may be needed to cover lab shortcomings, but flight testing is generally more expensive and could lead to more delays in completing development and operational testing. It could also require more production aircraft for a longer period to supplement test assets, resulting in fewer systems at training sites and operational bases. Contractor utilization of the labs has increased markedly, and the number and integration of the labs are impressive, but capacity may be constrained. Because of development concurrency, there is overlap in scheduling the new blocks, and resources must be shared between blocks when rework on an earlier block is required. If integration and test is delayed because of capacity limits or conflict with an earlier block, lab officials said, expected capabilities may not be delivered in time to meet flight test and training dates. Mitigating strategies include adding people, lab capacity, and software test lines, and shifting capabilities to later blocks. The 2010 restructuring added $250 million to increase integration lab capacity. According to program officials, the greater number of labs allows engineers to work simultaneously on different development blocks, reducing bottlenecks that may occur in testing.
Program and contractor officials believe that the up-front investment of $5 billion in simulation labs will pay off in early risk reduction, fewer flights, and better cost control, and that the labs are essential to meeting key milestones in JSF's aggressive test plan. Software providing essential JSF capability is not mature, and releases to the test program are behind schedule. Officials underestimated the time and effort needed to develop and integrate the software, substantially contributing to the program's overall cost and schedule problems and testing delays, while requiring the retention of engineers for longer periods. Significant learning and development work remains before the program can demonstrate the mature software capabilities needed to meet warfighter requirements. Good progress has been made in writing software code: about three-fourths of the software has been written and integrated, but testing is behind schedule and the most complex work is still ahead. Program restructuring added a second software integration line, which should improve throughput. The JSF software development effort is one of the largest and most complex in DOD history, providing 80 percent of the functionality essential to JSF capabilities such as sensor fusion, weapons and fire control, maintenance diagnostics, and propulsion. The JSF has about 8 times more on-board software lines of code than the F/A-18E/F Super Hornet and 4 times more than the F-22A Raptor, and the amount of code needed will likely increase as integration and testing efforts intensify. In 2009, officials reported that about 40 percent of the software had completed integration and testing; they did not provide us with a progress report through 2010. Integration and test is a lengthy effort and is typically the most challenging phase of software development, requiring specialized skills and integration test lines.
The program has experienced growth of 40 percent in total software lines of code since the preliminary design review and 13 percent growth since the critical design review. Other recent defense acquisitions have experienced 30 to 100 percent growth in software over time. Software capabilities are developed, tested, and delivered in five blocks, or increments. Several blocks have grown in size and taken longer to complete than planned. Software defects, low productivity, and concurrent development of successive blocks created inefficiencies, lengthening the time needed to fix defects and delaying the demonstration of critical capabilities. In addition, program and prime contractor officials acknowledge that they do not include integration as a key tracking metric and have been unable to agree on how to track it. This has made it hard for the program to analyze integration trends and take action to remedy the situation. Instead, the program office and prime contractor have made several adjustments to the software development schedule, each time extending the time needed to complete work, as shown in figure 7. Delays in developing, integrating, and releasing software to the test program have cascading effects that hamper flight tests, training, and lab accreditation. While progress is being made, a substantial amount of software work remains before the program can demonstrate full warfighting capability. The program released block 0.5 for flight test nearly 2 years later than planned in the 2006 schedule, largely because of integration problems. Each of the remaining three blocks, which provide full mission systems and warfighting capabilities, is now projected to slip 2 to 3 years compared to the 2006 plan. Defects and workload bottlenecks delayed the release of full block 1 capabilities; the initial limited release of block 1 software was flown for the first time in November 2010. Software defects increased throughout 2010, and the fixing of defects did not keep pace.
Some capabilities were moved to future blocks in attempts to meet schedule and mitigate risks. For example, full data fusion mission systems were deferred from block 2 to block 3. Further trades and deferrals may be needed. Rather than working all blocks concurrently, focusing efforts on a more measured, evolutionary approach could result in more timely release of incremental capabilities to the testing, training, and warfighter communities. Development and integration of the most advanced capabilities could be deferred to future increments and delivered to the warfighter at a later date. The recent technical baseline review identified software as a significant challenge, slowing system development and requiring more time and money. Although officials are confident that such risks can be addressed, the scale and complexity of what is involved make this a technically challenging and lengthy effort. Uncertainties pertaining to critical technologies, including the helmet-mounted display and advanced data links, add to the challenges. Deficiencies in the helmet-mounted display, especially latency in transmitting sensor data, are causing officials to develop a second helmet while trying to fix the first model. Resolution could result in a major redesign or in changes to the JSF's concept of operations that place limitations on the operational environment, according to program officials. The JSF program is at a critical juncture: 9 years into development and 4 years into limited production, but still early in testing and verifying aircraft performance. If effectively implemented and sustained, the Department's restructuring should place the JSF program on firmer footing and lead to more achievable and predictable outcomes. However, restructuring comes with a price: higher up-front development costs, fewer aircraft received in the near term, training delays, and prolonged times for testing and delivering the capabilities required by the warfighter.
Reducing near-term procurement quantities lessens concurrency, but the overlap among development, testing, and production activities is still substantial and risky. Development and testing activities will now overlap 11 years of production, based on the latest extension in key milestones. Flight testing and production activity are increasing, and contractors are improving supply and manufacturing processes, but deliveries are still lagging. The challenge in front of the aircraft and engine contractors is improving the global supply chain and accelerating manufacturing throughput to produce quality products in economic quantities and on time. Slowed deliveries have built a growing backlog of jets on order but not delivered; this is not a good use of federal funds, tying up millions of obligated dollars well ahead of the manufacturing process's ability to produce. The Secretary of Defense used similar reasoning in significantly reducing STOVL procurement until technical issues are resolved and the manufacturing process is able to deliver efficiently and on time. The JSF acquisition demands an unprecedented share of the Department's future investment funding. The program's size and priority are such that its cost overruns and extended schedules are either borne through funding cuts to other programs or drive increases in the top line of defense spending, the latter an unattractive option in a period of more austere budgets. Until now, JSF problems have been addressed either with more time and money or by deferring aircraft procurement to be borne by future years' budgets. It is past time to place boundaries on the program so that future difficulties can be managed within a finite budget, facilitating trades within the JSF program and thereby minimizing impacts on other defense programs and priorities. The Department's actions to limit STOVL procurement, decouple it from development testing, and concentrate efforts on resolving deficiencies are also appropriate.
Given its criticality to the Marine Corps' future tactical aviation plans, additional steps may be needed to set the framework and criteria for the STOVL "probation period" and to sustain management focus on the variant in order to better ascertain its progress and inform future decisions. Focused individual attention on the STOVL, apart from the other two variants, could allow each variant to proceed through development and testing at its own pace. Furthermore, development testing is hampered both by the late delivery of software increments and by the lagging schedule for accrediting ground labs and simulation models. A comprehensive independent review of the software development process and lab accreditation issues could enhance management insight and identify opportunities for improvement in these critical areas. We note that previous independent teams established by the Department significantly improved the manufacturing, engine, and cost estimating processes. We agree with defense leadership that a renewed and sustained focus on affordability by contractors and the Government is critical to moving this important program forward and enabling our military services and our allies to acquire and sustain JSF forces in needed quantities. Maintaining senior leadership's increased focus on program results, holding the Government and contractors accountable for improving performance, and bringing a more assertive, aggressive management approach that requires the JSF to "live within its means" could help effectively manage growth in the program and limit the consequences for other programs in the portfolio. Controlling future JSF cost growth would minimize funding disruption and help stabilize the defense acquisition portfolio by providing more certainty to financial projections and by facilitating the allocation of remaining budget authority to other defense modernization programs.
Given the other priorities that DOD must address in a finite budget, a renewed and sustained focus on affordability by contractors and the Government is critical for successfully moving the JSF program forward. DOD must plan ahead for a way to address and manage JSF challenges and risks in the future. To facilitate making tradeoff decisions with respect to the JSF program that limit impacts to other DOD programs and priorities and to improve key management processes, we recommend that the Secretary of Defense take the following actions to reinforce and strengthen program cost controls and oversight: 1. The JSF program should maintain total annual funding levels for development and procurement at the current budgeted amounts in the fiscal year 2012-2016 future years defense plan (modified, if warranted, by the new acquisition program baseline expected this year). It should facilitate trades among cost, schedule, requirements, and quantities to control cost growth. Having gone through the Technical Baseline Review (TBR) and budget approval process, it is reasonable to expect the program to execute against the future years defense plan going forward. Only in instances of major and unforeseen circumstances, should the Department consider spending more money on the program. Even then, we would expect changes to be few and adopted only after close scrutiny by defense leadership. Approved changes should be well supported, adequately documented, and reported to the congressional defense committees. 2. Establish criteria for the STOVL probation period and take additional steps to sustain individual attention on STOVL-specific issues, including independent F-35B/STOVL Progress Reviews with Senior Leadership to ensure cost and schedule milestones are achieved to deliver required warfighter capabilities. The intent is to allow each JSF variant to proceed and demonstrate success at its own pace and could result in separate full-rate production decisions. 3. 
The Department should conduct an independent review of the contractor's software development, integration, and test processes, similar to its review of manufacturing operations, and look for opportunities to streamline software efforts. This review should include an evaluation of the ground lab and simulation model accreditation process to ensure it is properly structured and robustly resourced to support software test and verification requirements. DOD provided us with written comments on a draft of this report. The comments are reprinted in appendix III. We worked collaboratively with defense officials to hone our draft recommendations, making them more targeted. DOD concurred with the recommendations as amended. We also incorporated technical comments as appropriate. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force and Navy; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix IV. To determine the Joint Strike Fighter (JSF) program's progress in meeting cost, schedule, and performance goals, we received briefings by program and contractor officials and reviewed financial management reports, budget documents, annual Selected Acquisition Reports, monthly status reports, performance indicators, and other data. We identified changes in cost and schedule, and obtained officials' reasons for these changes.
We interviewed officials from the JSF program, the contractors, and the Department of Defense (DOD) to obtain their views on progress, ongoing concerns and actions taken to address them, and future plans to complete JSF development and accelerate procurement. At the time of our review, the most recent Selected Acquisition Report available was dated December 31, 2009, and had been released in April 2010; DOD was preparing a new acquisition program baseline for the program that would reflect updated cost and schedule projections. In assessing program cost estimates, we compared the official program cost estimate in the 2009 Selected Acquisition Report, and the subsequent cost estimate developed after the Nunn-McCurdy breach, to estimates developed by the JSF program and to Defense Contract Management Agency (DCMA) reports. We interviewed program office officials and members of the DOD Cost Analysis and Program Evaluation Office (CAPE) and DCMA to understand their methodology, data, and approach in developing cost estimates. To assess the validity and reliability of contractors' cost estimates, we reviewed audit reports prepared by DCMA and cost performance reports prepared by the contractor. To assess the program's plans and risk in manufacturing and its capacity to accelerate production, we analyzed manufacturing cost and work performance data to assess progress against plans. We compared budgeted program labor hours to actual labor hours and identified growth trends. We reviewed data and briefings provided by the program, DCMA, and CAPE to assess supplier performance and the ability to support accelerated production in the near term. We also determined reasons for manufacturing delays, discussed program and contractor plans to improve, and projected the impact on development and operational tests.
We interviewed Naval Air Systems Command (NAVAIR) and contractor officials to discuss Earned Value Management System issues, but we did not conduct any analysis because DCMA had deemed the data unreliable.

To assess plans, progress, and risks in test activities, we examined program documents and interviewed DOD, program office, and contractor officials about current test plans and progress. To assess progress against test plans, we compared the number of flight tests conducted as of December 2010 to the original test plan established in 2007. We also reviewed documents and interviewed prime contractors about flight testing, the integrated airborne test bed, and ground testing. To assess the ground labs and test bed, we interviewed officials and toured the testing labs at the Lockheed Martin facilities in Fort Worth, Texas. We also reviewed the independent assessments conducted by the joint estimating team (JET) and NAVAIR to obtain their perspectives on the program's progress in test activities.

In performing our work, we obtained information and interviewed officials from the JSF Joint Program Office, Arlington, Virginia; Naval Air Systems Command, Patuxent River, Maryland; Defense Contract Management Agency, Fort Worth, Texas; Lockheed Martin Aeronautics, Fort Worth, Texas; Defense Contract Management Agency, Middletown, Connecticut; and Pratt & Whitney, Middletown, Connecticut. We also met with and obtained data from the following offices of the Secretary of Defense in Washington, D.C.: Director, Operational Test and Evaluation; Cost Analysis and Program Evaluation Office; and Systems and Software Engineering.

We assessed the reliability of DOD and JSF contractor data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report.
We conducted this performance audit from May 2010 to February 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Critical technologies needed for key aircraft performance elements were not mature. We recommended that the program delay the start of system development until critical technologies were mature to acceptable levels. DOD did not delay the start of system development and demonstration, stating that technologies were at acceptable maturity levels and that it would manage risks in development.

The program underwent a replan to address higher-than-expected design weight, which added $7 billion and 18 months to the development schedule. We recommended that the program reduce risks and establish an executable, knowledge-based business case with an evolutionary acquisition strategy. DOD partially concurred but did not adjust its strategy, believing that its approach balanced cost, schedule, and technical risk.

The program set in motion a plan to enter production in 2007, shortly after first flight of the non-production-representative aircraft and with less than 1 percent of testing complete. We recommended that the program delay investing in production until flight testing showed that the JSF performed as expected. DOD partially concurred but did not delay the start of production because it believed the risk level was appropriate. Congress reduced funding for the first two low-rate production buys, thereby slowing the ramp-up of production.

Progress was being made, but concerns remained about undue overlap in testing and production. We recommended limiting annual production quantities to 24 aircraft a year until flying qualities were demonstrated.
DOD did not concur, maintaining that the program had an acceptable level of concurrency and an appropriate acquisition strategy.

DOD implemented a Mid-Course Risk Reduction Plan to replenish management reserves from about $400 million to about $1 billion by reducing test resources. We believed the new plan actually increased risks and that DOD should revise it to address concerns about testing, use of management reserves, and manufacturing. We also determined that the cost estimate was not reliable and that a new cost estimate and schedule risk assessment were needed. DOD did not revise the risk plan or restore testing resources, stating that it would monitor the new plan and adjust it if necessary. Consistent with a report recommendation, a new cost estimate was eventually prepared, but DOD declined to do the risk and uncertainty analysis that we felt was important to provide a range estimate of potential outcomes.

The program increased the cost estimate and added a year to development but accelerated the production ramp-up. An independent DOD cost estimate (JET I) projected even higher costs and further delays. Because of development problems, we stated that moving forward with an accelerated procurement plan and the use of cost-reimbursement contracts was very risky, and we recommended that the program report on the risks and mitigation strategy for this approach. DOD agreed to report its contracting strategy and plans to Congress. In response to our report recommendation, DOD subsequently agreed to do a schedule risk analysis but still had not done so as of February 2011.

In February 2010, the Department announced a major restructuring of the JSF program, including reduced procurement and a planned move to fixed-price contracts. The program was restructured to reflect the findings of a recent independent cost team (JET II) and an independent manufacturing review team. As a result, development funds increased, test aircraft were added, the schedule was extended, and the early production rate decreased.
Because of additional costs and schedule delays, the program's ability to meet warfighter requirements on time was at risk. We recommended that the program complete a full comprehensive cost estimate and assess warfighter and initial operational capability (IOC) requirements. We also suggested that Congress require DOD to prepare a "system maturity matrix," a tool for tying annual procurement requests to demonstrated progress. DOD continued restructuring actions and announced plans to increase test resources and lower the production rate. Independent review teams evaluated aircraft and engine manufacturing processes. As we projected in this report, cost increases later resulted in a Nunn-McCurdy breach. The military services are currently reviewing capability requirements as we recommended, and the Department and Congress are working on the "system maturity matrix" tool, which we suggested to Congress for consideration, to improve oversight and inform budget deliberations.

In addition to the contact named above, the following staff members made key contributions to this report: Bruce Fairbairn, Assistant Director; Charlie Shivers; Julie Hadley; Matt Lea; Jason Lee; Sean Merrill; LeAnna Parkey; Karen Richey; Dr. W. Kendal Roberts; and Robert Swierczek.

Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011.

Tactical Aircraft: Air Force Fighter Force Structure Reports Generally Addressed Congressional Mandates, but Reflected Dated Plans and Guidance, and Limited Analyses. GAO-11-323R. Washington, D.C.: February 24, 2011.

Defense Management: DOD Needs to Monitor and Assess Corrective Actions Resulting from Its Corrosion Study of the F-35 Joint Strike Fighter. GAO-11-171R. Washington, D.C.: December 16, 2010.

Joint Strike Fighter: Assessment of DOD's Funding Projection for the F136 Alternate Engine. GAO-10-1020R. Washington, D.C.: September 15, 2010.
Tactical Aircraft: DOD's Ability to Meet Future Requirements is Uncertain, with Key Analyses Needed to Inform Upcoming Investment Decisions. GAO-10-789. Washington, D.C.: July 29, 2010.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010.

Joint Strike Fighter: Significant Challenges and Decisions Ahead. GAO-10-478T. Washington, D.C.: March 24, 2010.

Joint Strike Fighter: Additional Costs and Delays Risk Not Meeting Warfighter Requirements on Time. GAO-10-382. Washington, D.C.: March 19, 2010.

Joint Strike Fighter: Significant Challenges Remain as DOD Restructures Program. GAO-10-520T. Washington, D.C.: March 11, 2010.

Joint Strike Fighter: Strong Risk Management Essential as Program Enters Most Challenging Phase. GAO-09-711T. Washington, D.C.: May 20, 2009.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009.

Joint Strike Fighter: Accelerating Procurement before Completing Development Increases the Government's Financial Risk. GAO-09-303. Washington, D.C.: March 12, 2009.

Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.

Joint Strike Fighter: Impact of Recent Decisions on Program Risks. GAO-08-569T. Washington, D.C.: March 11, 2008.

Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008.

Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007.

Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program. GAO-07-656T. Washington, D.C.: March 22, 2007.
Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007.

Tactical Aircraft: DOD's Cancellation of the Joint Strike Fighter Alternate Engine Program Was Not Based on a Comprehensive Analysis. GAO-06-717R. Washington, D.C.: May 22, 2006.

Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD's Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006.

Defense Acquisitions: Actions Needed to Get Better Results on Weapons Systems Investments. GAO-06-585T. Washington, D.C.: April 5, 2006.

Tactical Aircraft: Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases. GAO-06-487T. Washington, D.C.: March 16, 2006.

Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006.

Joint Strike Fighter: Management of the Technology Transfer Process. GAO-06-364. Washington, D.C.: March 14, 2006.

Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C.: April 6, 2005.

Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005.
The F-35 Lightning II, also known as the Joint Strike Fighter (JSF), is the Department of Defense's (DOD) most costly and ambitious aircraft acquisition, seeking to simultaneously develop and field three aircraft variants for the Air Force, Navy, Marine Corps, and eight international partners. The JSF is critical for recapitalizing tactical air forces and will require a long-term commitment to very large annual funding outlays. The current estimated investment is $382 billion to develop and procure 2,457 aircraft.

This report, prepared in response to a congressional mandate in the National Defense Authorization Act for Fiscal Year 2010, discusses (1) program cost and schedule changes and their implications for affordability; (2) progress made during 2010; (3) design and manufacturing maturity; and (4) test plans and progress. GAO's work included analyses of a wide range of program documents and interviews with defense and contractor officials.

DOD continues to substantially restructure the JSF program, taking positive actions that should lead to more achievable and predictable outcomes. Restructuring has consequences--higher up-front development costs, fewer aircraft in the near term, training delays, and extended times for testing and delivering capabilities to warfighters. Total development funding is now estimated at $56.4 billion, with completion in 2018--a 26 percent increase in cost and a 5-year slip in schedule compared to the current baseline. DOD also reduced procurement quantities by 246 aircraft through 2016 but has not calculated the net effects of restructuring on total procurement costs or approved a new baseline. Affordability for the United States and its partners is challenged by a near doubling in average unit prices since program start and higher estimated life-cycle costs. Going forward, the JSF requires unprecedented funding levels in a period of more austere defense budgets.
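The scale of the investment can be put in rough per-aircraft terms. A minimal back-of-the-envelope sketch, not an official unit-cost measure (official DOD measures treat development and procurement funding separately):

```python
# Rough program acquisition unit cost: total planned investment divided by
# planned quantity. This is a simplification for illustration only; official
# unit-cost measures distinguish development from procurement dollars.
total_investment = 382e9   # dollars, estimated development plus procurement
planned_quantity = 2457    # planned aircraft

unit_cost = total_investment / planned_quantity
print(f"Rough acquisition unit cost: ${unit_cost / 1e6:.0f} million per aircraft")
```

This works out to roughly $155 million per aircraft, consistent with the near doubling in average unit prices the report describes.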
The program had mixed success in 2010, achieving 6 of 12 major goals it established and making varying degrees of progress on the others. Successes included the first flight of the carrier variant, award of a fixed-price aircraft procurement contract, and an accelerated pace in development flight tests that accomplished three times as many flights in 2010 as in the previous 3 years combined. However, the program did not deliver as many aircraft to test and training sites as planned and made only a partial release of software capabilities. The short takeoff and vertical landing (STOVL) variant experienced significant technical problems and did not meet flight test expectations. The Secretary of Defense directed a 2-year period to evaluate and engineer STOVL solutions.

After more than 9 years in development and 4 in production, the JSF program has not fully demonstrated that the aircraft design is stable, manufacturing processes are mature, and the system is reliable. Engineering drawings are still being released to the manufacturing floor, and design changes continue at higher rates than desired; more changes are expected as testing accelerates. Test and production aircraft cost more and are taking longer to deliver than expected. Manufacturers are improving operations and have implemented 8 of 20 recommendations from an expert panel but have not yet demonstrated a capacity to produce efficiently at higher production rates. Substantial improvements in factory throughput and the global supply chain are needed.

Development testing is still early in demonstrating that the aircraft will work as intended and meet warfighter requirements. Only about 4 percent of JSF capabilities have been completely verified by flight tests, lab results, or both, and only 3 of the extensive network of 32 ground test labs and simulation models are fully accredited to ensure the fidelity of results.
Software development--essential for achieving about 80 percent of JSF functionality--is significantly behind schedule as it enters its most challenging phase. To sustain a focus on accountability and facilitate tradeoffs within the JSF program, GAO recommends that DOD (1) maintain annual funding levels at current budgeted amounts; (2) establish criteria for evaluating the STOVL variant's progress and conduct independent reviews, allowing each variant to proceed at its own pace; and (3) conduct an independent review of the software development and lab accreditation processes. DOD concurred.
Energy audits typically identify projects that could address the consumption of fossil fuel and electricity as well as projects that could reduce emissions from other sources, such as leaks in refrigeration equipment. Energy audits also include information on the cost-effectiveness of projects and on the extent to which the projects could reduce emissions. This information can then be used to evaluate and select projects. The audits generally fall into three categories—preliminary, targeted, and comprehensive—and are distinguished by the level of detail and analysis required. Preliminary audits are the least detailed and provide quick evaluations of a project's potential; they typically do not provide sufficiently detailed information to justify investments but may prove useful in identifying opportunities for more detailed evaluations. Targeted audits are detailed analyses of specific systems, such as lighting. Comprehensive audits are detailed analyses of all major energy-using systems. Both targeted and comprehensive audits provide sufficiently detailed information to justify investing in projects.

Energy-saving measures that fall outside the scope of energy audits include outreach and education to curtail energy use by building occupants and the purchase of high-efficiency appliances. Outreach and education efforts include providing information on how employees can conserve energy, such as AOC's "how-to guides" that detail cost-effective methods to save energy in the workplace. Efforts to curtail energy use include purchasing energy-efficient computer equipment and appliances, using information available from the Environmental Protection Agency's Energy Star program or the Federal Energy Management Program (FEMP).
Energy Star-qualified and FEMP-designated products meet energy-efficiency guidelines set by the Environmental Protection Agency and the Department of Energy and, in general, represent the top 30 percent most energy-efficient products in their class. These products cover a wide range of categories, including appliances and office equipment. According to the Energy Star program, office products that have earned the Energy Star rating use about half as much electricity as standard equipment and generally cost the same as equipment that is not Energy Star qualified.

AOC has made some progress toward implementing the two recommendations in our April 2007 report. First, AOC has taken steps to address our recommendation that it develop a schedule for routinely conducting energy audits by developing a prioritized list of buildings for which it plans to conduct comprehensive energy audits (see app. 1). Specifically, AOC is currently undertaking a comprehensive energy audit of the U.S. Capitol Police Buildings and Grounds and obtained a draft submission in May 2008 from the private contractor performing the audit. AOC also plans to use $400,000 of fiscal year 2008 funds to perform comprehensive energy audits of the Capitol Building and the Ford House Office Building, and it says it will direct any remaining fiscal year 2008 funds to an audit of the Hart Senate Office Building. Additionally, AOC has contracted with a private firm to conduct a preliminary energy audit of the Senate office buildings that could prove useful in identifying opportunities for more comprehensive and targeted evaluations. AOC officials said that they developed the prioritized list of buildings to audit by comparing the amount of energy used per square foot of space in each building (referred to as energy intensity) and then placing the buildings that use relatively higher levels of energy at the top of the list.
However, AOC's prioritized list does not provide information on the energy intensity of each building, an explanation of its prioritization scheme, or cost estimates. Furthermore, AOC has not developed a schedule for routinely conducting audits, as we recommended in our April 2007 report. AOC officials said that they cannot complete a more comprehensive schedule because of uncertainty about the extent to which AOC will receive future appropriations to conduct the audits. We believe that developing a more detailed schedule for future audits, along with an explanation of its prioritization scheme and cost estimates, would assist the Congress in its appropriations decisions and facilitate the completion of additional audits.

Second, AOC can do more to fully address the second recommendation in our April 2007 report: that it implement selected projects as part of an overall plan to reduce emissions that considers cost-effectiveness, the extent to which the projects reduce emissions, and funding options. In recent years, AOC has undertaken numerous projects throughout the Capitol Hill Complex to reduce energy use and related emissions, but these projects were not identified through the process we recommended. Projects completed or underway include upgrading lighting systems, conducting education and outreach, purchasing energy-efficient equipment and appliances, and installing new windows in the Ford building. Examples of projects in Senate office buildings include upgrading the lighting in 11 offices with daylight and occupancy sensors, installing energy-efficient ceiling tiles in the Hart building, and replacing steam system components. According to AOC, these efforts have already decreased energy intensity throughout the Capitol Hill Complex. Specifically, AOC said that it decreased its energy intensity—the amount of energy used per square foot of space within a facility—by 6.5 percent in fiscal year 2006 and 6.7 percent in fiscal year 2007.
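If the two annual reductions compound year over year (an assumption; the testimony does not state whether both figures are measured against the same baseline), the cumulative effect can be sketched as:

```python
# Cumulative energy-intensity reduction if the two annual reductions compound.
# Assumption: each year's percentage is measured against the prior year's level.
fy2006_reduction = 0.065   # 6.5 percent in fiscal year 2006
fy2007_reduction = 0.067   # 6.7 percent in fiscal year 2007

remaining = (1 - fy2006_reduction) * (1 - fy2007_reduction)
cumulative_reduction = 1 - remaining
print(f"Cumulative reduction over the two years: {cumulative_reduction:.1%}")
```

Under that assumption the two years together amount to roughly a 12.8 percent reduction, slightly less than the 13.2 percent a simple sum of the two percentages would suggest.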
As AOC moves forward with identifying and selecting projects that could decrease energy use and related emissions, it could further respond to our recommendation by developing a plan that identifies the potential benefits and costs of each option based on the results of energy audits. Such a plan could build on AOC's existing Sustainability Framework Plan and its Comprehensive Emissions Reduction Plan for the Capitol Complex, which identify measures that could lead to improvements in energy efficiency and reductions in greenhouse gas emissions. Complementing these plans with information on projects identified through energy audits would further assist AOC in using the resources devoted to energy-efficiency enhancements as effectively as possible.

The Senate has three primary options for decreasing greenhouse gas emissions and related environmental impacts associated with its operations: (1) implementing additional projects to decrease the demand for electricity and steam derived from fossil fuel; (2) adjusting the Capitol Power Plant's fuel mix to rely more heavily on natural gas, which produces smaller quantities of greenhouse gas emissions for each unit of energy input than the coal and oil also burned in the plant; and (3) purchasing renewable electricity or carbon offsets from external providers. Each option involves economic and environmental tradeoffs, and the first is likely to be the most cost-effective because its projects could lead to recurring cost savings through reductions in energy expenditures.

Regarding the first option, as we reported in April 2007, conducting energy audits would assist AOC in addressing the largest sources of emissions because the audits would help identify cost-effective energy-efficiency projects. In general, energy projects are deemed cost-effective if an energy audit determines that they will generate sufficient savings to pay for their capital costs.
These projects may require up-front capital investments that the Senate could finance through direct appropriations or through contracts with utility or energy service companies, under which the company initially pays for the work and the Senate later repays the company with the resulting savings. Until AOC exhausts its opportunities for identifying energy-efficiency projects that will pay for themselves over a reasonable time horizon, this option is likely to be more cost-effective than the other two options, both of which would involve recurring expenditures.

In pursuing energy audits, AOC faces a significant challenge in collecting reliable data on the baseline level of energy use within each Senate office building. Such data would help identify inefficient systems and provide a baseline against which AOC could measure potential or actual energy-efficiency improvements. First, while AOC has meters that track electricity use in each building, the meters that track the steam and chilled water used by each building no longer work or provide unreliable data. AOC officials said that they have purchased but not installed new meters to track the use of chilled water and are evaluating options for acquiring new steam meters. Installing these meters and collecting reliable data would enhance any effort to identify potential energy-saving measures. Second, AOC does not have submeters to track electricity use within different sections of each building. Such submetering would further assist in targeting aspects of the Senate buildings' operations that consume relatively high quantities of energy. AOC said that it plans to install submeters for electricity, chilled water, and steam by February 2009.

A second option for decreasing greenhouse gas emissions would involve directing AOC to further adjust the fuel mix at the Capitol Power Plant to rely more heavily on the combustion of natural gas in generating steam for space heating.
The plant currently produces steam using a combination of seven boilers—two that primarily burn coal but could also burn natural gas, and five that burn fuel oil or natural gas. The total capacity of these boilers is over 40 percent higher than the maximum capacity required at any given time, and the plant has the flexibility to switch among the three fuels or burn a combination of fuels. The percentage of energy input from each fuel has varied from year to year, with an average fuel mix of 43 percent natural gas, 47 percent coal, and 10 percent fuel oil between 2001 and 2007.

In June 2007, the Chief Administrative Officer of the House of Representatives released the Green the Capitol Initiative, which directed AOC to operate the plant with natural gas instead of coal to meet the needs of the House. The House Appropriations Committee subsequently directed GAO to determine the expected increase in natural gas use for House operations and the associated costs at the power plant that would result from the initiative. In May 2008, we reported that the fuel-switching directive should lead to a 38 percent increase in natural gas use over the average annual quantity consumed between 2001 and 2007. We also estimated that the fuel switching should cost about $1.4 million in fiscal year 2008, with a range of $1.0 million to $1.8 million depending on actual fuel costs, among other factors, and that costs would range from $4.7 million to $8.3 million over the 2008 through 2012 period, depending on fuel prices, the plant's output, and other factors. While we have not analyzed the potential costs of fuel switching at the Capitol Power Plant to meet the needs of the Senate, our estimates for the House may provide some indication of the potential costs of such a directive.
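Dividing the estimated annual fuel-switching cost by the annual emissions reduction discussed below yields the rough per-ton figure the testimony cites. An illustrative cross-check using the report's rounded numbers (not an independent cost model):

```python
# Cross-check of the May 2008 fuel-switching estimates using the report's
# rounded figures; small differences reflect rounding in the $1.4 million value.
annual_cost = 1.4e6        # estimated fiscal year 2008 fuel-switching cost, dollars
co2_avoided = 9_970        # estimated annual CO2 reduction, metric tons

cost_per_ton = annual_cost / co2_avoided
print(f"Implied cost per metric ton of CO2 avoided: ${cost_per_ton:.0f}")
```

This lands near the roughly $139 per ton average cost the May 2008 report cites.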
Additionally, AOC officials said that further directives to increase the plant's reliance on natural gas could require equipment upgrades and related capital expenditures. Our May 2008 report also found that decreasing the plant's reliance on coal could decrease greenhouse gas emissions by about 9,970 metric tons per year at an average cost of $139 per ton and could yield other environmental and health benefits by decreasing emissions of nitrogen oxides, particulate matter, and pollutants that cause acid rain. While fuel switching could decrease emissions of carbon dioxide and other harmful substances, it would also impose recurring costs because natural gas costs about four times as much as coal for an equal amount of energy input. Thus, fuel switching may prove less cost-effective than decreasing the demand for energy.

Finally, a third option for the Senate to decrease greenhouse gas emissions and related environmental impacts is purchasing electricity derived from renewable sources and paying external parties for carbon offsets. Neither of these activities would involve modifications to the Capitol Complex or its operations, but both could nonetheless lead to offsite reductions in emissions and related environmental impacts. Both options, if sustained, would result in recurring costs that should be considered in the context of other options for decreasing emissions that may prove more cost-effective.

Madam Chairman, this completes my prepared statement. I would be pleased to answer any questions that you or Members of the Committee may have.

For further information about this testimony, please contact Terrell Dorn at (202) 512-6923. Other key contributors to this testimony include Daniel Cain, Janice Ceperich, Elizabeth R. Eisenstadt, Michael Hix, Frank Rusco, and Sara Vermillion.
Senate Employees Child Care Center
CPP Storage (Butler) Building
East and West Underground Garages
Capitol Police Courier Acceptance Site
BG Production Facility (Greenhouse)
Construction Management Division
Supreme Court Building

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In April 2007, GAO reported that 96 percent of the greenhouse gas emissions from the Capitol Hill Complex facilities--managed by the Architect of the Capitol (AOC)--resulted from electricity use throughout the complex and combustion of fossil fuels in the Capitol Power Plant. The report concluded that AOC and other legislative branch agencies could benefit from conducting energy audits to identify projects that would reduce greenhouse gas emissions. GAO also recommended that AOC and the other agencies establish a schedule for conducting these audits and implement selected projects as part of an overall plan that considers cost-effectiveness, the extent to which the projects reduce emissions, and funding options. AOC and the other agencies agreed with the recommendations.

This statement focuses on (1) the status of AOC's efforts to implement the recommendations in GAO's April 2007 report and (2) opportunities for the Senate to decrease greenhouse gas emissions and associated environmental impacts. The statement is based on GAO's prior work, analysis of AOC documents, and discussions with AOC management.

AOC has made some progress toward implementing the recommendations in GAO's April 2007 report, but opportunities remain. For example, AOC has prioritized a list of Capitol Hill buildings that need energy audits but has not developed a schedule for conducting the audits that explains the prioritization scheme or provides information on the anticipated costs. AOC prioritized the order of energy audits based on each building's energy use and has begun conducting the first of the audits. In addition, AOC has contracted with a private firm to conduct preliminary audits of the Senate office buildings that could lead to more targeted audits and eventually identify cost-effective projects that would decrease energy use and related greenhouse gas emissions.
GAO believes that developing a more detailed schedule for future audits that includes an explanation of the prioritization scheme and cost estimates would assist the Congress in its appropriations decisions and facilitate the completion of additional audits. With respect to the recommendation that AOC implement selected projects as part of an overall plan to reduce emissions, AOC has implemented projects to reduce energy use and related emissions, but the projects were not identified through the processes GAO recommended. AOC could more fully respond to the recommendation by first completing the energy audits and then evaluating the cost-effectiveness and relative merits of projects that could further decrease the demand for energy.

The Senate's options for decreasing the greenhouse gas emissions and related environmental impacts associated with its operations fall into three main categories--implementing projects to decrease the demand for electricity and steam derived from fossil fuels, adjusting the Capitol Power Plant's fuel mix, and purchasing carbon offsets or renewable electricity from external providers. Of these options, efforts to decrease the demand for energy could lead to recurring cost savings through reductions in energy expenditures, while the other options may prove less cost-effective and involve recurring expenses. However, a key challenge in identifying energy-saving opportunities is the limited data on the baseline level of energy use within each Senate building. Specifically, the meters for steam and chilled water no longer function or do not provide reliable data. In addition, the buildings are not equipped with electricity submeters that, if installed, could enhance efforts to identify sections of the buildings that consume relatively high levels of energy. AOC has purchased but not installed new chilled water meters, is evaluating options for acquiring new steam meters, and plans to install submeters by February 2009.
Energy oversees a nationwide network of 40 contractor-operated industrial sites and research laboratories that have historically employed more than 600,000 workers in the production and testing of nuclear weapons. In implementing EEOICPA, the President acknowledged that it had been Energy’s past policy to encourage and assist its contractors in opposing workers’ claims for state workers’ compensation benefits based on illnesses said to be caused by exposure to toxic substances at Energy facilities. Under the new law, workers, or their survivors, could apply for assistance from Energy in pursuing state workers’ compensation benefits, and if they received a positive determination from Energy, the agency would direct its contractors to not contest the workers’ compensation claims or awards. Energy’s rules to implement the new program became effective in September 2002, and the agency began to process the applications it had been accepting since July 2001, when the law took effect. Energy’s claims process has several steps. First, claimants file applications and provide all available medical evidence. Energy then develops the claims by requesting records of employment, medical treatment, and exposure to toxic substances from the Energy facilities where the workers were employed. If Energy determines that the worker was not employed by one of its facilities or did not have an illness that could be caused by exposure to toxic substances, the agency finds the claimant ineligible. For all others, once development is complete, a panel of three physicians reviews the case and decides whether exposure to a toxic substance during employment at an Energy facility was at least as likely as not to have caused, contributed to, or aggravated the claimed medical condition. The panel physicians are appointed by the National Institute for Occupational Safety and Health (NIOSH) but paid by Energy for this work. 
Claimants receiving positive determinations are advised that they may wish to file claims for state workers’ compensation benefits. Claimants found ineligible or receiving negative determinations may appeal to Energy’s Office of Hearings and Appeals. Each of the 50 states and the District of Columbia has its own workers’ compensation program to provide benefits to workers who are injured on the job or contract a work-related illness. Benefits include medical treatment and cash payments that partially replace lost wages. Collectively, these state programs paid more than $46 billion in cash and medical benefits in 2001. In general, employers finance workers’ compensation programs. Depending on state law, employers finance these programs through one of three methods: (1) they pay insurance premiums to a private insurance carrier, (2) they contribute to a state workers’ compensation fund, or (3) they set funds aside for this purpose as self- insurance. Although state workers’ compensation laws were enacted in part as an attempt to avoid litigation over workplace accidents, the workers’ compensation process is still generally adversarial, with employers and their insurers tending to contest aspects of claims that they consider not valid. State workers’ compensation programs vary as to the level of benefits, length of payments, and time limits for filing. For example, in 1999, the maximum weekly benefit for a total disability in New Mexico was less than $400, while in Iowa it was approximately $950. In addition, in Idaho, the weekly benefit for total disability would be reduced after 52 weeks, while in Iowa benefits would continue at the original rate for the duration of the disability. Further, in Tennessee, a claim must be filed within 1 year of the beginning of incapacity or death. 
In contrast, in Kentucky, a claim must be filed within 3 years of either the last exposure to most substances or the onset of disease symptoms, but within 20 years of exposure to radiation or asbestos. EEOICPA allows Energy, to the extent permitted by law, to direct its contractors to not contest the workers’ compensation claims filed by Subtitle D claimants who receive a positive determination from a physician panel. In addition, the statute prohibits the inclusion of the costs of contesting such claims as allowable costs under its contracts with the contractors; however, Energy’s regulations allow the costs incurred as the result of a workers’ compensation award to be reimbursed in the manner permitted under the contracts. The Subtitle D program does not affect the normal operation of state workers’ compensation programs other than limiting the ability of Energy or its contractors to contest certain claims; Energy does not have authority to expand or contract the scope of any of these state programs. Thus, actions taken by Energy or its contractors will not make a worker eligible for compensation under a state workers’ compensation system if the worker is not otherwise eligible. As of December 31, 2003, Energy had completely processed about 6 percent of the more than 23,000 cases that had been filed, and the majority of all cases filed were associated with facilities in 9 states. Energy had begun processing nearly 35 percent of the cases, but processing had not begun on nearly 60 percent. The assessment of Energy’s achievement of case processing goals is complicated by systems limitations. Further, these limitations make it difficult to assess achievement of goals related to program objectives, such as the quality of the assistance given to claimants in filing for state workers’ compensation. During the first 2½ years of the program, ending December 31, 2003, Energy had fully processed about 6 percent of the more than 23,000 cases it received.
The majority of these fully processed cases had been found ineligible because they lacked either employment at an eligible facility or an illness related to toxic exposure. Of the cases that had been fully processed, 150 cases—less than 1 percent of the more than 23,000 cases filed—had received a final determination from a physician panel. More than half of these determinations (87 cases) were positive. As of the end of calendar year 2003, Energy had not yet begun processing nearly 60 percent of the cases, and an additional 35 percent of cases were in various stages of processing. As shown in figure 2, the majority of the cases being processed were in the case development stage, where Energy requests information from the facility at which the claimant was employed. About 2 percent of the cases in process were ready for physician panel review, and an additional 3 percent were undergoing panel review. A majority of all cases were filed early during program implementation, but new cases continue to be filed. More than half of all cases were filed within the first year of the program, between July 2001 and June 2002. However, between July 2002 and December 31, 2003, Energy continued to receive an average of more than 500 cases per month. Energy officials report that they continue to receive approximately 100 new cases per week. While cases filed are associated with facilities in 43 states or territories, the majority of cases are associated with Energy facilities in 9 states, as shown in figure 3. Facilities in Colorado, Idaho, Iowa, Kentucky, New Mexico, Ohio, South Carolina, Tennessee, and Washington account for more than 75 percent of cases received by December 31, 2003. The largest group of cases is associated with facilities in Tennessee. Workers filed the majority of cases, and cancer is the most frequently reported illness. Workers filed more than 60 percent of cases, and survivors of deceased workers filed about 36 percent of cases.
In 2 percent of the cases, a worker filed a claim that was subsequently taken up by a survivor. Cancer is the illness reported in nearly 60 percent of the cases. Diseases affecting the lungs accounted for an additional 15 percent of the cases. Specifically, chronic beryllium disease and/or beryllium sensitivity were reported in 7 percent of the cases, asbestosis in 8 percent, and chronic silicosis in less than 1 percent. Insufficient strategic planning regarding system design, data collection, and tracking of outcomes has made it more difficult for Energy officials to manage some aspects of the program and for those with oversight responsibilities to determine whether Energy is meeting goals for processing claims. The data system Energy uses to aid in case management was developed by contractors without detailed specifications from Energy. Furthermore, the system was developed before Energy established its processing goals and did not collect sufficient information to track Energy’s progress in meeting these goals. While recent changes to the system have improved Energy’s ability to track certain information, these changes mean that some recent status data are not completely comparable with older status data. In addition, Energy will be unable to completely track the timeliness of its processing for approximately one-third of the cases that were being processed as of December 2003 because key data are not complete. For example, Energy established a goal of completing case development within 120 days of case assignment to a case manager. At least 70 percent of the cases for which case development was complete were missing dates corresponding to either the beginning or the end of the case development process—data that would allow Energy officials to compute the time elapsed during case development. Energy has not been sufficiently strategic in identifying and systematically collecting certain data that are useful for program management.
For instance, Energy does not track the reasons why particular cases were found ineligible in a format that can be easily analyzed. Systematic tracking of the reasons for ineligibility would make it possible to quickly identify cases affected by policy changes. For example, when a facility in West Virginia was determined to be only a Department of Energy facility and not also an atomic weapons employer, it was necessary for Energy to identify which cases had been ruled ineligible because of employment at the West Virginia facility. While some ineligibility information may be stored in case narratives, this information is not available in a format that would allow the agency to quickly identify cases declared ineligible for similar reasons. Ascertaining the reason for ineligibility would at best require review of individual case narratives, and indeed, Energy officials report that it is sometimes necessary to refer back to application forms to find the reasons. As a result, if additional changes are made to eligibility criteria, Energy may have to expend considerable time and resources determining which cases are affected by the change in policy. In addition, because it did not adequately plan for the various uses of its data, Energy lacks some of the data needed to analyze how cases will fare when they enter the state workers’ compensation systems. Specifically, it is difficult for Energy to predict from case management system data whether willing payers of workers’ compensation benefits will exist, because information about the specific employer for whom the claimant worked is not collected in a format that can be systematically analyzed. In addition, basic demographic data such as employees’ ages are not necessarily accurate due to insufficient edit controls—for example, error checking that would prevent an employee’s date of birth from being entered if the date was in the future or the recent past.
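The two data-quality gaps described here, free-text ineligibility reasons and unvalidated dates of birth, are the kind that coded fields and simple edit controls address. A minimal sketch in Python; the reason codes, field names, and age threshold are hypothetical illustrations, not Energy's actual case management schema:

```python
from datetime import date
from typing import Optional

# Hypothetical coded ineligibility reasons; a coded field (rather than free
# text buried in a case narrative) lets affected cases be found by filtering
# when eligibility policy changes. Codes and field names are illustrative.
INELIGIBILITY_CODES = {
    "NOT_COVERED_FACILITY": "No employment at a covered Energy facility",
    "NO_COVERED_ILLNESS": "No illness potentially caused by toxic exposure",
}

def validate_birth_date(dob: date, today: Optional[date] = None) -> bool:
    """Edit control: reject dates of birth in the future or the recent past.

    The 14-year minimum working age is an assumption for illustration.
    """
    today = today or date.today()
    if dob >= today:
        return False                      # birth date cannot be in the future
    age_years = (today - dob).days / 365.25
    return age_years >= 14                # reject implausibly recent dates

# With coded reasons, re-identifying cases after a policy change is a filter,
# not a manual review of individual case narratives:
cases = [
    {"id": 1, "facility": "WV-01", "ineligible": "NOT_COVERED_FACILITY"},
    {"id": 2, "facility": "TN-03", "ineligible": "NO_COVERED_ILLNESS"},
]
affected = [c for c in cases if c["ineligible"] == "NOT_COVERED_FACILITY"]
```

The design point is that both controls are cheap at data entry but expensive to retrofit: once reasons live only in narratives, every policy change forces a case-by-case review.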
Reliable age data would allow Energy to estimate the proportion of workers who are likely to have health insurance such as Medicare. Insufficient tracking of program outcomes hampers Energy’s ability to determine how well it is providing assistance to claimants in filing claims for state workers’ compensation benefits. Energy has not so far systematically tracked whether claimants subsequently file workers’ compensation claims or the decisions on those claims, although agency officials recently indicated that they now plan to develop this capability. In particular, Energy does not systematically track whether claimants who receive positive physician panel determinations file workers’ compensation claims, or whether the claims that are filed are approved or paid. Furthermore, unless Energy’s Office of Hearings and Appeals grants an appeal of a negative determination, which is returned to Energy for further processing, Energy does not track whether a claimant files an appeal. Lack of information about the number of appeals and their outcomes may limit Energy’s ability to assess the quality and consistency of its decision making. Energy was slow in implementing its initial case processing operation, but it is now processing enough cases so that there is a backlog of cases awaiting physician panel review. With panels operating at full capacity, the small pool of physicians qualified to serve on the panels may ultimately limit the agency’s ability to produce more timely determinations. Claimants have experienced lengthy delays in receiving the determinations they need to file workers’ compensation claims and have received little information about claims status as well as what they can expect from this process. Energy has taken some steps intended to reduce the backlog of cases.
Energy’s case development process has not always produced enough cases to ensure that the physician panels were functioning at full capacity, but the agency is now processing enough cases to produce a backlog of cases waiting for panel review. Energy officials established a goal of completing the development of 100 cases per week by August 2003 to keep the panels fully engaged. However, the agency did not achieve this goal until several months later. Energy was slow to implement its case development operation. Initially, agency officials did not have a plan to hire a specific number of employees for case development, but they expected to secure additional staff as they were needed. When Energy first began developing cases, in the fall of 2002, the case development process had about 8 case managers. With modest staffing increases, the program quickly outgrew the office space used for this function. Though Energy officials acknowledged the need for more personnel by spring 2003, they delayed hiring until additional space could be secured in August. By November 2003, Energy had more than tripled the number of case managers developing cases, and since that month the agency has continued to process an average of more than 100 cases per week to have them ready for physician panel review. Energy transferred nearly $10 million in fiscal year 2003 funds into this program from other Energy accounts. Further, after completing a comprehensive review of its Subtitle D program, the agency developed a plan that identifies strategies for further accelerating its case processing. This plan sets a goal of eliminating the entire case backlog by the end of calendar year 2006 and depends in part on shifting an additional $33 million into the program in fiscal year 2004 to quadruple the case-processing operation. With additional resources, Energy plans to complete the development of all pending cases as quickly as possible and have them ready for the physician panels.
However, this could create a larger backlog of cases awaiting review by physician panels. Because a majority of the claims filed so far are from workers whose medical conditions are likely to change over time, building this backlog could further slow the decision process by making it necessary to update medical records before panel review. Even though additional resources have allowed Energy to speed initial case development, the limited pool of qualified physicians for panels may limit Energy’s capacity to decide cases more quickly. Under the rules Energy originally established for this program, which required that each case be reviewed by a panel of 3 physicians, and given the 130 physicians currently available, it could have taken more than 13 years to process all cases pending as of December 31, 2003, without consideration of the hundreds of new cases the agency is receiving each month. However, in an effort to make the panel process more efficient, Energy published new rules on March 24, 2004, that redefined a physician panel as one or more physicians appointed to evaluate these cases and changed the timeframes for completing their review. Under the new rule, a panel composed of a single physician will initially review each case, and if a positive determination is issued, no further review is necessary. Negative determinations made by single-physician panels will require review by one or more additional single-physician panels. In addition to revising its rules, the agency began convening a full-time physician panel in Washington, D.C., in January 2004, staffed by physicians who are willing to serve full-time for a 2- or 3-week period. Energy and NIOSH officials have taken steps to expand the number of physicians who would qualify to serve on the panels and to recruit more physicians, including some willing to work full-time.
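The 13-year estimate can be checked with back-of-envelope arithmetic. A sketch assuming the figures in the text (130 physicians, roughly 94 percent of the more than 23,000 cases still pending); the per-panel completion rate is an assumed value chosen only to be consistent with the text's estimate, not an Energy statistic:

```python
# Back-of-envelope check of the physician-panel capacity constraint.
pending_cases = 23_000 * 0.94        # ~94% of cases not fully processed
physicians = 130
cases_per_panel_per_year = 38        # assumed rate, for illustration only

# Original rule: each case reviewed by a panel of 3 physicians.
three_person_panels = physicians // 3
years_three_person = pending_cases / (three_person_panels * cases_per_panel_per_year)

# March 2004 rule: a single physician reviews each case first, so up to
# 130 panels can sit at once (negative determinations still need re-review,
# so the real speedup is smaller than this upper bound).
years_single = pending_cases / (physicians * cases_per_panel_per_year)

print(f"3-physician panels: {years_three_person:.1f} years; "
      f"single-physician panels: {years_single:.1f} years")
```

Even under this optimistic upper bound, the several hundred new cases filed each month would extend either timeline.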
While Energy has made several requests that NIOSH appoint additional physicians to staff the panels, such as requesting 500 physicians in June 2003, NIOSH officials have indicated that the pool of physicians with the appropriate credentials and experience is limited. The criteria NIOSH originally used to evaluate qualifications for appointing physicians to these panels included: (1) board certification in a primary discipline; (2) knowledge of occupational medicine; (3) minimum of 5 years of relevant clinical practice following residency; and (4) reputation for good medical judgment, impartiality, and efficiency. NIOSH recently modified these qualifications, primarily to reduce the amount of required clinical experience so that physicians with experience in relevant clinical or public health practice or research, academic, consulting, or private sector work can now qualify to serve on the panels. NIOSH has revised its recruiting materials to reflect this change and to point out that Energy is also interested in physicians willing to serve on panels full-time. However, a NIOSH official said that he was uncertain about the effect of the change in qualifications on the number of available physicians. In addition, the official indicated that only a handful of physicians would likely be interested in serving full-time on the panels. Energy officials have also explored additional sources from which NIOSH might recruit qualified physicians, but they have expressed concerns that the current statutory cap on the rate of pay for panel physicians may limit the willingness of physicians from these sources to serve on the panels. For example, Energy officials have suggested that physicians in the military services might be used on a part-time basis, but the rate of pay for their military work exceeds the current cap. Similarly, physicians from the Public Health Service could serve on temporary full-time details as panel physicians. 
To elevate the rate of pay for panel physicians to a level that is consistent with the rate physicians from these sources normally receive, Energy officials recently submitted to the Congress a legislative proposal to eliminate the current cap on the rate of pay and also expand Energy’s hiring authority. Panel physicians have also suggested methods to Energy for improving the efficiency of the panels. For example, some physicians have said that more complete profiles of the types and locations of specific toxic substances at each facility would speed their ability to decide cases. While Energy officials reported that they have completed facility overviews for most of the major sites, specific site reference data are available for only a few sites. Energy officials told us that, in their view, the available information is sufficient for decision making by the panels. However, based on feedback from the physicians, Energy officials are exploring whether developing additional site information would be cost beneficial. Energy has not always provided claimants with complete and timely information about what they could expect from filing under this program. Energy officials concede that claimants who filed in the early days of the program may not have been provided enough information to understand the benefits they were filing for. As a consequence, some claimants who filed under both Subtitle B and Subtitle D early in the program later withdrew their claims under Subtitle D because they had intended to file only for Subtitle B benefits or because they had not understood that they would still have to file for state workers’ compensation benefits after receiving a positive determination from a physician panel. After the final regulations were published in August 2002, Energy officials stated that claimants had a better understanding of the benefits for which they were applying.
Energy has not kept claimants sufficiently informed about the status of their claims under Subtitle D. Until recently, Energy’s policy was to provide no written communication about claims status between the acknowledgment letters it sent shortly after receiving applications and the point at which it began to process claims. Since nearly half of the claims filed in the first year of the program remained unprocessed as of December 31, 2003, these claimants would have received no information about the status of their claims for more than 1 year. Energy recently decided to change this policy and provide letters at 6-month intervals to all claimants with pending claims. Although the first of these standardized letters, sent to claimants in October 2003, did not provide information about individual claims status, it did inform claimants about a new service on the program’s redesigned Web site through which claimants can check on the status of their claims. However, this new capability does not tell claimants when their claims are likely to be processed, and claimants would need to check back periodically to determine whether the status of a claim has changed. In addition, claimants may not receive sufficient information about what they are likely to encounter when they file for state workers’ compensation benefits. For example, Energy’s letter to claimants transmitting a positive determination from a physician panel does not always provide enough information about how to file for state workers’ compensation benefits. A contractor in Tennessee reported that a worker was directed by an Energy letter received in September 2003 to file a claim with the state office in Nashville when Tennessee’s rules require that the claim be filed with the employer.
The contractor reported the problem to Energy in the same month, but Energy letters sent to Tennessee claimants in October and December 2003 continued to direct claimants to the state office. Finally, claimants are not informed as to whether there is likely to be a willing payer of workers’ compensation benefits and what this means for the processing of their claims. Specifically, advocates for claimants have indicated that claimants may be unprepared for the adversarial nature of the workers’ compensation process when an insurer or state fund contests the claim. Energy officials recently indicated that they plan to test initiatives to improve communication with claimants. Specifically, they plan to conduct a test at one Resource Center that would provide claimants with additional information about the workers’ compensation process and advice on how to proceed after receiving a positive physician panel determination. In addition, they plan to begin contacting individuals with pending claims this summer to provide information on the status of their claims. Our analysis shows that a majority of cases associated with Energy facilities in 9 states that account for more than three-quarters of all Subtitle D cases filed are not likely to be contested. However, the remaining 20 percent of cases lack willing payers and are likely to be contested. These percentages provide an order of magnitude estimate of the extent to which claimants will have willing payers and are not a prediction of actual benefit outcomes for claimants. The majority of cases associated with major Energy facilities in 9 states are likely to face no challenges to their claims for state workers’ compensation benefits. Specifically, based on analysis of workers’ compensation programs and the different types of workers’ compensation coverage used by the major contractors, it appears that slightly more than half of the cases will potentially have a willing payer.
In these cases, self-insured contractors will not contest the claims for benefits, as ordered by Energy. Another 25 percent of the cases, while not technically having a willing payer, have workers’ compensation coverage provided by an insurer that has stated that it will not contest these claims and is currently processing several workers’ compensation claims without contesting them. The remaining 20 percent of cases in the 9 states we analyzed are likely to be contested. Because of data limitations, these percentages provide an order of magnitude estimate of the extent to which claimants will have willing payers. The estimates are not a prediction of actual benefit outcomes for claimants. As shown in table 1, the contractors for four major facilities in these states are self-insured, and they will adhere to Energy’s direction to not contest claims that receive a positive physician panel determination. In such situations where there is a willing payer, the contractor’s compliance with Energy’s order to not contest a claim could result in payment of a claim that might otherwise have been denied, for reasons such as failure to file within a specified period of time. Similarly, the informal agreement by the commercial insurer with the contractors at the two facilities that constitute 25 percent of the cases to pay the workers’ compensation claims will more likely result in payment, despite potential grounds to contest under state law. However, since this insurer is not bound by Energy’s orders and it does not have a formal agreement with either Energy or the contractors to not contest these claims, there is nothing to guarantee that the insurer will continue to process claims in this manner. About 20 percent of cases in the 9 states we analyzed are likely to be contested.
Therefore, in some instances, these cases may be less likely to receive compensation than a comparable case for which there is a willing payer, unless the claimant is able to overcome challenges to the claim. In addition, contested cases can take longer to be resolved. For example, one claimant whose claim is being contested by an insurer was told by her attorney that because of pretrial motions filed by the opposing attorney, it would be 2 years before her case was heard on its merits. Specifically, the cases that lack willing payers involve contractors that (1) have a commercial insurance policy, (2) use a state fund to pay workers’ compensation claims, or (3) do not have a current contract with Energy. In each of these situations, Energy maintains that its orders to contractors would have a limited effect. For instance, an Ohio Bureau of Workers’ Compensation official said that the state would not automatically approve a case with a positive physician panel determination, but would evaluate each workers’ compensation case carefully to ensure that it was valid and thereby protect its state fund. Furthermore, although the contractor in Colorado with a commercial policy attempted to enter into agreements with prior contractors and their insurers to not contest claims, the parties have not yet agreed and several workers’ compensation claims filed with the state program are currently being contested. These estimates could change as better data become available or as circumstances change, such as new contractors taking over at individual facilities. For example, the contractor currently performing environmental cleanup at the Paducah Gaseous Diffusion Plant will not re-compete for this work when its contract ends on September 30, 2004. 
Energy is soliciting proposals for a new contract to continue the cleanup work and has indicated that the new contractors will not be required to take on the responsibility for the workers’ compensation claims filed by employees of former contractors. While Energy has proposed that the current cleanup contractor continue to handle the claims of its employees and those of prior contractors under another of its contracts with the agency, it is unclear at this point whether the current contractor will be able to arrange for continuing coverage of these claims without securing workers’ compensation coverage through commercial insurance. Unless the current contractor can continue to self-insure its workers’ compensation coverage for these claims, the Paducah cases shown in table 1 would have to be moved to the category in which contests are likely. As a result of this single change in contractors, the proportion of cases for which contests are likely could increase from 20 to 33 percent.
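The arithmetic behind the willing-payer shares and the Paducah shift can be laid out explicitly. In this sketch the 55 percent figure is an assumption standing in for "slightly more than half"; the other shares come from the text:

```python
# Willing-payer mix in the 9 states analyzed.
self_insured = 0.55      # self-insured contractors ordered not to contest (assumed 55%)
insurer_informal = 0.25  # commercial insurer informally agreeing not to contest
contested = 0.20         # no willing payer; contests likely

assert abs(self_insured + insurer_informal + contested - 1.0) < 1e-9

# If the Paducah cases move to the contested category, the contested share
# rises from 20 to 33 percent, implying that Paducah accounts for roughly
# 13 percent of the cases in the 9 states analyzed.
contested_after = 0.33
paducah_share = contested_after - contested
print(f"Implied Paducah share: {paducah_share:.0%}")
```

The point of the arithmetic is how sensitive the contested share is to a single contractor change: one facility's loss of self-insured coverage moves more than a tenth of the caseload.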
Contractors’ uncertainty about how to compute compensation may also cause variation in whether, and how much, compensation a claimant will receive. While contractors with self-insurance told us that they plan to comply with Energy’s directives to not contest cases with positive determinations, some contractors were unclear about how to actually determine the amount of compensation that a claimant will receive. For example, one contractor raised a concern that no guidance exists to inform contractors about whether they can negotiate the degree of disability, a factor that could affect the amount of the workers’ compensation benefit. Other contractors will likely experience similar situations, as Energy has not issued written guidance on how to consistently compute compensation amounts. While not directly affecting compensation amounts, a related issue involves how contractors will be reimbursed for claims they pay. Energy uses several different types of contracts to carry out its mission, such as operations or cleanup, and these different types of contracts affect how workers’ compensation claims will be paid. For example, a contractor responsible for managing and operating an Energy facility was told to pay the workers’ compensation claims from its current operating budget. The contractor said that this procedure may compromise its ability to conduct its primary responsibilities. On the other hand, a contractor cleaning up an Energy facility under a cost reimbursement contract was told by Energy officials that its workers’ compensation claims would be reimbursed and, therefore, paying claims would not affect its ability to perform cleanup of the site. Various options are available to improve payment outcomes for the cases that receive a positive determination from Energy but lack willing payers under the current program.
If it chooses to change the current program, Congress would need to examine these options in terms of several issues, including the source, method, and amount of the federal funding required to pay benefits; the length of time needed to implement changes; the criteria for determining who is eligible; and the equitable treatment of claimants. In particular, the cost implications of these options for the federal government should be carefully considered in the context of the current and projected federal fiscal environment. We identified four possible options for improving the likelihood of willing payers, some of which have been offered in proposed legislation. While not exhaustive, the options range from adding a federal benefit to the existing program for cases that lack a willing payer to addressing the willing payer issue as part of designing a new program that would allow policymakers to decide issues such as the eligibility criteria and the type and amount of benefits without being encumbered by existing program structures. A key difference among the options is the type of benefit that would be provided. Option 1—State workers’ compensation with federal backup. This option would retain the state workers’ compensation structure of the current Subtitle D program but add a federal benefit for cases that receive a positive physician panel determination but lack a willing payer of state workers’ compensation benefits. For example, claims involving employees of current contractors that self-insure for workers’ compensation coverage would continue to be processed through the state programs. However, claims without willing payers, such as those involving contractors that use commercial insurers or state funds likely to contest workers’ compensation claims, could be paid a federal benefit that approximates the amount that would have been received under the relevant state program. Option 2—Federal workers’ compensation model. 
This option would move the administration of the Subtitle D benefit from the state programs entirely to the federal arena, but would retain the workers’ compensation concept for providing partial replacement of lost wages as well as medical benefits. For example, claims with positive physician panel determinations could be evaluated under the eligibility criteria of the Federal Employees’ Compensation Act and, if found eligible, could be paid benefits consistent with the criteria of that program. Option 3—Expanded Subtitle B program that does not use a workers’ compensation model. Under this option, the current Subtitle B program would be expanded to include the other illnesses resulting from radiation and toxic exposures that are currently considered under the Subtitle D program. The Subtitle D program would be eliminated as a separate program and, if found eligible, claimants would receive a lump-sum payment and coverage of future medical expenses related to the workers’ illnesses, assuming they had not already received benefits under Subtitle B. The Department of Labor would need to expand its regulations to specify which illnesses would be covered and the criteria for establishing eligibility for each of these illnesses. In addition, since the current programs have differing standards for determining whether the worker’s illness was related to his or her employment, it would have to be decided which standard would be used for the new category of illnesses. Option 4—New federal program that uses a different type of benefit structure. This option would address the willing payer issue as part of developing a new program that involves moving away from the workers’ compensation and Subtitle B structures and establishing a new federal benefit administered by a structure that conforms to the type of the benefit and its eligibility criteria. This option would provide an opportunity to consider anew the purpose of the Subtitle D provisions. 
As a starting point, policymakers could consider different existing models such as the Radiation Exposure Compensation Act, designed to provide partial restitution to individuals whose health was put at risk because of their exposure even when their illnesses do not result in ongoing disability. But they could also choose to build an entirely new program that is not based on any existing model. In deciding whether and how to change the Subtitle D program to ensure a source of benefit payments for claims that would be found eligible if they had a willing payer, policymakers will need to consider the trade-offs involved. Table 2 arrays the relevant issues to provide a framework for evaluating the range of options in a logical sequence. We have constructed the sequence of issues in this framework so that the purpose and type of benefit are the focal point for the evaluation, with consideration of the other issues flowing from that first decision. For example, decisions about eligibility criteria would need to consider issues relating to within-state and across-state equity for Subtitle D claimants. The framework would also provide for decisions on issues such as the method of federal funding—trust fund or increased appropriations—and the appropriate federal agency to administer the benefit. For each of the options, the type of benefit would suggest which agency should administer it; that choice would also depend, in part, on an agency’s capacity to administer a benefit program. In examining these issues, the effects on federal costs would have to be carefully considered. Ultimately, policymakers will need to weigh the relative importance of these issues in deciding whether and how to proceed. In evaluating how the purpose and type of benefit now available under Subtitle D could be changed, policymakers would first need to focus on the goals they wish to achieve in providing compensation to this group of individuals. 
If the goal is to compensate only those individuals who can demonstrate lost wages because of their illnesses, a recurring cash benefit in an amount that relates to former earnings might be in order, and a workers’ compensation option, either state benefits with a federal backup or a federal workers’ compensation benefit, would promote this purpose. If, on the other hand, the goal is to compensate claimants for all cases in which workers were disabled because of their employment—even when workers continue to work and have not lost wages—the option to expand Subtitle B would allow a benefit such as a flat payment amount not tied to former earnings. For consideration of a new federal program option, it might also be useful to consider other federal programs dealing with the consequences of exposure to radiation as a starting point. For example, the Radiation Exposure Compensation Act was designed to provide partial restitution to individuals whose health was put at risk because of their exposure. Similar to Subtitle B, the act created a federal trust fund, which provides for payments to individuals who can establish that they have certain diseases and that they were exposed to radiation at certain locations and at specified times. However, this payment is not dependent on demonstrating ongoing disability or actual losses resulting from the disease. The options could also have different effects with respect to eligibility criteria and the equity of benefit outcomes for current Subtitle D claimants based on these criteria. By equity of outcomes, we mean that claimants with similar illnesses and circumstances receive similar benefit outcomes. The current program may not provide equity for all Subtitle D claimants within a state because a claim that has a willing payer could receive a different outcome than a similar claim that does not have a willing payer, but at least three of the options could provide within-state equity. 
With respect to across-state equity, the current program and the option to provide a federal backup to the state workers’ compensation programs would not achieve equity for Subtitle D claimants in different states. In contrast, the option based on a federal workers’ compensation model as well as the expanded Subtitle B option would be more successful in achieving across-state equity. Regardless of the option, changes made to Subtitle D could also potentially result in differing treatment of claims decided before and after the implementation of the change. In addition, changing the program to remove the assistance in filing workers’ compensation claims may be seen as depriving a claimant of an existing right. Further, any changes could also have implications beyond EEOICPA, to the extent that the changes to Subtitle D could establish precedents for federal compensation to private sector employees in other industries who were made ill by their employment. Effects on federal costs would depend on the generosity of the benefit in the option chosen and the procedures established for processing claims for benefits. Under the current program, workers’ compensation benefits that are paid without contest will come from contract dollars that ultimately come from federal sources—there is no specific federal appropriation for this purpose. Because all of the options are designed to improve the likelihood of payment for claimants who meet all other criteria, it is likely that federal costs would be higher for all options than under the current program. Specifically, federal costs would increase for the option to provide a federal backup to the state workers’ compensation program because it would ensure payment at rates similar to the state programs for the significant minority of claimants whose claims are likely to be contested and possibly denied under the state programs. 
Further, the federal costs of adopting a federal workers’ compensation option would be higher than under the first option because all claimants—those who would have been paid under the state programs as well as those whose claims would have been contested under the state programs—would be eligible for a federal benefit similar to the benefit for federal employees. In general, federal workers’ compensation benefits are more generous than state benefits because they replace a higher proportion of the worker’s salary than many states and the federal maximum rate of wage replacement is higher than all the state maximum rates. For either of the two options mentioned earlier, a decision to offset the Subtitle D benefits against the Subtitle B benefit could lessen the effect of the increased costs, given reports by Energy officials that more than 90 percent of Subtitle D claimants have also filed for Subtitle B benefits. However, the degree of this effect is difficult to determine because many of the claimants who have filed under both programs may be denied Subtitle B benefits. The key distinction would be whether workers who sustained certain types of illnesses because of their Energy employment should be compensated under both programs or have recourse under only one or the other. If they were able to seek compensation from only one program, the claimant’s ability to elect one or the other based on individual needs should be considered. The effects on federal cost of an expanded Subtitle B option or a new federal program option are more difficult to assess. In many cases, the Subtitle B benefit of up to $150,000 could exceed the cost of the lifetime benefit for some claimants under either of the workers’ compensation options, resulting in higher federal costs. 
However, the extent of these higher costs could be mitigated by the fact that many of the claimants who would have filed for both benefits in the current system would be eligible for only one cash benefit regardless of the number or type of illnesses. The degree of cost or savings would be difficult to assess without additional information on the specific claims outcomes in the current Subtitle B program. The effects on federal costs for the new federal program option would depend on the type and generosity of the benefit selected. More than 3 years after the passage of EEOICPA, few claimants have received state workers’ compensation benefits as a result of assistance provided by Energy. While Energy has eliminated the bottleneck in its claims process that it encountered early in program implementation—the initial development of cases—in doing so it has created a growing backlog of cases awaiting review by a physician panel. In the absence of changes that would expedite this review, many claimants will likely wait years to receive the determination they need to pursue a state workers’ compensation claim. In the interim, their medical conditions may worsen, and claimants may even die before they receive consideration by a state program. While Energy has taken some steps designed to reduce the backlog of cases for the physician panels, it is too early to assess whether these initiatives will be sufficient to resolve this growing backlog. Whether they ultimately receive positive or negative determinations, claimants deserve complete and timely information about what they could achieve in filing under this program, what the claims process entails, the status of their claims, and what they are likely to encounter when they file for state workers’ compensation benefits. Without complete information, claimants are unable to weigh the benefits and risks of pursuing the process to its conclusion. 
Indeed, given that the majority of claimants have also filed for benefits under Subtitle B and many may have already received decisions on those claims, some claimants may not be aware that they still have a Subtitle D claim pending. Further, given the limited communication from Energy since their claims were filed, some claimants may be unaware that resources are being expended developing their claims. Finally, because Energy does not currently communicate to claimants what they are likely to encounter when they file for state benefits, claimants may be unprepared for what may be a difficult and protracted pursuit of state benefits. Given the data it currently collects, Energy may be hindered in its ability to improve its claims process and to evaluate the quality of the assistance it is providing to claimants in this program. Energy may also be unprepared to provide the analysis needed to inform policymakers as they consider whether changes to the program are needed because it does not systematically track the outcomes of cases that are appealed or the outcomes of claims that are filed with state workers’ compensation programs. Finally, Energy will be limited in its ability to provide complete and accurate information to claimants regarding the status and outcomes of their claims without good data. Even if all claimants were to receive timely physician panel determinations stating that the workers’ illnesses had likely been caused by their employment with Energy, some may never receive state workers’ compensation benefits. The lack of a willing payer may delay the receipt of benefits for some claimants as insurers and state fund officials challenge various aspects of the claim. For other claimants, the challenges raised in the absence of willing payers may ultimately result in denial of benefits based on issues such as not filing the claim within the time limits set by the state program—issues that would not be contested by willing payers. 
This disparity in potential outcomes for Subtitle D claimants may warrant the consideration of changes to the current program to ensure that eligible claims are paid without undue delay and that there is a willing payer for all claimants who would otherwise be eligible. To improve Energy’s effectiveness in assisting Subtitle D claimants in obtaining compensation for occupational illnesses, we recommend that the Secretary of Energy:

- in order to reduce the backlog of cases waiting for review by a physician panel, take additional steps to expedite the processing of claims through its physician panels and focus its efforts on initiatives designed to allow the panels to function more efficiently. For example, Energy should pursue the completion of site reference data to provide physicians with more complete information about the type and degree of toxic exposures that may have occurred at each Energy facility.

- in order to provide claimants with more complete information, expand and expedite its plans to enhance communications with claimants. These plans should focus on providing more complete information describing the assistance Energy will provide to claimants, the timeframes for claims processing, the status of claims, and the process that claimants will encounter when they file claims for state workers’ compensation benefits.

- in order to facilitate program management and oversight, develop cost-effective methods for improving the quality of the data in its case management system and increasing its capabilities to aggregate these data to address program issues. In addition, Energy should develop and implement plans to track the outcomes of cases that progress through the state workers’ compensation systems and use this information to evaluate the quality of the assistance it provides to claimants in the Subtitle D program. Such data could also be used by policymakers to assess the extent to which this program is achieving its goals and purposes.

- in order to reduce disparities in potential outcomes between claimants with and without willing payers, consider developing a legislative proposal for modifying the EEOICPA statute to address the willing payer issue. When assessing different options, several issues such as those discussed in this report should be considered, including the purpose and type of benefit, eligibility criteria and equity of benefit outcomes, and effects on federal costs.

We provided a draft of this report to Energy for comment. In commenting on the draft report, Energy indicated that the agency had already incorporated several of our recommendations and will aggressively tackle the remainder. However, Energy did not specifically comment on each recommendation. In addition, the comments highlighted several initiatives either planned or underway that are designed to improve the Subtitle D program. Several of these initiatives address issues raised in our report for which we recommended changes. In particular, Energy agreed with our findings regarding problems with communications with Subtitle D claimants and outlined the steps the agency has planned to correct these problems. Further, Energy agreed with our finding that there was not a system in place to track the outcomes of workers’ compensation claims filed with the state programs and indicated that the agency has recently initiated such a system, as we recommended. Finally, the comments provide more recent information about the agency’s progress in processing Subtitle D claims and reiterate the agency’s plan for eliminating the backlog of claims by 2006. Energy’s comments are provided in appendix II. Energy also provided technical comments, which we have incorporated as appropriate. Copies of this report are being sent to the Secretary of Energy, appropriate congressional committees, and other interested parties. The report will also be made available at no charge on GAO’s Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix III. To determine the number of cases filed under Subtitle D, the status of these cases, and the characteristics of claimants, we used administrative data from Energy’s Case Management System (CMS). Energy does not publish standardized data extracts from this system, so we requested that Energy query the system to provide customized extracts for our analysis. The first extract contained data on the status and characteristics of cases filed between July 2001 and June 30, 2003. The second extract was obtained as an update and contained data related to cases filed between July 2001 and December 31, 2003. Because multiple claims can be associated with a single case, Energy’s system contains data at two levels—the case level and the claim level. For example, if both the widow and child of a deceased Energy employee file claims, both claims will be associated with a single case, which is linked to the Energy employee. At the case level, the system contains information about the Energy employee, such as date of birth and date of death (if applicable), the facilities at which the employee worked and the dates of employment, and the status of the case as it moves through the development process in preparation for physician panel review. At the claim level, CMS contains information related to the individual claimants, such as the date the claim was signed and the claimant’s relationship to the Energy employee. The extracts provided by Energy contain case-level data, for the most part. Data elements that are collected at the claim level were reported at the case level in our files. For example, the system includes a claim signature date for each claim. In our case-level file, Energy provided the earliest signature date, so that we would know when the first claim was signed. Illness data are also collected at the claim level. 
In our case-level file, Energy provided all the illnesses claimed by all claimants. We then aggregated the illness data to determine which illnesses were claimed on each case. We interviewed key Energy officials and contractors and reviewed available system documentation, such as design specifications and system update documents. Once the first data extract was received from Energy, we tested the data set to determine that it was sufficiently reliable for our purposes. Specifically, we performed electronic testing to identify missing data or logical inconsistencies and reviewed determination letters for cases that had physician panel determinations. We then computed descriptive statistics, including frequencies and cross-tabulations, to determine the number and status of cases received as of June 30, 2003. When we received the second data extract, containing data through the end of calendar year 2003, we matched it to the first one to determine how many additional cases had been received between July 1, 2003, and December 31, 2003, and to determine if any cases were missing. We determined that some cases (less than 2 percent) that had been in the first extract were missing from the second file. We consulted with Energy contractors and determined that one case had been accidentally omitted from the query results and that the remaining cases had been dropped from CMS because they were duplicate cases or had been determined to be non-Subtitle D cases. This is possible because the Resource Centers use the CMS system to document incoming cases for both Subtitle B and Subtitle D. Energy contractors provided a replacement file that included the case that had been inadvertently dropped. They also reported that there were still a small number of duplicate cases identified in CMS, and hence in our data extract, but that Energy had not yet decided which cases to retain. 
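As an illustration, the extract-matching step described above could be carried out along the following lines. This is only a sketch: the column name `case_id` and the sample values are hypothetical and do not reflect Energy’s actual CMS schema.

```python
import pandas as pd

# Hypothetical case IDs from the first extract (cases filed through
# June 30, 2003) and the second extract (through December 31, 2003).
first = pd.DataFrame({"case_id": [101, 102, 103, 104]})
second = pd.DataFrame({"case_id": [101, 102, 104, 105, 106]})

# Cases present only in the second extract were filed between
# July 1 and December 31, 2003.
new_cases = second[~second["case_id"].isin(first["case_id"])]

# Cases in the first extract but absent from the second flag possible
# duplicates, non-Subtitle D cases, or accidental omissions in the query.
missing_cases = first[~first["case_id"].isin(second["case_id"])]

print(len(new_cases), len(missing_cases))  # → 2 1
```

A comparison of this kind is what surfaced the small share of cases (less than 2 percent) that appeared in the first extract but were missing from the second.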
Since Energy officials had not yet decided which case records to retain and which to delete at the time of our extract, we decided to leave the cases identified as duplicates in our analysis file. We reviewed available system documentation, performed electronic testing, and reviewed determination letters for cases that had physician panel determinations to confirm that the data contained in the second extract were sufficiently reliable for our purposes. During our electronic testing, we discovered a discrepancy between the December 31, 2003, status information included in our file and the December 31, 2003, status information reported by Energy on its Web site. On further discussion with Energy officials and contractors, we determined that when running the query, Energy contractors had calculated the December 31, 2003, status information using the wrong field in the database. Energy contractors gave us a third data file containing the correct status information that we then appended to the analysis file. We then computed additional descriptive statistics, including frequencies and cross-tabulations, to determine the number and status of cases received as of December 31, 2003. To determine the extent to which Energy policies and procedures help employees file timely claims for state workers’ compensation benefits, we reviewed Energy’s regulations, policies, procedures, and communications with claimants. In addition, we interviewed key Energy officials and contractors at Energy facilities. We also interviewed panel physicians and contractors responsible for case development. In addition, we interviewed advocates, claimants, and officials at the National Institute for Occupational Safety and Health. Finally, we conducted site visits to three Energy facilities in Oak Ridge, Tennessee—the state accounting for the largest number of Subtitle D cases. 
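The claim-to-case illness aggregation and the frequency tabulations described in this appendix might be sketched as follows; the column names and illness values are invented for illustration and are not Energy’s actual data.

```python
import pandas as pd

# Hypothetical claim-level records; several claims can share one case.
claims = pd.DataFrame({
    "case_id": [1, 1, 2, 3, 3],
    "illness": ["beryllium disease", "hearing loss",
                "lung cancer", "lung cancer", "lung cancer"],
})

# Aggregate to the case level: the distinct illnesses claimed on each case.
case_illnesses = claims.groupby("case_id")["illness"].apply(
    lambda s: sorted(set(s)))

# Frequency of each illness across cases, counting a case once per illness.
illness_freq = (claims.drop_duplicates(["case_id", "illness"])
                      ["illness"].value_counts())
```

Cross-tabulations of case status by facility or state would follow the same pattern, for example with `pd.crosstab`.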
To estimate the number of claims for which there will not be willing payers of workers’ compensation benefits, we reviewed the provisions of workers’ compensation programs in the 9 states that account for more than three-quarters of the cases filed. The 9 states are: Colorado, Idaho, Iowa, Kentucky, New Mexico, Ohio, South Carolina, Tennessee, and Washington. The results of our analysis cannot necessarily be applied to the remaining 25 percent of the cases filed nationwide. Because of data limitations, we assumed that: (1) all cases filed would receive a positive determination by a physician panel; (2) all workers lost wages because of the illness and were not previously compensated for this loss; and (3) in all cases, the primary contractor rather than a subcontractor at the Energy facility employed the worker. While we believe that the first two assumptions would not affect the proportions shown in each category, the third assumption could result in an underestimate of the proportion of cases lacking willing payers to the extent that some workers may have been employed by subcontractors that used commercial insurers or state funds for workers’ compensation coverage. Some subcontractors use these methods of workers’ compensation coverage because they may not employ enough workers to qualify for self-insurance under some state workers’ compensation programs. We also interviewed Energy officials, key state workers’ compensation program officials, workers’ compensation experts, private insurers, and the contractors operating the major facilities in each of the states to determine the method of workers’ compensation coverage these facilities used. Finally, we took several steps to identify possible options for changing the program in the event that there may not be willing payers of benefits. We reviewed existing laws, regulations, and programs; analyzed pending legislation; and considered characteristics of existing federal and state workers’ compensation programs. 
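The willing-payer estimate described above reduces to classifying each case by the facility contractor’s method of workers’ compensation coverage and tallying the shares. The sketch below uses invented facilities, coverage methods, and case counts; the actual analysis relied on interviews with contractors, insurers, and state officials rather than a simple lookup table.

```python
import pandas as pd

# Hypothetical mapping from facility to the contractor's coverage method.
coverage = {
    "Facility A": "self-insured",         # willing payer
    "Facility B": "self-insured",
    "Facility C": "insurer, no contest",  # insurer has agreed not to contest
    "Facility D": "commercial insurer",   # contest likely
}

# Hypothetical cases, one row per case, identified by facility.
cases = pd.DataFrame({"facility": ["Facility A"] * 5 + ["Facility B"] * 3 +
                                  ["Facility C"] * 4 + ["Facility D"] * 3})
cases["category"] = cases["facility"].map(coverage)

# Share of cases in each coverage category.
shares = cases["category"].value_counts(normalize=True)
print(round(shares["commercial insurer"], 2))  # → 0.2
```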
We also identified the issues that would be relevant for policymakers to consider in implementing these options. In addition to the above contacts, Melinda L. Cordero, Mary Nugent, and Rosemary Torres Lerma made significant contributions to this report. Also, Luann Moy and Elsie Picyk assisted in the study design and analysis; Margaret Armen provided legal support; and Amy E. Buck assisted with the message and report development.
Subtitle D of the Energy Employees Occupational Illness Compensation Program Act of 2000 allows the Department of Energy (Energy) to help its contractors' employees file state workers' compensation claims for illnesses determined by a panel of physicians to be caused by exposure to toxic substances while employed at an Energy facility. Congress mandated that GAO study the effectiveness of the benefit program under Subtitle D. GAO focused on four key areas: (1) the number, status, and characteristics of claims filed with Energy; (2) the extent to which Energy policies and procedures help employees file timely claims for these state benefits; (3) the extent to which there will be a "willing payer" of workers' compensation benefits, that is, an insurer that--by order from or agreement with Energy--will not contest these claims; and (4) a framework that could be used for evaluating possible options for changing the program. During the first 2 1/2 years of the program, ending December 31, 2003, Energy had completely processed about 6 percent of the more than 23,000 cases that had been filed. Energy had begun processing nearly 35 percent of the cases, but processing had not yet begun on nearly 60 percent of the cases. Further, insufficient strategic planning and systems limitations complicate the assessment of Energy's achievement of goals related to case processing, as well as goals related to program objectives, such as the quality of the assistance provided to claimants in filing for state workers' compensation. While Energy got off to a slow start in processing cases, it now develops cases quickly enough that a backlog has formed awaiting review by a physician panel. Energy has taken some steps intended to reduce this backlog, such as reducing the number of physicians needed for some panels. Nonetheless, a shortage of qualified physicians continues to constrain the agency's capacity to decide cases more quickly. 
Consequently, claimants will likely continue to experience lengthy delays in receiving the determinations they need to file workers' compensation claims. In the meantime, Energy has not kept claimants sufficiently informed about the delays in the processing of their claims as well as what claimants can expect as they proceed with state workers' compensation claims. GAO estimates that more than half of the cases associated with Energy facilities in 9 states that account for more than three-quarters of all Subtitle D cases filed are likely to have a willing payer of benefits. Another quarter of the cases in these 9 states, while not technically having a willing payer, have workers' compensation coverage provided by an insurer that has stated that it will not contest these claims. However, the remaining 20 percent of the cases in these 9 states lack willing payers and are likely to be contested. This has created concerns about program equity in that many of these cases may be less likely to receive compensation. Because of data limitations, these percentages provide an order-of-magnitude estimate of the extent to which claimants will have willing payers. These estimates could change as better data become available or as circumstances change, such as new contractors taking over at individual facilities. The estimates are not a prediction of actual benefit outcomes for claimants. Various options are available to improve payment outcomes for the cases that receive a positive physician panel determination, but lack willing payers. While not recommending any particular option, GAO provides a framework that includes a range of issues to help the Congress assess options if it chooses to change the current program. One of these issues in particular--the federal cost implications--should be carefully considered in the context of the current and projected federal fiscal environment.
USDA provides various programs that assist farmers and landowners through its subsidiary agencies FSA, NRCS, and RMA. FSA provides benefits to farmers through various programs, including farm commodity and crop disaster assistance programs authorized in the 2008 Farm Bill. FSA has overall responsibility for administering these programs, including ensuring that all recipients meet eligibility requirements and do not receive payments in excess of program limitations. Farming operations—whether individuals or entities—applying for benefits must file a farm operating plan and an annual acreage report with their local field office, and if any changes occur that could affect program eligibility, such as changes affecting one’s status as actively engaged in farming, the farming operation must file a revised farm operating plan. These documents record farming information, such as the name of each individual with an interest in the farming operation, which crops are planted on each field, and the farming practices used. FSA uses this information to determine farm program payments, including payments for various agricultural disaster assistance programs. Most farmers receive farm program payments directly from FSA as individual operators. Some farmers, however, use legal entities to organize their farming operations, thereby reducing their exposure to financial liabilities. Many FSA farm programs have statutory payment limits that set the maximum payment amount that an individual can receive per year. For example, for one of FSA’s farm commodity programs, the annual payment limit is $40,000 per individual. FSA carries out these responsibilities through its headquarters office, 50 state offices, and over 2,100 field offices. 
In a 2007 report, we recommended that FSA implement management controls, such as matching payment files with SSA’s death master file, to verify that an individual receiving payments has not died and to provide reasonable assurance that the agency does not make improper payments to deceased individuals. We also recommended that FSA determine if improper program payments have been made to deceased individuals or to entities that failed to disclose the death of a member and, if so, recover the appropriate amounts. The 2008 Farm Bill required FSA (1) to promulgate regulations that (a) describe the circumstances under which payments may be issued in the name of a deceased individual and (b) preclude the issuance of payments to, and on behalf of, deceased individuals who were not eligible for the payments and (2) at least twice each year, to reconcile with SSA the Social Security numbers of all individuals who receive commodity program payments. NRCS administers voluntary programs that offer financial and technical assistance to eligible landowners and producers to help manage natural resources in a sustainable manner. Through these programs, the agency approves conservation program contracts or conservation easements, which provide participants financial assistance for planning and implementing conservation practices to save energy and improve resources, such as soil and water, on agricultural lands and on nonindustrial private forestland. These programs can be divided into two categories—working-lands programs and easement programs. Under working-lands programs, NRCS provides program participants with financial and technical assistance through contracts, which are generally in effect for up to 10 years, to implement conservation practices on agricultural land and nonindustrial private forestland.
Under easement programs, NRCS purchases conservation easements to restore, protect, or enhance grasslands and wetlands and provides assistance to eligible entities to purchase development rights to keep productive farm and ranch lands in agricultural uses. According to NRCS officials, when an NRCS program participant dies, NRCS considers the contract canceled, unless the participant had identified an executor or other estate representative to act on his or her behalf to transfer the contract to an eligible successor or to complete the contracted activities. By promoting crop insurance, the federal government has played an active role in helping to mitigate the effects on income of production risks—droughts, floods, and other natural disasters—as well as price risks that farmers face. RMA administers the federal crop insurance program, including controlling costs and protecting against fraud, waste, and abuse. The agency partners with 17 approved insurance companies, which sell and service the federal program’s insurance policies and share in a percentage of the associated risk of loss and opportunity for gain. The federal government subsidizes about 60 percent of the insurance premiums the insurance companies charge farmers and also pays the companies an allowance for administrative expenses, which is intended to cover the companies’ expenses for selling crop insurance policies and providing customer service to policyholders. Unlike payments made by FSA and NRCS, subsidies for crop insurance premiums are not paid directly to policyholders but can be considered a financial benefit to them. Without a premium subsidy, a policyholder would have to pay the full amount of the premium. The allowances for administrative expenses can be considered a further benefit to these policyholders, since these allowances are paid on their behalf. RMA also shares with insurance companies payments made on policyholders’ claims for losses.
Since 2007, FSA has established procedures for preventing improper payments to deceased individuals, including matching payments to program participants with SSA’s data on deceased individuals. Nevertheless, the SSA data used by FSA for this match have been incomplete, and our review of a sample of payments made to deceased individuals raised some questions. Since our 2007 report and enactment of the 2008 Farm Bill, FSA has taken steps to implement and strengthen procedures to prevent improper payments to program participants who have died. Specifically, in 2007, FSA began computer matching its payment data with a list of deceased individuals, directed county and state offices to review the results of this match, and established procedures and updated guidance accordingly. According to FSA officials, to identify program participants who had died, in fiscal year 2007 FSA began computer-matching program participants’ names, addresses, and Social Security numbers—stored in FSA’s primary database of farm program participants—against SSA’s death master file. This match identifies any program participants who have died. FSA then compares this list of deceased program participants against its list of participants to whom payments were made during the previous quarter and creates a report listing deceased participants who have received payments. In 2008, FSA began performing these matches quarterly—more often than the twice-yearly reconciliation required under the 2008 Farm Bill—and directed its county and state offices to review these reports to determine whether payments were proper or improper. Every quarter, county officials code each payment to a deceased participant as proper or improper, and state officials review and verify the counties’ coding. According to FSA officials, most payments found to be improper are to be recovered by FSA.
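In essence, the quarterly check described above is a join on Social Security number followed by a date comparison. The Python sketch below illustrates the logic only; the record layouts and field names are hypothetical simplifications, not FSA’s actual database schema.

```python
# Hypothetical sketch of a quarterly death-master-file match: all field
# names and record layouts are illustrative, not FSA's actual schema.

def flag_payments_after_death(death_records, quarterly_payments):
    """Return payments issued after a participant's recorded date of death."""
    # Index the death-file extract by Social Security number.
    date_of_death = {rec["ssn"]: rec["died"] for rec in death_records}

    flagged = []
    for pmt in quarterly_payments:
        died = date_of_death.get(pmt["ssn"])
        # ISO-formatted date strings (YYYY-MM-DD) compare correctly as text.
        if died is not None and pmt["paid"] > died:
            # Each flagged payment would go on the report that county
            # offices review and code as proper or improper.
            flagged.append(dict(pmt, died=died))
    return flagged
```

Each flagged record would feed the report that county offices review each quarter and that state offices then verify.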
By the end of December 2010, FSA had issued a final rule clarifying the regulations governing payments earned by a person who died and describing strengthened procedures for matching program participants with SSA’s death master file and having county, then state, offices check which payments to deceased individuals were proper and improper. The agency further improved this process in 2011, by moving from electronic spreadsheets to a web-based system, which, officials told us, greatly improved data accuracy and ease of reviewing and coding payments made to deceased individuals. In addition to implementing this quarterly data-matching and review process, FSA has revised and updated its handbooks since the 2008 Farm Bill to include guidance related to making payments in cases where a program participant has died. The revised guidance defines the respective roles to be played by county and state offices, clarifies how county offices are to record payments to deceased individuals as proper or improper, and offers guidance for both county and state offices reviewing these records. Agency officials may also take other steps to find out about deceased program participants: in some counties, they may check obituaries in local papers or telephone program participants each year to ask if any changes have occurred among eligible participants. Overall, according to FSA officials, establishing these procedures has enabled the agency to identify thousands of individuals—17,409 in fiscal year 2011 and 13,684 in fiscal year 2012, for example—who were paid after their dates of death. Of a total of 28,613 deceased individuals who were paid in 2011 and 2012, FSA determined that 1,799 individuals, or about 6 percent, were paid a total of $3.3 million in improper payments during the 2-year period. 
According to figures from FSA’s DMF Review Report for fiscal years 2011 and 2012, FSA has recovered approximately $1 million of these improper payments and, according to agency officials, continues to pursue the remaining amount of improper payments. The version of SSA’s death master file against which FSA matches its payment records has been incomplete. Specifically, FSA has been matching its payment records against the public version of the death master file. This version—containing about 87 million death records and available to the public for purchase—lists all deaths since 1936 that have been reported to SSA by sources other than the states, such as hospitals and funeral homes. According to SSA documentation, to ensure confidentiality under section 205(r) of the Social Security Act, death records provided to the administration by the states are not to be publicly disclosed, except to other agencies that pay federally funded benefits. In May 2013, we testified before the U.S. Senate’s Committee on Homeland Security and Governmental Affairs that, according to SSA officials, the complete death master file contains approximately 98 million records, about 11 million more records than the public version of the file. Therefore, until the agency begins matching its payment records to the complete death master file, it may continue to miss deceased individuals to whom it should no longer be making payments. During our review, FSA took steps to seek access to the complete death master file, and in January 2013, FSA received approval from SSA to obtain such access, along with NRCS and RMA. FSA officials said that they have been coordinating with NRCS and RMA on how they will use, share, and pay for this access. As of early June 2013, however, FSA had not received the complete file from SSA or incorporated these data into its quarterly reviews of payments made to deceased individuals.
In addition, our review of FSA’s payment files raised questions about the state and county offices’ coding and review of payments as proper or improper. We examined a generalizable random sample of 100 payments that were made to deceased individuals over a 1-year period beginning in April 2011 and coded as proper, and we estimated that FSA county offices coded 91 percent of these payments correctly. On the basis of supporting evidence FSA provided us, we found 9 payments that did not have sufficient support to be coded as proper. For 4 of these 9 payments, the supporting documentation bore the signature of the deceased individual or a representative but was dated after the individual’s death. For example, a deceased individual in Alabama received a payment of $4,273 in 2011. County officials coded this payment as proper, but our review identified that some program eligibility documents had been signed and dated in the deceased individual’s name more than 6 months after the individual had died. We spoke with the relevant county and state officials about this case, and they agreed that this payment should have been coded as improper. An agency official told us that the deceased individual’s heirs submitted correct paperwork for the next fiscal year and that FSA did not attempt to recover the 2011 payment. The remaining 5 payments, all made through one program and paid to deceased individuals after their dates of death, were coded by counties as proper but were actually improper. According to FSA officials, the statutory requirements for this program differ from those of most FSA programs in that they do not permit payments to a deceased individual after the date of death under any circumstances, and any such payment must be recovered. Officials told us that under this program, even if payments go to the correct heirs, such as a spouse, they are still considered improper until the right to receive payment has been transferred to an heir. 
The officials told us that some counties nevertheless code such payments as proper while awaiting completion of transfer paperwork, even though the payments are actually improper; the officials agreed that the 5 payments we found were indeed improper. According to an FSA official, in May 2012, FSA updated its guidance related to this program, including how to handle payments when a participant dies, and reminded relevant officials of the proper procedures. (The 95 percent confidence interval for the estimated 9 percent of payments coded incorrectly is 4 to 16 percent.) These coding errors raise some questions about the effectiveness of the quarterly review process. The process has largely enabled the agency to identify thousands of individuals who were paid after their dates of death. Nevertheless, if some coded payments were revisited, perhaps annually, to ensure that documentation supported the coding of payments as proper, the error rate could be further lowered, particularly for programs or county offices where errors were previously identified. NRCS does not have procedures to prevent improper payments to deceased individuals, and its ability to verify whether payment recipients are dead or alive is limited. As a result, the agency cannot be certain that payments it made to over a thousand deceased individuals are proper. According to NRCS officials we spoke with, not all conservation payments to deceased individuals are improper because they may have been made for work performed before the individuals died, or they were associated with easement contracts that became part of the deceased individuals’ estates and remained linked to their Social Security numbers. In addition, the officials believe that the risk of the agency’s making improper payments to deceased individuals is low, in part because NRCS conservation programs have certain built-in protections that help prevent such payments. The officials also explained that NRCS staff frequently interact with conservation program participants at the sites where projects are located.
NRCS officials develop a conservation plan for the site in consultation with participants and typically revisit sites to certify implementation and discuss any deficiencies. As a result, they have regular opportunities to become aware of a participant’s death. Further, NRCS officials commented, counties the agency operates in are often small enough for local officials to know most program participants personally, and thus they would know if a participant died. Moreover, NRCS officials stated that the agency requires that heirs of the deceased notify the agency within 60 days of transferring property to another owner because of death or other reasons. NRCS officials acknowledged, however, that they may be unaware of the death of individuals who receive payments as members of an entity, because officials have less contact with members of an entity than with individual program participants. Thus, NRCS may be unaware that a member of an entity has died if other entity members do not notify the agency. NRCS’s built-in protections are limited, however, because the agency does not systematically verify whether its program participants have died, such as by matching participants’ Social Security numbers against SSA’s death master file. Under a memorandum of agreement with FSA, such a match is to be performed by FSA, and FSA is to provide NRCS with a list of “current year program payment recipients” who are deceased. FSA officials, however, told us that because they do not know which program participants have been paid by NRCS, they have not provided such a list to NRCS. FSA officials said they are working on a new memorandum of agreement (the previous one expired at the end of fiscal year 2012), in which they hope each agency’s responsibilities will be better defined, to enable FSA to provide the matching service. Moreover, FSA and NRCS officials said that the agencies are also coordinating with each other to acquire SSA’s complete death master file. 
To examine whether NRCS was making potentially improper payments to deceased individuals, we used program participants’ Social Security numbers to match program payment data the agency provided to us for fiscal year 2008 through April 2012 against SSA’s complete death master file. In so doing, we estimate that NRCS made $10.6 million in payments on behalf of 1,103 individuals 1 year or more after death. To better understand the reasons for payments that appeared to have been made on behalf of deceased individuals 1 or more years after death, we presented NRCS with six sample cases of such payments, and NRCS explained that four of the six cases were proper payments. For example, in one case, a data entry error appears to have linked the Social Security number of an individual who died in 2001 with multiple program payments made from 2008 through 2012—payments that had in actuality been made not to the deceased individual but to the appropriate living participants. In another case, NRCS made two direct deposits into the bank account of a participant in a working-lands program 21 months and 32 months after the participant’s death. No one had notified the agency of the participant’s death or transferred the program contract to his legal heirs, although according to agency officials, the terms of his will did transfer his property to his widow. In another case, however, NRCS did not have proper signatures on a program contract for two payments made in 2008, and agency officials acknowledged that they would not have paid participants had they been aware of this error. Without procedures such as matching its program participants with the death master file and reviewing those matches, NRCS does not know how many payments it made on behalf of deceased individuals, how often, or in what amounts.
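The analysis described above reduces to flagging payments dated a year or more after the recorded date of death and totaling them by individual. A minimal Python sketch follows; the tuple layout is a hypothetical simplification of the payment data, not NRCS’s actual data structure.

```python
# Illustrative only: the tuple layout and the one-year threshold mirror
# the analysis described in the text, not NRCS's actual data structures.
from datetime import date, timedelta

ONE_YEAR = timedelta(days=365)

def totals_paid_after_death(payments, death_dates):
    """Sum payments made one year or more after death, keyed by SSN."""
    totals = {}
    for ssn, paid_on, amount in payments:
        died_on = death_dates.get(ssn)
        if died_on is not None and paid_on - died_on >= ONE_YEAR:
            totals[ssn] = totals.get(ssn, 0.0) + amount
    return totals
```

Summing the resulting totals across individuals gives the kind of aggregate figure reported above; each underlying payment would still need case-by-case review to determine whether it was proper.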
Under the standards for internal control in the federal government, agencies are to clearly document internal control in the form of management directives, administrative policies, or operating manuals, and the documentation should be readily available for examination. Furthermore, without reviewing each payment apparently made to a deceased individual, NRCS cannot know whether such payments were proper or improper. Under the standards for internal control in the federal government, monitoring is to be performed continually and include regular management and supervisory activities, comparisons, and reconciliations, which could identify potentially improper payments to deceased individuals. These standards indicate that monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. RMA does not have procedures in place to prevent improper subsidies on behalf of deceased individuals. Specifically, the agency does not systematically verify its policyholders by matching these policyholders’ information against SSA’s death master file. As a result, RMA may have provided potentially improper crop insurance subsidies and administrative allowances on behalf of thousands of deceased individuals. According to agency officials, however, the crop insurance cycle provides one or more opportunities each year to verify that an individual on whose behalf a subsidy or allowance is paid is alive: policyholders are required to provide a signed certification when filing their annual reports on production and yield and provide a written signature if they file a claim. RMA officials also told us that in 2007—on a one-time basis—the agency compared policyholders with the public death master file and, working with its partner insurance companies, updated and corrected records in its central database of participants, claims, and subsidies, all of which are linked by Social Security number. 
RMA officials commented that they have not compared the agency’s database of policyholders with the public death master file since then because they learned that this file does not have complete information on deceased individuals. Along with FSA and NRCS, however, RMA received approval from SSA for access to the complete file and, according to RMA officials, is coordinating with the other two agencies to begin using it. To determine whether some of RMA’s subsidies and allowances may have been provided on behalf of policyholders who were deceased, we matched policyholders’ Social Security numbers from RMA’s crop insurance subsidies and administrative allowance data for crop reinsurance years 2008 through 2012 against SSA’s complete death master file. We found that approximately $22 million in subsidies and allowances may have been provided on behalf of 3,434 policyholders 2 or more reinsurance years after death. To better understand the reasons for subsidies that appeared to have been made to policyholders after death, we discussed with RMA five sample cases of such subsidies. RMA believed that all were proper, explaining that several were due to errors, such as incorrect Social Security numbers, in RMA’s database. Without reviewing each subsidy that could have been made to a deceased individual and reconciling it with SSA’s complete death master file, however, RMA cannot know whether such subsidies were proper or improper. Without such reconciliations or matches, RMA is not employing monitoring specified in federal internal control standards. The Agricultural Risk Protection Act of 2000, Pub. L. No. 106-224, 114 Stat. 358, amended the Federal Crop Insurance Act. 
We have evaluated RMA’s data-mining activities in several reports: see GAO-12-256; GAO, Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable, GAO-07-944T (Washington, D.C.: June 7, 2007); Crop Insurance: More Needs to Be Done to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse, GAO-06-878T (Washington, D.C.: June 15, 2006); and Crop Insurance: Actions Needed to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse, GAO-05-528 (Washington, D.C.: Sept. 30, 2005). In addition, RMA has not documented procedures for reviewing subsidies provided on behalf of individuals who may be deceased. Under the standards for internal control in the federal government, agencies are to clearly document internal control in the form of management directives, administrative policies, or operating manuals, and the documentation should be readily available for examination. Moreover, without accurate information on policyholders, RMA may be unable to rely on results from data mining and therefore be less likely to detect fraudulent, wasteful, or abusive crop insurance claims. Since our July 2007 report and the enactment of the 2008 Farm Bill, FSA has taken important steps to prevent improper payments to deceased individuals. We commend the agency for exceeding the 2008 Farm Bill’s requirement that it match its payment records against SSA’s death master file twice a year—the agency performs this match quarterly—and for reviewing all payments made to deceased individuals to determine whether they were proper or improper. In addition, FSA, NRCS, and RMA began working together during our review to acquire SSA’s complete death master file, an effort we also commend, and they have received approval from SSA for access. FSA’s quarterly review process has largely enabled the agency to identify thousands of individuals who were paid after their dates of death. Nevertheless, under current procedures, FSA may not verify all improper payments to deceased individuals, and an error rate of about 9 percent in coding payments could persist.
As FSA conducts its reviews, if it employs ongoing monitoring activities, such as reconciliations, to ensure that county offices’ coding of payments is supported by documentation, the error rate could be reduced. Furthermore, until and unless NRCS and RMA develop and implement procedures to have their payment or subsidy data records matched against SSA’s complete death master file, either through coordination with FSA or on their own, these agencies cannot know if they are providing payments to, or subsidies on behalf of, deceased individuals; how often they are providing such payments or subsidies; or in what amounts. Without such procedures, NRCS and RMA are not employing internal controls specified in federal standards. And without reviewing each payment to or subsidy provided on behalf of a deceased individual, the agencies do not know if each payment or subsidy is proper or improper. In addition, without accurate information on policyholders, RMA may be unable to rely on results from data mining. We are making the following three recommendations to the Secretary of Agriculture: To further strengthen FSA’s procedures for preventing improper payments to deceased individuals, we recommend that the Secretary of Agriculture direct the Administrator of FSA to employ ongoing monitoring activities, such as reconciliations, to ensure that county offices’ coding of payments is supported by documentation. To help NRCS prevent improper payments to deceased individuals, we recommend that the Secretary of Agriculture direct the Chief of NRCS to develop and implement procedures to prevent potentially improper payments to deceased individuals, including (1) coordinating roles and responsibilities with FSA to ensure that either FSA or NRCS matches NRCS payment files against SSA’s complete death master file and (2) reviewing each payment to a deceased individual to ensure that an improper payment was not made. 
To help RMA prevent improper crop insurance subsidies on behalf of deceased individuals and to improve the effectiveness of its data mining, we recommend that the Secretary of Agriculture direct the Administrator of RMA to develop and implement procedures to prevent potentially improper subsidies on behalf of deceased individuals, including (1) matching RMA’s crop insurance records against SSA’s complete death master file and (2) reviewing each subsidy provided on behalf of a deceased individual to ensure that an improper subsidy was not provided. We provided a draft of this report to USDA for review and comment. USDA provided written comments, which are summarized below and reproduced in appendix II. In its comment letter, USDA generally agreed with our report’s findings and recommendations but stated that it believes we inaccurately represented NRCS and RMA as having no procedures in place to identify deceased participants. USDA stated that it believes the agencies’ normal operating procedures provide opportunities to identify deceased participants. In its comment letter, USDA stated that for NRCS conservation easement programs, transferring property rights involves a title escrow agent, thereby providing an opportunity to determine whether any individuals identified on a deed of trust have died. For RMA, the letter stated that the agency’s preliminary analysis of sample cases of deceased individuals that we provided suggests that the potential scope of remaining questionable payments to deceased individuals is more limited than we reported. In addition to the fact that someone is paying the premium each year, the letter stated, the crop insurance cycle provides one or more opportunities each year to verify that an individual on whose behalf a subsidy or allowance is paid is alive—information we have incorporated into our report. 
Although NRCS and RMA can identify some deceased participants during their normal operations, we do not believe that identifying deceased individuals during normal operations is a reliable substitute for having a systematic process. As we noted in the report, the agencies have not had specific procedures in place to verify and prevent potentially improper payments to deceased individuals. Indeed, in our review, we found that, without reviewing whether these payments or subsidies were proper, NRCS made payments to more than 1,000 deceased individuals in fiscal year 2008 through April 2012 and that RMA provided subsidies and allowances on behalf of more than 3,000 deceased policyholders in reinsurance years 2008 through 2012. We are therefore pleased to learn from USDA’s comment letter that RMA has begun implementing formal, systematic procedures to identify and prevent subsidies on behalf of deceased individuals consistent with our recommendation. According to the comment letter, effective May 1, 2013, RMA implemented a new computer matching procedure to check federal crop insurance program eligibility, subsidies, and payments to policyholders against the public version of the death master file, and, when the complete death master file is available, RMA is prepared to integrate it into the agency’s computer matching system. The steps RMA is taking are promising. We encourage NRCS to take similar steps, because until and unless NRCS has its payment data records matched against SSA’s complete death master file, it cannot know if it is providing payments to deceased individuals or whether these payments are proper or improper. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Agriculture, the appropriate congressional committees, and other interested parties. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of our review were to determine the extent to which procedures are in place to prevent (1) the Farm Service Agency (FSA) from making potentially improper payments to deceased individuals, (2) the Natural Resources Conservation Service (NRCS) from making potentially improper payments to deceased individuals, and (3) the Risk Management Agency (RMA) from providing potentially improper subsidies on behalf of deceased individuals. To address all three objectives, we reviewed relevant sections of the Food, Conservation, and Energy Act of 2008 (2008 Farm Bill), the Improper Payments Information Act of 2002, the Improper Payments Elimination and Recovery Act of 2010, the Improper Payments Elimination and Recovery Improvement Act of 2012, and the Do Not Pay initiative. We also reviewed relevant studies prepared by the U.S. Department of Agriculture’s (USDA) Office of Inspector General, as well as our own past reports. To determine the extent to which FSA has procedures in place to prevent potentially improper payments to deceased individuals, we reviewed agency guidance, such as FSA’s Handbook on Common Management and Operating Provisions, 1-CM (Revision 3), as well as FSA’s reports compiling the results of its matches of payment data against the Social Security Administration’s (SSA) death master file. FSA refers to this report as the DMF Review Report. 
We also interviewed agency officials at FSA’s Washington, D.C., headquarters and at its Kansas City, Missouri, information technology office about agency procedures, guidance, payment processes, and the identification of improper payments. In addition, we interviewed FSA officials about the extent to which the agency follows requirements to compare agency data with the data in SSA’s death master file and the steps the agency takes to recover improper payments. To obtain information about agency procedures and adherence to guidance, we interviewed FSA state and county officials in California, Illinois, Kansas, and Missouri, as well as county officials in Texas. We selected offices in Texas, Illinois, and Kansas because these states had, in that order, the three highest numbers of FSA payments made to deceased individuals. We selected the Missouri office because of the state’s large number of FSA payments made to deceased individuals and its geographic proximity to other states we visited, and we selected California for geographic diversity. To determine if state and county offices accurately coded payments made to deceased individuals as proper or improper, we analyzed a generalizable, random sample of payments made by FSA to deceased individuals from April 2011 through March 2012, and we reviewed and analyzed supporting documentation to determine the individuals’ eligibility. Specifically, we randomly selected 100 payments made during that 1-year period. The sample size was chosen to provide a margin of error for an attribute measure of no greater than plus or minus 10 percentage points at the 95 percent level of confidence. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn.
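The sample-size arithmetic above can be checked with the standard normal-approximation formula for a proportion: at n = 100, the worst case (p = 0.5) yields a margin just under 10 percentage points. This is a back-of-the-envelope check only; the intervals reported in the body of the report may have been computed with a different method and so can differ slightly.

```python
# Back-of-the-envelope check of the sampling design described above.
# Uses the normal approximation for a proportion; the report's own
# intervals may use a different (e.g., exact) method.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95 percent confidence interval."""
    return z * math.sqrt(p * (1 - p) / n)

worst_case = margin_of_error(100)          # about 0.098, i.e., under 10 points
nine_percent = margin_of_error(100, 0.09)  # about 0.056 for a 9 percent estimate
```

With n = 100, the design margin stays within plus or minus 10 percentage points for any attribute proportion, which is what the text states.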
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (i.e., plus or minus 5 percentage points). This interval would contain the actual population value for 95 percent of the samples we could have drawn. We also randomly selected for case study analysis a nongeneralizable sample of 20 payments made to deceased individuals during the same time period and deemed erroneous by FSA. For the case study analysis, we reviewed and analyzed the supporting payment eligibility documentation of these improper payments. Because this analysis was a case study, the results cannot be generalized to all erroneous payments identified by FSA, but they provide examples of the kinds and types of improper payments made to deceased individuals. We assessed the reliability of FSA’s payment data by (1) reviewing existing information about the data and the system that produced them and (2) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of our review. To determine the extent to which NRCS has procedures in place to prevent potentially improper payments to deceased individuals, we interviewed agency officials at NRCS’s Washington, D.C., headquarters office and state and county offices in California, Illinois, Kansas, and Missouri. We selected these offices for geographic diversity and because of their close proximity to meetings we were holding with FSA officials. Using Social Security numbers, we compared the death master file with NRCS payment data for conservation programs from fiscal year 2008 through April 2012 to determine the number of individuals paid by NRCS who died 1 year or more before the payment dates and the amount of these potentially improper payments.
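The matching step described above (joining payment records to the death master file on Social Security number, then flagging payments dated a year or more after death) can be sketched as follows. The records, field names, and helper function here are all hypothetical; they illustrate the technique, not the agencies' actual data schemas.

```python
from datetime import date

# Hypothetical death master file: SSN -> recorded date of death.
death_master = {
    "111-22-3333": date(2009, 5, 1),
    "444-55-6666": date(2011, 8, 15),
}

# Hypothetical payment records with illustrative field names.
payments = [
    {"ssn": "111-22-3333", "paid_on": date(2011, 3, 10), "amount": 1500.00},
    {"ssn": "444-55-6666", "paid_on": date(2011, 9, 1),  "amount": 800.00},
    {"ssn": "777-88-9999", "paid_on": date(2011, 4, 2),  "amount": 2000.00},
]

def flag_potentially_improper(payments, death_master, min_days=365):
    """Return payments made to an SSN a year or more after the
    recorded death date. 'Potentially' improper: some such payments
    may still be proper and require case-by-case review."""
    flagged = []
    for p in payments:
        died = death_master.get(p["ssn"])
        if died and (p["paid_on"] - died).days >= min_days:
            flagged.append(p)
    return flagged

hits = flag_potentially_improper(payments, death_master)
total = sum(p["amount"] for p in hits)
print(len(hits), total)  # count of flagged payments and their dollar total
```

Here only the first payment is flagged: the second recipient died less than a year before payment, and the third does not appear in the death file. As the report notes, a match alone does not establish impropriety; each flagged payment still needs eligibility review.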
We assessed the reliability of SSA’s death master file by (1) performing electronic testing of required data elements and (2) reviewing relevant documentation. We determined that the data were sufficiently reliable for the purposes of our review. We included the following conservation programs in our analysis: the Agricultural Water Enhancement Program, Agricultural Management Assistance, Chesapeake Bay Watershed Initiative, Conservation Security Program, Conservation Stewardship Program, Environmental Quality Incentives Program, Farm and Ranch Lands Protection Program, Grassland Reserve Program, Healthy Forests Reserve Program, Wetlands Reserve Program, and Wildlife Habitat Incentives Program. We did not review a portion of payments made under these programs because NRCS was unable to provide us with the Social Security numbers for some program participants. We assessed the reliability of NRCS’s data by (1) performing electronic testing of required data elements and (2) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of our review. To determine the extent to which RMA has procedures in place to prevent improper subsidies to deceased policyholders, we interviewed RMA officials to obtain information about the steps the agency takes to verify its records of eligible policyholders. To determine the extent to which RMA’s records are regularly updated to verify whether policyholders have died, we compared the death master file with crop insurance data from RMA for reinsurance year 2008 through reinsurance year 2012 and produced a list of premium subsidies, administrative allowances, and claims payments provided on behalf of policyholders 2 or more reinsurance years after death. As mentioned earlier, we assessed the reliability of SSA’s death master file and determined that the data were sufficiently reliable for the purposes of our review. 
We assessed the reliability of RMA’s crop insurance data by (1) reviewing related documentation, (2) performing electronic testing of required data elements, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of our review. We conducted this performance audit from March 2012 through June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual above, Thomas M. Cook (Assistant Director), Carl S. Barden, Kevin S. Bray, Allen T. Chan, Jennifer Chanley, Ellen W. Chu, Mitchell B. Karpman, Michael Kendix, Karine E. McClosky, and Kiki Theodoropoulos made key contributions to this report.

Social Security Administration: Preliminary Observations on the Death Master File. GAO-13-574T. Washington, D.C.: May 8, 2013.
Farm Programs: Direct Payments Should Be Reconsidered. GAO-12-640. Washington, D.C.: July 3, 2012.
Farm Bill: Issues to Consider for Reauthorization. GAO-12-338SP. Washington, D.C.: April 24, 2012.
Crop Insurance: Savings Would Result from Program Changes and Greater Use of Data Mining. GAO-12-256. Washington, D.C.: March 13, 2012.
Crop Insurance: Opportunities Exist to Reduce the Costs of Administering the Program. GAO-09-445. Washington, D.C.: April 29, 2009.
Federal Farm Programs: USDA Needs to Strengthen Controls to Prevent Payments to Individuals Who Exceed Income Eligibility Limits. GAO-09-67. Washington, D.C.: October 24, 2008.
Federal Farm Programs: USDA Needs to Strengthen Management Controls to Prevent Improper Payments to Estates and Deceased Individuals. GAO-07-1137T. Washington, D.C.: July 24, 2007.
Federal Farm Programs: USDA Needs to Strengthen Controls to Prevent Improper Payments to Estates and Deceased Individuals. GAO-07-818. Washington, D.C.: July 9, 2007.
Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable. GAO-07-944T. Washington, D.C.: June 7, 2007.
Crop Insurance: More Needs to Be Done to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse. GAO-06-878T. Washington, D.C.: June 15, 2006.
Crop Insurance: Actions Needed to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse. GAO-05-528. Washington, D.C.: September 30, 2005.
USDA spends about $20 billion annually on federal programs that support farm income, conserve natural resources, and help farmers manage risks from natural disasters, benefiting over 1 million participants. Given their cost and continuing nationwide budget pressures, these programs have come under increasing scrutiny. One concern has been the distribution of benefits to ineligible participants, including potentially improper payments to deceased individuals, which, as GAO and others have reported, may call into question whether these farm safety net programs are benefiting the agricultural sector as intended. GAO was asked to evaluate USDA controls over payments to the deceased. This report examines the extent to which procedures are in place to prevent (1) FSA and (2) NRCS from making potentially improper payments to deceased individuals and (3) RMA from providing potentially improper subsidies on behalf of deceased individuals. GAO reviewed a random sample of payments, compared USDA's databases with SSA's master list of deceased individuals, and interviewed agency officials. Since 2007, the Department of Agriculture's (USDA) Farm Service Agency (FSA), which administers various programs for farmers that help support farm incomes and provide disaster assistance, has established procedures for preventing improper payments to deceased individuals, including, on a quarterly basis, matching payments to program participants with the Social Security Administration's (SSA) data on deceased individuals. In addition, FSA state and county offices review and verify whether payments made to deceased individuals are proper or improper. Overall, these procedures have enabled FSA to identify thousands of deceased individuals who were paid $3.3 million in improper payments after their dates of death, of which FSA has recovered approximately $1 million. 
GAO reviewed a generalizable random sample of payments to deceased individuals that FSA identified as proper and found that 9 percent did not have sufficient support to be coded as proper. More monitoring to ensure that county offices' coding of payments is supported by documentation could help reduce the error rate. The Natural Resources Conservation Service (NRCS), a USDA agency that administers voluntary conservation programs, does not have procedures to prevent potentially improper payments to deceased individuals. For example, NRCS's ability to verify whether payment recipients have died is limited because the agency does not match these recipients against SSA's master list of deceased individuals. Under the standards for internal control in the federal government, agencies are to clearly document such control in the form of management directives, administrative policies, or operating manuals. Based on a review of data from fiscal year 2008 through April 2012, GAO estimates that NRCS made $10.6 million in payments on behalf of 1,103 deceased individuals 1 year or more after their death. Some of these payments may have been proper, but NRCS cannot be certain because it neither identifies which of its payments were made to deceased individuals nor reviews each of these payments. USDA's Risk Management Agency (RMA), which administers crop insurance programs, does not have procedures in place consistent with federal internal control standards to prevent potentially improper subsidies on behalf of deceased individuals. For example, RMA does not use SSA's master list of deceased individuals to verify whether its policyholders have died.
GAO matched every policyholder's Social Security number in RMA's crop insurance subsidy and administrative allowance data for crop insurance years 2008 to 2012 with SSA's master list of deceased individuals and found that $22 million in subsidies and allowances may have been provided on behalf of an estimated 3,434 program policyholders 2 or more years after death. Many of these subsidies and allowances may have been proper, but without reviewing each subsidy and allowance made on behalf of deceased individuals, RMA cannot be certain that these subsidies and allowances are proper. In addition, without accurate records of which policyholders are deceased, RMA may be less likely to rely on results from data mining--a technique for extracting knowledge from large volumes of data--and therefore be less likely to detect fraudulent, wasteful, or abusive crop insurance claims. GAO recommends that FSA further strengthen its verification of payments to deceased individuals, NRCS develop and implement procedures to prevent improper payments to deceased individuals, and RMA develop and implement procedures to prevent improper crop insurance subsidies on behalf of deceased policyholders and to improve the effectiveness of its data mining. USDA generally agreed with GAO's findings and recommendations.
As of July 2014, about 60,000 community retail pharmacies in the United States dispensed prescription drugs, of which approximately 66 percent were chain retail pharmacies and the remaining 34 percent were independent pharmacies, according to an industry study. In 2015, retail pharmacies dispensed about 4 billion prescriptions, while mail order pharmacies dispensed over 200 million prescriptions, according to one study. Pharmacies’ prescription drug container labeling practices may be affected by several types of entities: PBMs that help many third-party payers—such as health plans—manage their prescription drug benefits by operating mail order pharmacies, assembling retail pharmacy networks that include both chain and independent pharmacies, and providing other services. PBMs issue corporate policies that govern their mail order pharmacy operations and enter into contracts with retail pharmacies in their networks that set forth the terms and conditions for dispensing prescriptions to health plan enrollees. Chain pharmacy companies that operate chain retail pharmacies with four or more locations. These companies issue corporate policies that govern their retail pharmacy operations. PSAOs that provide a broad range of administrative services to networks of retail pharmacies, including contract negotiation with third-party payers. To establish these networks, PSAOs enter into contracts with retail pharmacies—generally independent pharmacies—that set forth the duties and obligations of the PSAO and each pharmacy. State pharmacy regulating bodies that oversee the practice of pharmacy through activities such as licensing pharmacies and issuing regulations. According to the National Association of Boards of Pharmacy, which represents state boards of pharmacy, as of February 2016, only one state—Massachusetts—requires pharmacies to provide large-print labels to individuals who are visually impaired and elderly upon request.
Pharmacy accreditation organizations that certify that pharmacies meet a predetermined set of standards for pharmacy care or functions, which may include elements for providing services to individuals who are blind or visually impaired. Other entities may also develop or disseminate guidance on prescription drug container labels that may affect pharmacies’ labeling practices. For example, standard-setting organizations may develop prescription drug container labeling standards, and entities such as state pharmacy regulating bodies can incorporate these standards into their pharmacy labeling requirements. Industry groups representing pharmacies or pharmacists and advocacy groups for individuals who are blind or visually impaired also may develop guidance, including prescription drug container labeling guidance, or use tools, such as newsletters or website postings, to disseminate guidance or other information to their members. Accessible labels can make information on prescription drug container labels more easily available to individuals who are blind or visually impaired. Pharmacies can purchase hardware and software from private technology vendors to produce labels in audible, braille, and large print formats. Audible labels allow individuals to hear prescription drug container label information. Technologies for audible labels include talking pill bottles that allow pharmacists to create a voice or digital recording of label information, and tags that can be encoded with label information, affixed to prescription drug containers, and read by a separate device. Braille labels allow individuals who are blind or visually impaired to read prescription drug container label information by touch, and large print labels enhance the size of label text for easier viewing. Pharmacists can produce hard copy braille or large print labels and affix them to the prescription drug container. See figure 1 for examples of accessible labels. In 2012, the U.S.
Access Board convened an 18-member working group to develop best practices to make prescription drug container label information accessible to individuals who are blind or visually impaired. This working group included representatives from mail order pharmacies; chain pharmacy companies; advocacy groups for individuals who are blind or visually impaired; and industry groups representing pharmacies and pharmacists. The working group’s July 2013 report identified 34 best practices. These best practices offer guidance to pharmacists on how to deliver and provide accessible labels, and their adoption is voluntary. The best practices include those that promote access to prescription drug container label information in all accessible label formats as well as those specific to audible, braille, and large print formats. For example, one best practice that applies to all accessible label formats is for pharmacies not to impose an extra fee on individuals to cover the cost of providing accessible labels or equipment dedicated for prescription drug container label access. The mail order pharmacies operated by the 4 PBMs, some retail pharmacies operated by the 9 chain pharmacy companies, and some of the 18 individual chain and independent retail pharmacy locations that we contacted for this review said they can provide accessible labels as of March 31, 2016. For example, officials from the 4 PBMs reported their mail order pharmacies generally can provide accessible labels, including audible, braille, and large print labels. Similarly, officials from 6 of the 9 chain pharmacy companies reported their retail pharmacies can provide accessible labels. Additionally, officials from 8 of the 18 randomly selected individual chain and independent retail pharmacy locations reported they can provide accessible labels.
Of the 8 individual retail pharmacy locations that reported being able to provide accessible labels, 7 were chain pharmacies and 1 was an independent pharmacy. Furthermore, officials from the PBMs more often reported that their mail order pharmacies can provide audible and braille labels, while officials from the chain pharmacy companies and individual retail pharmacy locations more often reported that their retail pharmacies can provide audible labels. (See table 1.) The four PBMs that can provide accessible labels through their mail order pharmacies dispensed prescriptions with these labels from a central location and delivered them directly to customers. These PBMs used the same technologies to provide audible and braille labels, but differed in how they can provide large print labels through their mail order pharmacies. See table 2 for more information on how these PBMs can provide accessible labels through their mail order pharmacies. The six chain pharmacy companies that can provide accessible labels through their retail pharmacies varied in terms of the accessible label formats they can provide, the number of retail locations that can provide them, and timeframes for providing prescriptions with these labels. For example, officials from one chain pharmacy company reported to us their retail locations can provide accessible labels in all formats, while others reported to us their retail locations can provide accessible labels in one or two formats. Also, officials from five companies reported to us that they can provide accessible labels in all retail locations, while officials from one company reported they can provide accessible labels in one retail location. Further, some of these companies made prescriptions with accessible labels available for same-day pickup, while others delivered them directly to customers.
Officials from the three chain pharmacy companies that cannot provide accessible labels reported that they can make other accommodations, such as providing information on a separate piece of paper in large print. See table 3 for more information on how selected chain pharmacy companies can provide accessible labels. Officials from the four PBMs and three of the six chain pharmacy companies that can provide accessible labels through their pharmacies reported that the percent of prescriptions dispensed with such labels was generally low—less than 1 percent. For example, officials from one PBM stated their mail order pharmacy dispensed an average of about 21,000 prescriptions with accessible labels out of about 11.5 million total prescriptions dispensed each month during the first quarter of calendar year 2016. Officials from another PBM stated that they dispensed about 1,200 prescriptions with accessible labels out of about 3 million total prescriptions dispensed each month during the first quarter of 2016. Similarly, officials from one chain pharmacy company stated that their retail pharmacy locations dispensed an average of about 240 prescriptions with accessible labels out of about 6.5 million total prescriptions dispensed each month during the first quarter of 2016. Officials from the three remaining chain pharmacy companies could not provide us with the percent of prescriptions dispensed with accessible labels. However, officials from one of these companies stated that one of their retail locations dispensed prescriptions with accessible labels to 6 to 10 individuals who are blind or visually impaired each month and dispensed between 3,200 and 5,600 total prescriptions each month during the first quarter of 2016. 
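The "less than 1 percent" characterization follows directly from the monthly counts cited above. A quick arithmetic check (the dictionary keys are shorthand for the unnamed companies, not their names):

```python
# Monthly figures cited in the text: (prescriptions with accessible
# labels, total prescriptions dispensed), first quarter of 2016.
examples = {
    "first PBM": (21_000, 11_500_000),
    "second PBM": (1_200, 3_000_000),
    "chain pharmacy company": (240, 6_500_000),
}

for label, (accessible, total) in examples.items():
    share = accessible / total * 100
    # Each share comes out well under 1 percent.
    print(f"{label}: {share:.3f}% of prescriptions had accessible labels")
```

Even the largest share, about 0.18 percent for the first PBM, is far below the 1 percent threshold the officials cited.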
Officials from the four PBMs, six chain pharmacy companies, and eight individual retail pharmacy locations that we contacted and that can provide accessible labels reported that their mail order and retail pharmacies have generally implemented most of the 34 best practices for these labels. Of these 34 best practices, 14 apply to all accessible labels (henceforth referred to as all-format best practices), 3 apply to audible labels, 7 apply to braille labels, and 10 apply to large print labels. Officials from the four PBMs, four of the six chain pharmacy companies, and eight individual retail pharmacy locations generally reported that their mail order and retail pharmacies have implemented most of the 14 all-format best practices for accessible labels. These all-format best practices include specific recommendations to promote access to prescription drug container label information in all available formats—including audible, braille, and large print labels—and include practices such as pharmacists encouraging patients and their representatives to communicate their needs to the pharmacist. All selected PBMs, chain pharmacy companies, and individual retail pharmacy locations that provide accessible labels implemented practices such as making prescription drug container labels available in various accessible formats, as well as using the same quality control processes for prescription drug container labels in accessible formats as print prescription drug container labels. See table 4 for further detail on all-format best practices implemented in pharmacies by selected PBMs, chain pharmacy companies, and individual retail pharmacy locations. Officials from the four PBMs, five of the six chain pharmacy companies, and eight individual retail pharmacy locations told us that their mail order and retail pharmacies implemented most of the applicable format-specific best practices for audible, braille, and large print labels.
These format-specific best practices include specific recommendations on how to provide these labels, and some of these practices only apply under certain circumstances. For example, six of the seven format-specific best practices for braille prescription drug container labels only apply to hard copy braille labels. The most commonly implemented applicable format-specific best practices across the PBMs, chain pharmacy companies, and individual retail pharmacy locations included speaking in a clear voice when recording an audible label, using transparent materials when embossing braille labels, and printing text in the highest possible contrast for large print labels. See tables 5 through 7 for further detail on the audible, braille, and large print format-specific best practices implemented by PBMs, chain pharmacy companies, and individual retail pharmacy locations. Stakeholders we contacted most often identified three key barriers that individuals who are blind or visually impaired continue to face in accessing prescription drug container label information even after the publication of the best practices in 2013. Some of these stakeholders told us that the best practices have reduced some barriers to accessing prescription drug container label information for individuals who are blind or visually impaired by increasing pharmacies’ awareness of these barriers or encouraging more pharmacies to provide accessible labels. However, other stakeholders told us that the types of barriers that individuals who are blind or visually impaired face have not changed. Inability to identify medications independently. Stakeholders told us that individuals who are blind or visually impaired continue to face barriers identifying medications independently. Without accessible labels, individuals who are blind or visually impaired will need to rely on a pharmacist or caregiver to help them identify medications.
For example, some stakeholders said that pharmacists may offer medication counseling, such as allowing individuals who are blind or visually impaired to feel the size, shape, and weight of their medication and answering questions about dosage or side effects. Pharmacists may also place rubber bands on some prescription drug containers or use differently sized containers to help individuals who are blind or visually impaired identify different medications by their containers. However, according to some stakeholders, these alternative methods may not be reliable; for example, a rubber band may be removed from a prescription drug container or caregivers may not understand medication directions. Further, stakeholders stated that if accessible labels are not securely affixed to the prescription drug containers, then they can fall off and get mixed up, which could increase individuals’ risk for medication errors. Inability to identify pharmacies that can provide accessible labels. Stakeholders told us that individuals who are blind or visually impaired generally do not know which pharmacies can provide accessible labels. Many stakeholders stated that the inability to identify pharmacies that can provide these labels stems from limited or no efforts to advertise accessible labels in pharmacies and no centralized database that provides information on pharmacies that can provide these labels. While officials from the four selected PBMs reported taking steps to inform individuals who are blind or visually impaired about the accessible labels that their mail order pharmacies can provide—such as two PBMs reporting training customer service representatives to ask specific questions to identify individuals who could benefit from prescriptions with accessible labels and help them identify the accessible label that would best fit their needs—other selected stakeholders that operate pharmacies told us that they do not advertise the accessible labels their pharmacies can provide. 
Specifically, officials from 2 of our 9 selected chain pharmacy companies and 4 of 18 individual retail pharmacy locations that submitted questionnaire responses reported to us that they generally do not advertise the accessible labels their pharmacy can provide, or that customers need to ask pharmacists about these labels in order to have them included on the prescription containers. Officials from an advocacy group also reported that individuals who are blind or visually impaired continue to be unable to identify pharmacies that can provide accessible labels because there is no centralized database that provides information on which pharmacies can provide such labels. Officials from the two technology vendors told us that they compiled information on retail pharmacies that can provide accessible labels sold by their companies; however, their databases are limited to locations that can provide their specific products and do not include retail pharmacy locations that can provide accessible labels made by other technology vendors. Inability to obtain prescriptions with accessible labels on the same day as requested. Stakeholders told us that individuals who are blind or visually impaired may be unable to obtain prescriptions with accessible labels on the same day as requested. For example, officials from two chain pharmacy companies stated that individuals who are blind or visually impaired can work with retail pharmacy staff to order prescriptions with accessible labels through mail order pharmacies and have these accessible prescriptions sent directly to these individuals at a later date. Further, officials from these chain pharmacy companies reported that it may take up to 72 hours from the time an individual requests a prescription with an accessible label to the time the individual receives that prescription. 
Officials from one advocacy group raised concerns that this delay in obtaining prescriptions through the mail order pharmacy is unreasonable for certain time-sensitive prescriptions that must be dispensed immediately, such as antibiotics to treat an infection. Stakeholders most often identified four key challenges that pharmacies had in providing accessible labels or implementing the best practices and identified steps that could address some of these challenges. Lack of awareness of the best practices. Stakeholders identified lack of awareness of the best practices by pharmacies and others as a key challenge: Pharmacies (including pharmacists and pharmacy staff). Federal agencies, advocacy groups, technology vendors, and an accreditation organization told us that pharmacies were not aware of the best practices. Further, officials from 7 of 18 individual retail pharmacy locations stated that they first learned about the best practices when we contacted them. Additionally, some stakeholders told us that individuals who are blind or visually impaired are generally unaware of the best practices and, as a result, may not request accessible labels at their pharmacies. Other stakeholders. Other stakeholders that could affect pharmacies’ labeling practices or provide medical services to individuals who are blind or visually impaired were unaware of the best practices. For example, the four states and an industry group representing physicians told us that they were unaware of the best practices prior to our contact with them. After our outreach, one state published an article about the best practices in its newsletter and discussed these practices with pharmacists, pharmacy staff, and the public at two public meetings in May and July 2016. Of those stakeholders who identified this challenge, many stated that greater dissemination of information on the best practices could increase awareness of the best practices. 
Additionally, NCD officials told us that they would continue to disseminate information on the best practices as long as stakeholders remained unaware of them. Low demand and high costs for providing accessible labels. Another challenge that stakeholders identified is that pharmacies had low demand and incurred high costs to provide accessible labels. Officials from five chain pharmacy companies and four individual retail pharmacy locations told us that they have had relatively few or no customer requests for accessible labels. Some stakeholders reported that the demand for these labels does not justify the costs to provide accessible labels. These costs include staff costs—such as training or the time needed to produce these labels—as well as the costs associated with the technology required to produce the labels—such as purchasing software, printers, or labels. Two stakeholders told us that the initial costs to purchase this technology may range from a few hundred to a few thousand dollars for each individual retail pharmacy location. Further, these pharmacy locations may incur ongoing costs, such as annual fees of up to a few hundred dollars to cover technical assistance and other services or fees of up to a few dollars to purchase additional accessible labels. Additionally, many stakeholders stated that it may be costly for larger chain pharmacy companies to implement technology and train staff in many locations, while smaller independent pharmacies may have difficulty absorbing the costs of purchasing the new technology they need to produce accessible labels. Of those stakeholders who identified this challenge, some stated that financial support for pharmacies, such as third-party reimbursement, could address high costs that pharmacies incur to provide accessible labels that meet the best practices. These stakeholders stated that there is currently no direct financial support for providing these labels and these labels are provided free of charge to customers. 
Officials from four chain pharmacy companies told us that pharmacies may be willing to provide accessible labels that meet the best practices if third parties, such as health plans, were willing to reimburse or share in the costs of producing these labels. Additionally, officials from one industry group representing pharmacists stated that pharmacies may be more willing to provide accessible labels that meet the best practices if grant money were available to cover costs for producing these labels. Technical challenges for providing accessible labels. Stakeholders identified some technical challenges for providing accessible labels that meet the best practices. For example, officials from one state and four chain pharmacy companies told us that pharmacies face challenges fitting all the required prescription label information in large print formats on small prescription drug containers. Officials from one technology vendor stated that printing the large print labels in a booklet form, which can then be affixed to the prescription drug container, could address this challenge. Additionally, officials from a chain pharmacy company, a state regulating body, and a federal agency told us that pharmacists typically cannot independently verify information on braille labels to ensure their accuracy. Specifically, three stakeholders expressed concern that pharmacists who cannot read braille cannot determine if the braille translation is accurate and therefore must rely on the accuracy of the braille technology to translate prescription label information to braille. Absence of requirements to implement the best practices. Stakeholders told us that some pharmacies are not implementing the best practices, given an absence of requirements to do so by applicable corporate policies, contracts, state regulations, or accreditation standards. Corporate pharmacy policies. 
Officials from all four PBMs and four of the nine chain pharmacy companies told us that they incorporated some, but not all, of the best practices into their corporate policies that pharmacies must follow. However, officials from three chain pharmacy companies told us that their corporate policies do not include any of the best practices and their retail pharmacies cannot offer any services for individuals who are blind or visually impaired other than what has been approved at the corporate level. Contracts with retail pharmacies. Officials from all four PBMs and all three PSAOs told us that their contracts with retail pharmacies in their networks do not require pharmacies to implement the best practices. Pharmacy accreditation standards. Officials from two accreditation organizations told us that their pharmacy standards do not incorporate the best practices. Pharmacies must comply with standards for the accreditation processes they choose to undergo. Two accreditation organizations reported that they have standards that address services for individuals with disabilities, but these standards are not specific to drug labeling for the visually impaired and do not incorporate the best practices. State regulations. Officials from all four states told us that their state’s regulations do not incorporate the best practices. They also stated that they did not have any plans to update their current regulations to incorporate the best practices; however, officials from one state told us that they may consider doing so in the future. Massachusetts does have a law requiring the provision of large print labels to the visually impaired and elderly upon request, but the font size requirement differs from that of the best practices. Of those stakeholders who identified this challenge, most stakeholders told us that more pharmacies may implement the best practices if corporate pharmacy policies or pharmacy accreditation standards incorporated them. 
For example, officials from three chain pharmacy companies, one advocacy group, one industry group, and one technology vendor told us that pharmacies could implement the best practices if corporate pharmacy policies included them. Further, officials from two individual retail pharmacy locations stated that they require corporate approval to implement any technologies to produce accessible labels that meet the best practices. Additionally, officials from one PBM and one technology vendor told us that more pharmacies would implement the best practices if pharmacy accreditation standards incorporated them. We found that NCD conducted limited campaign activities from July 2013 through August 2016 to inform and educate pharmacies (including pharmacists and pharmacy staff), individuals who are blind or visually impaired, and the public about the best practices. For example, prior to the publication of the U.S. Access Board working group’s report, NCD sent emails to members of the working group to solicit ideas on how the agency could coordinate with working group members to disseminate information on the best practices once they were published. From July 2013 through February 2016, NCD issued an agency statement and two press releases through its website, listserv, and online social media about the best practices and pharmacies’ agreements with advocacy groups to provide accessible labels; hosted a conference call with three advocacy groups to discuss how they could conduct outreach as part of NCD’s campaign; and published a blog post on accessible labels. However, the agency did not conduct any campaign activities in 2015. From June through August 2016, NCD developed a brochure on some of the best practices, disseminated the brochure through its website, and coordinated with the U.S. Access Board, one industry group representing pharmacists, and one chain pharmacy company to disseminate this brochure. See table 8 for a timeline of NCD’s campaign activities. 
Most of the selected stakeholders we spoke with—including PBMs, chain pharmacy companies, states, and advocacy groups—have not had any communication with NCD about its campaign, and, as previously discussed, some were unaware of the best practices. When we first interviewed NCD officials in February 2016, they could not provide us with a fully developed and documented plan for conducting and evaluating the agency's campaign, nor did they do so in subsequent follow-up we had with them through August 2016. However, in September 2016, during a meeting to review NCD's campaign activities, officials told us they had developed a plan in December 2013 for conducting campaign activities that were to occur throughout 2014. These activities consisted of developing a virtual toolkit for stakeholders to use for planning their own outreach, according to documentation NCD provided. However, we determined that NCD did not conduct most of these activities. Subsequent to our September 2016 meeting, officials provided us with a corrective action plan with time frames for conducting future campaign activities through fiscal year 2017, including some of the activities that NCD did not conduct in 2014. The development of this corrective action plan is a positive step toward conducting campaign activities. However, neither the original plan nor the corrective action plan assigned responsibilities for campaign activities. This is inconsistent with federal internal control standards, which indicate that an agency should assign responsibilities to achieve its objectives. Given that most of the activities NCD originally planned for 2014 never occurred, this lack of specificity regarding responsibilities is concerning because it does not provide assurance that the agency will conduct future campaign activities as planned. Further, officials could not provide us any plans for how they will evaluate the agency's campaign activities.
NCD officials stated that the agency has not evaluated its campaign activities, nor does it have any plans to do so, other than tracking the number of likes or retweets on its social media posts. Federal internal control standards indicate that an agency should design and execute a plan to evaluate its activities, document evaluation results, and identify corrective actions to address identified deficiencies. In the absence of a formal evaluation plan, NCD officials will be unable to determine the effectiveness of their campaign activities and make adjustments, as needed. The U.S. Access Board published best practices to make information on prescription drug container labels accessible to the approximately 7.4 million Americans who are blind or visually impaired. However, there continues to be a lack of awareness among a variety of stakeholders that these best practices exist. NCD, the agency charged with conducting a campaign to inform and educate stakeholders about these practices, has not had an effective plan for conducting its campaign and, consequently, conducted limited activities from July 2013 through August 2016. For example, the agency did not conduct most of its planned campaign activities in 2014 and conducted no activities in 2015. Although NCD now has a corrective action plan for activities it intends to conduct through fiscal year 2017, it has not assigned responsibilities for these activities and has not developed an evaluation strategy for its activities, which is inconsistent with federal internal control standards. Without ensuring these elements are in place, NCD will be unable to adjust its corrective action plan and assess whether the information it is providing on the best practices is effectively reaching its target audience. The Executive Director of NCD should assign responsibilities for conducting future campaign activities and develop an evaluation plan for its activities. We provided a draft of this report to the U.S. Access Board and NCD for comment.
Both agencies provided written comments, which we have reprinted in appendixes II and III, respectively. The U.S. Access Board said that it found our report to be complete and accurate. In its written comments, NCD did not specifically state whether it agreed with our recommendation, but signaled its intention to revise its corrective action plan for conducting campaign activities through fiscal year 2017. NCD stated that it has reassessed its plan and is taking action to ensure ongoing compliance with federal internal control standards. NCD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Executive Director of the U.S. Access Board, the Executive Director of the National Council on Disability, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. We developed a web-based questionnaire that included questions on 1. the extent to which pharmacies can provide accessible labels and have implemented the specific U.S. Access Board’s best practices for making information on prescription drug container labels accessible to individuals who are blind or visually impaired (henceforth referred to as best practices); 2. barriers that individuals who are blind or visually impaired face in accessing information on prescription drug container labels; and 3. factors that affect pharmacies’ implementation of the best practices and steps that could address any implementation challenges. 
We sent this questionnaire to pharmacy benefit managers (PBM) that operate mail order pharmacies that centrally fill prescriptions and send them directly to individuals; chain pharmacy companies that operate retail pharmacies in traditional pharmacy locations, supermarkets, and mass merchandise stores; and individual retail pharmacy locations—including chain pharmacies (those with four or more locations under common ownership) and independent pharmacies (those with three or fewer retail locations under common ownership): Four PBMs that manage prescription drug benefits for the four largest private insurers that sponsor Medicare Part D plans as of March 2016. To select PBMs, we analyzed Medicare Part D contract and enrollment data as of March 2016 from the Centers for Medicare & Medicaid Services, which were the most recent available data at the time we began our work. Using these data, we identified the four private insurers that sponsor Medicare Part D plans with the largest percentage of Medicare Part D enrollment as of March 2016—representing a total of about 60 percent of Medicare Part D enrollees—and selected the four PBMs that managed the prescription drug benefits for these private insurers. Nine of the 10 largest chain pharmacy companies as of March 2016. To select these companies, we obtained data from the National Council of Prescription Drug Programs on the 16 chain pharmacy companies with the most retail pharmacy locations as of March 2016, which were the most recent available data at the time we began our data collection. We compared this list of pharmacies to data from the National Association of Chain Drug Stores on their members as of March 2016 and reconciled any differences between these two lists of data. We contacted the 10 largest chain pharmacy companies based on their number of retail pharmacy locations, which ranged from about 480 to over 9,700, and 9 agreed to participate in our study.
Thirty-eight individual retail pharmacy locations that included both chain and independent pharmacies in metropolitan and non-metropolitan areas in the four states for which we interviewed the state pharmacy regulating bodies—California, Florida, Illinois, and Massachusetts. To make our selection, we obtained data as of May and June 2016 on the active licensed retail pharmacies in each of the four states—including pharmacy name, the county in which the pharmacies were located, and pharmacy license number. Then, using the U.S. Department of Agriculture's 2013 Rural-Urban Continuum Codes data, which classifies counties by their population size and degree of urbanization, we coded pharmacies by the county in which they were located. We used the coded data to create two randomized lists—one for pharmacies in metropolitan counties and a second for pharmacies in non-metropolitan counties—to use for selection. Using these lists, we targeted two independent and two chain pharmacies from metropolitan counties, since most pharmacies were located in metropolitan areas, and one independent and one chain from non-metropolitan counties. During the development of our questionnaire, we pretested it with two randomly selected individual retail pharmacy locations (one chain and one independent pharmacy) to ensure that our questions and response choices were clear, appropriate, and answerable. We then made changes to the content of the questionnaire based on feedback obtained from the pretests. We administered the web-based questionnaire from July 2016 through September 1, 2016, and received responses from the 4 selected PBMs, 7 of the 9 selected chain pharmacy companies, and 18 of 38 randomly selected individual retail pharmacy locations. The 18 individual retail pharmacy locations represented 10 chain and 8 independent pharmacies in both metropolitan and non-metropolitan areas in all four of our selected states.
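The stratified random selection described above can be sketched as follows. This is a minimal illustration only: the sample records, the per-state quotas, and the function name are illustrative assumptions, not the study's actual licensing data or selection code.

```python
import random

# Hypothetical pharmacy records; in the actual study these came from state
# licensing data coded with USDA 2013 Rural-Urban Continuum Codes.
pharmacies = [
    {"name": f"Pharmacy {i}", "metro": i % 3 != 0, "chain": i % 2 == 0}
    for i in range(40)
]

def select_sample(pharmacies, seed=0):
    """Randomize the candidate list, then fill per-state quotas:
    2 chain + 2 independent from metropolitan counties,
    1 chain + 1 independent from non-metropolitan counties."""
    rng = random.Random(seed)
    quotas = {  # (metro, chain) -> number to select
        (True, True): 2, (True, False): 2,
        (False, True): 1, (False, False): 1,
    }
    shuffled = pharmacies[:]
    rng.shuffle(shuffled)
    sample, counts = [], {key: 0 for key in quotas}
    for p in shuffled:
        key = (p["metro"], p["chain"])
        if counts[key] < quotas[key]:
            sample.append(p)
            counts[key] += 1
    return sample

sample = select_sample(pharmacies)
print(len(sample))  # 6 selections per state under these quotas
```

Under these assumed quotas, the procedure yields six targeted pharmacies per state, weighted toward metropolitan counties to mirror where most pharmacies are located.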
In addition to the contact named above, individuals making key contributions to this report include Rashmi Agarwal, Assistant Director; Kristin Ekelund, Analyst-in-Charge; Melissa Duong; and John Lalomio. Also contributing were George Bogart, Carolyn Fitzgerald, Laurie Pachter, and Vikki Porter.
About 7.4 million Americans are blind or visually impaired and may face difficulty reading prescription drug container labels. FDASIA required the U.S. Access Board to develop best practices for accessible labels and NCD to conduct an informational campaign on these best practices. FDASIA also included a provision for GAO to review pharmacies' implementation of these best practices. This report examines: the extent to which pharmacies can and do provide accessible labels and implement the best practices; pharmacy challenges; and the extent to which NCD conducted its informational campaign, among other objectives. GAO collected information from 55 stakeholders, including 4 PBMs used by large insurers; 9 of the largest chain pharmacy companies; 18 randomly selected individual retail pharmacy locations in 4 states with varying levels of visually impaired residents; and 24 others, such as state regulating bodies and advocacy and industry groups. GAO sent a web-based questionnaire to PBMs, chain pharmacy companies, and individual retail pharmacy locations. GAO also interviewed stakeholders and reviewed state regulations and documents from NCD. GAO found that some pharmacies can provide accessible prescription drug labels, which include labels in audible, braille, and large print formats and are affixed to prescription drug containers. Mail order pharmacies: Four pharmacy benefit managers (PBMs) used by large insurers that GAO contacted reported that they can provide accessible labels through their mail order pharmacies. Retail pharmacies: Six of the 9 largest chain pharmacy companies and 8 of the 18 selected individual retail pharmacy locations GAO contacted also reported that they can provide accessible labels through their store-based retail pharmacies. The percentage of prescriptions dispensed with accessible labels was generally low—less than one percent of all prescriptions dispensed—according to some PBMs and chain pharmacy companies that GAO contacted.
With regard to best practices, a working group convened by the U.S. Access Board—a federal agency that promotes accessibility for individuals with disabilities—developed and published 34 best practices for accessible labels. Four PBMs, six chain pharmacy companies, and eight individual retail pharmacy locations GAO contacted reported that they have generally implemented most of the 34 best practices for accessible labels. However, stakeholders GAO contacted said that individuals who are blind or visually impaired continue to face barriers accessing drug label information, including identifying pharmacies that can provide accessible labels. Stakeholders GAO contacted identified four key challenges that pharmacies faced in providing accessible labels or implementing the best practices: (1) lack of awareness of the best practices; (2) low demand and high costs for providing accessible labels; (3) technical challenges for providing these labels; and (4) an absence of requirements to implement the best practices. Many stakeholders identified greater dissemination of the best practices as a step, among others, that could help address some of these challenges. The National Council on Disability (NCD)—the federal agency responsible for conducting an informational campaign on the best practices, as required by the Food and Drug Administration Safety and Innovation Act (FDASIA)—has conducted limited campaign activities. Primarily in 2013 and 2014, NCD used its website and social media to disseminate an agency statement and press releases on the best practices. However, most stakeholders GAO spoke with said they had no communication with NCD about its campaign, and some said they were unaware of the best practices. Agency officials provided GAO with an original plan for conducting campaign activities through 2014, but most activities were not conducted. During the course of our review, NCD developed a corrective action plan for conducting future campaign activities. 
However, neither plan assigned responsibilities for conducting these activities nor does the agency have plans to evaluate them, which is inconsistent with federal internal control standards. Without assigning responsibilities and developing an evaluation plan, NCD will be unable to adjust its action plan and assess whether the information on the best practices is effectively reaching its target audience. NCD should assign responsibility for conducting campaign activities and evaluate these activities. NCD neither agreed nor disagreed with the recommendation but indicated that it is taking steps to address this issue.
The United States has for many years funded various USIA broadcasting, educational, and visitor programs in the former Soviet Union to promote democratic ideas. Beginning in the mid-1980s, NED, a U.S.-funded nongovernmental organization, provided small grants to dissident groups throughout the former Soviet Union and funds for journals, videos, and other materials that were distributed in Russia and elsewhere. In 1990, NED began funding political organizing and trade union development work by three of its core institutes. In fiscal years 1990 and 1991, NED, in part through these core institutes, spent about $3 million for activities in or directed toward Russia. Democratic development assistance to Russia increased during fiscal year 1992, after the Soviet Union dissolved. From fiscal years 1992 to 1994, the U.S. government, excluding USIA, provided over $64.2 million in democratic development assistance to Russia, of which $57.3 million was provided by USAID, $5.8 million by NED, and $1.1 million by the State Department for a DOD program. USIA was unable to provide specific funding information for its activities in Russia because they were funded under a regional project. Appendix I provides detailed information about U.S. democratic development assistance to Russia from 1990 to 1994. The democracy assistance program in Russia seeks to capitalize on the historic opportunity to build democracy in place of a centralized Communist system. The U.S. program is meant not only to demonstrate U.S. political support for democratic reform in Russia but also to help create and nurture the full range of democratic institutions, processes, and values. U.S. efforts seek to increase the responsiveness and effectiveness of the Russian government, as well as the ability of Russian citizens to influence decisions affecting their lives. Toward that end, U.S. 
assistance provides support to independent media, democratic trade unions, reformist political parties, and other nongovernmental organizations. It also supports the Russian government's efforts to enhance election administration and election laws, strengthen the courts and other legal institutions, promote civilian control of the military, and improve the quality of public administration. The U.S.-funded independent media program in Russia has helped raise the quality of print and broadcast journalism and contributed to Russia's movement toward an independent, self-sustaining local television network. USAID's Internews project, USIA's grant to the Russian-American Press Information Center (RAPIC), and a number of small grants awarded to Russian nongovernmental media organizations by NED and the Eurasia Foundation have strengthened independent media by donating equipment and broadcast materials to hundreds of local television stations, teaching reporting skills to print and broadcast journalists, and providing training in business and marketing to media managers. According to the State Department, the growth of independent media in Russia began in 1990 during the Soviet era with the official abolition of press censorship. The new openness created a conducive environment for independent news reporting, as print and broadcast media, both still largely state-owned at the time, frequently aired views highly critical of the Communist government. Currently, print and broadcast media in Russia represent a wide range of opinions. Most operate unhindered by the Russian government, and many are privately owned. Russian and U.S. officials said that the principal threat to media independence in Russia today is the weak economy. For many media organizations, advertising revenues are insufficient for continued survival, forcing them into bankruptcy or to join larger affiliates, thereby curtailing their independence and capacity to produce their own programs. According to U.S.
and other observers, many print and broadcast outlets also face pressure from local political authorities or from organized crime, in large part due to their dire financial situations. Internews Network has developed an active working relationship with 200 of the approximately 500 over-the-air broadcasters that currently operate in the countries of the former Soviet Union, the majority in Russia. The technical assistance, training, and programming that Internews provided enabled some local stations to become commercially viable, according to U.S. officials and Russian participants. These officials and participants also said that Internews has helped many stations that have not achieved full commercial viability by providing enough support for them to forestall bankruptcy, enter into sponsorship arrangements, or become affiliates of larger networks. (See app. II.) The USAID-funded election administration project, implemented by the International Foundation for Electoral Systems (IFES), has made important contributions to addressing the legal, institutional, and procedural shortcomings evident during Russia's December 1993 national elections. For example, it assisted in the development of Russia's Voting Rights Act—which was enacted into law during November 1994—and other legislation governing elections for the State Duma (the lower house of the Russian Parliament). Russia now has a permanent and more independent election commission, voting rights, and Duma election procedures that are based in law. This improved the situation prevalent in December 1993, when national elections were held by presidential decree, the Central Election Commission (CEC) chairman was appointed by the President, and the electoral process and administrative apparatus were holdovers from the Communist era.
IFES has also worked with the CEC to develop electoral training and voter education programs to help ensure that electoral procedures are properly carried out and to increase the public's knowledge of and participation in elections. Nonetheless, IFES officials believe that more progress can be made in electoral reform; for example, legislation governing elections for the upper chamber of Parliament had not yet passed. Also, newly passed laws and procedures had yet to be applied and tested to ensure that shortcomings of the December 1993 elections, such as lack of ballot security and inadequate transparency of vote counting and election results, would not be repeated. (See app. III.) Trade union development assistance in Russia, implemented through USAID and NED grants to the Free Trade Union Institute (FTUI), has helped increase the size and effectiveness of democratic trade unions. Using NED funds, FTUI provided important equipment and training for the first independent, non-Communist unions that arose in the late 1980s, that backed Boris Yeltsin and other reformers, and that played a key role in the breakup of the Soviet Union. Since then, FTUI's support for democratic unions, funded by USAID and NED, has helped increase the quality of Russian unions through an extensive education program. It has also assisted in forming regional and national union confederations and has helped increase the public's and government's knowledge of worker and union issues. In addition, using funds first from an NED grant and then from a USAID rule of law contract, FTUI has financially supported efforts to address workers' rights issues through Russia's court system. Although FTUI has helped form or strengthen new democratic unions, it has been hampered by the continued influence of the successors to the official Communist trade unions, the inexperience and isolation of democratic unions, the apathy of Russian citizens, and the weakness of the economy.
During the Soviet era, the Communist trade unions were inseparable from the party and state apparatus. According to U.S. and Russian officials, these old unions remain the largest in Russia, retain many of their assets from the Soviet era, and are therefore less dependent than the democratic unions on collecting dues. They also still exert control over many workers through their continued ability to dispense social welfare benefits in some locations. FTUI has directly supported the largest of the independent labor unions, including Sotsprof (about 300,000 members), the Confederation of Maritime Workers, and the Independent Miners' Union of Russia (about 90,000 members each). Some Russian union leaders we met emphasized that the new independent unions give workers a voice, providing them an alternative to reactionary or nationalist political groups as the difficult economic situation in Russia continues. (See app. IV.) U.S.-funded political party development programs in Russia, implemented through NED and USAID grants to the National Democratic Institute (NDI) and the International Republican Institute (IRI), have not significantly strengthened reformist national political parties, either organizationally or in terms of increased membership or performance in elections. From 1990 through 1992, NDI and IRI used about $956,000 in NED funds to help the anti-Communist Democratic Russia Movement establish a printing facility and disseminate literature. They also conducted civic education and grassroots organizing programs for Russians at the national and local level. Since 1992, USAID has awarded NDI and IRI a series of grants with a combined value of $17.4 million to conduct programs in Russia through 1997. USAID documents state that the overall purpose of these grants is to help reformist political parties strengthen their organizations and their role in elections, Parliament, and local government.
NDI and IRI have developed relationships with many party officials and provided extensive training and assistance. However, because of the inhospitable environment in Russia for political party development, the institutes have had only minimal success in helping to strengthen reformist national political parties, either in their organization or in their election performance. Reformist parties—as demonstrated by their showing in the December 1993 and 1995 elections and by their difficulties in local elections—have been unwilling or unable to form coalitions, build national organizations, or convince large segments of the Russian public to support their political message. In the spring of 1995, USAID, anticipating the poor showing by reformist parties in the December 1995 parliamentary election and additional problems for reformists in the June 1996 presidential election, counseled NDI and IRI to direct more of their resources to working with grassroots nongovernmental organizations. (See app. V.) U.S.-funded rule of law activities conducted under the Democratic Pluralism Initiative have contributed to incremental improvements in reforming Russia’s legal and judicial institutions, and they are beginning to help build a grassroots constituency for legal reform. Through an interagency transfer to the State Department and a grant to the American Bar Association, USAID supported Russia’s limited reintroduction of jury trials and its first steps toward establishing an independent judiciary, as well as commercial law training for the Russian high arbitration court. By the end of 1994, jury trials were operating in 9 of 89 regions in Russia, and the government had enacted legislation intended to increase the independence of the judiciary and to make many other reforms in the criminal justice system. 
However, the widespread reintroduction of adversarial jury trials was often not occurring as scheduled because the Russian Federation and the regional governments did not fund their implementation adequately, citing budgetary constraints. By the end of 1994, Russian judges were only beginning to assert their independence from other branches of government. In late September 1993, USAID awarded a $12.2 million, comprehensive rule of law project to ARD/Checchi to continue support for these reform efforts over a 3-year period, expand them to develop other Russian legal institutions, and encourage grassroots constituencies for legal reform. The project aimed to assist in the development of Russian legal institutions by supporting curriculum changes in Russian law schools, including the addition of commercial law courses and new substantive and procedural code reforms into the curriculum, establishing continuing education for bar associations, providing training to all judges of commercial law courts, and strengthening the new Constitutional Court and the role of the defense counsel in criminal cases. The contractor began to provide assistance in these areas in late 1994 and early 1995. As part of its contract, ARD/Checchi also awarded a $500,000 subcontract to FTUI to support efforts to address workers’ rights issues through Russia’s court system, and it is managing a $2-million small grants program to support U.S. and Russian nongovernmental organizations’ activities to promote the rule of law. As of May 1995, five grants had been awarded under this program. ARD/Checchi was slow to initiate its core project activities. According to USAID officials, the approximately 1-year delay in starting the project was due partly to the inability of U.S. embassy and USAID officials to respond to the contractor’s proposed action plan and to clearly articulate what they expected of the contractor. 
In addition, it took some time for the contractor to (1) establish contacts and design projects with Russia's historically closed legal institutions and establish an office in Moscow, (2) become familiar with USAID's administrative procedures, and (3) negotiate and award contracts and grants to other nongovernmental organizations. It is too soon to evaluate the effectiveness of the contractor's efforts to support reform of the core legal institutions; however, officials from USAID, the State Department, and the Russian government said that systemic change in Russia's legal institutions will be a long-term process, given that the needs in this area are vast and complex. (See app. VI.) U.S. assistance projects intended to strengthen civilian control of the Russian military, including the International Military Education and Training (IMET) program and a USAID-funded Atlantic Council project, had not made much progress toward their goals, primarily because of a lack of interest by the Russian government. U.S. embassy data show that from 1992 through 1994 the IMET program brought 37 civilian and military officials, primarily from the Ministries of Foreign Affairs and Defense, to the United States for training. However, according to the U.S. official responsible for managing the program, civilian candidates have been chiefly mid-level bureaucrats from the Ministry of Foreign Affairs who are not likely to advance to positions of authority. He indicated that the Ministry of Defense is leery of the program and has limited the participation of Russian military officers. The Atlantic Council's 2-year, $626,500 grant from USAID, started in 1992, was hindered by the Council's inability to identify and select Russians to participate in its training programs. According to DOD, the IMET program is primarily a long-term effort to influence the younger, promising officers of foreign militaries who will rise to positions of prominence during their careers.
However, evidence indicates that little progress had been made in identifying and selecting promising officers who are likely to rise to positions of prominence, apparently because the Russian government was unwilling to fully use the IMET program. U.S. embassy officials told us that the Russian military retains firm control of its sphere of operation and that few inroads have been made toward greater civilian control. According to one embassy official, even the Russian Parliament has limited detailed knowledge of the military budget. Uniformed officers also predominate at the Russian Ministry of Defense. U.S. embassy officials said that political circumstances in Russia make the implementation of a U.S. civil-military program there very difficult. There is general antagonism to Western assistance from some quarters of the government, and some suspect that civil-military assistance is designed to further weaken Russia militarily. Also, deep cuts in defense spending have complicated the move toward greater civilian control in the Ministry of Defense, as significant hiring of civilian employees is unlikely, especially given the large number of currently unemployed military officers. (See app. VII.) When USAID began providing democratic development assistance in Russia, it did so without conducting needs assessments or developing a country strategy, as it was under considerable pressure to implement projects quickly. Instead, USAID relied on unsolicited proposals that largely replicated democracy assistance programs underway in Central and Eastern Europe, using many of the same contractors and grantees. State and USAID officials now believe that democratic reforms in Russia may not be consolidated as easily or quickly as they had originally hoped.
They are now focusing less on assisting national institutions and short-term political events, such as elections, and more on the long-term development of local, grassroots organizations capable of building a popular consensus for democratic reform. According to these officials, this means that it may be desirable for the United States to continue democratic development activities in Russia after assistance in the economic reform arena has ended. USAID, USIA, and DOD generally agreed with our report, but they suggested minor changes that were incorporated where appropriate. State said that we should have discussed how the Department of Justice's rule of law program and DOD's exchange program conducted under the Cooperative Threat Reduction program contribute to democratic development. We agree that law enforcement assistance can contribute to democratic development; however, the Department of Justice's project had not begun at the time of our fieldwork in Russia. (We are currently evaluating this project as part of a review of U.S. anticrime assistance to the former Soviet Union.) According to DOD, the Cooperative Threat Reduction program is not democracy-related, although it does occasionally fund some democracy-related activities. NED agreed with our general conclusions but said we should have included projects by the Center for International Private Enterprise in our review because the Center's projects also helped build a constituency for free market democratic reforms. Although the Center's projects may have contributed to democratic reforms, their primary focus was to promote privatization and promarket reforms, two areas outside the scope of our review. NED also provided us with written comments from NDI and IRI, two of NED's core institutes that have operated in Russia primarily as USAID grantees. Both NDI and IRI indicated that the development of reformist political parties in Russia may take many years.
NDI said that its programs have produced positive results and that by their nature, these programs are often long-term investments in individuals, institutions, and processes. IRI said its approach has been to help those Russians dedicated to democracy begin to build democratic parties up from the grass roots. However, evidence indicates that little progress had been made toward the development of reformist political parties despite NDI and IRI’s efforts. Comments from State, DOD, USAID, USIA, NED, IRI, and NDI are reprinted in appendixes VIII through XIV, respectively. We used the State Department’s definitions to determine which assistance programs were democracy-related. These programs included civic education and organization, civil-military relations, human rights training, election reform, media training and development, and legislative, rule of law, political party, trade union, and public administration development. Our scope was limited to an evaluation of projects in the areas of independent media, rule of law, political party development, trade union development, electoral assistance, and civil-military relations. We interviewed numerous U.S. government officials in Washington, D.C., who manage and coordinate their agencies’ democracy assistance to Russia, specifically, officials from the State Department, USAID, DOD, and USIA. We also met with NED officials and officials from NDI, IRI, FTUI, IFES, Internews, ARD/Checchi, the American Bar Association, and the Eurasia Foundation. We reviewed (1) agencies’ strategy papers, program documents, project evaluations, and budget data and (2) grantees’ internal documents, such as trip reports, and their official reporting to the U.S. government on the status and impact of their projects. We also verified the scope of work of some nongovernmental organizations—both Russian and American—that received selected small grants from the Eurasia Foundation and NED. 
We visited five Russian cities in addition to Moscow and St. Petersburg: one north of St. Petersburg, two in the Black Sea region, and two in southwest Siberia. While in these cities, we met with U.S. embassy and agency officials who manage and coordinate democracy projects, as well as with the in-country staff of USAID and USIA contractors and grantees who implement the projects. In addition, we interviewed Russian government officials in the presidential administration, government ministries, and the State Duma to obtain their assessment of U.S.-sponsored democracy projects. We also interviewed numerous Russians who received U.S. training, technical assistance, and financial or material donations, including judges, legal administrators and practitioners, political party organizers, activists and candidates for local elections, union leaders and members, station managers, broadcast technicians, print journalists, election officials, and representatives of women's and youth groups. Also, we attended a number of meetings, seminars, and training sessions held or organized by the U.S. contractors or grantees to observe their activities. We did not evaluate USAID's public administration and nongovernmental organization support projects. We also did not evaluate the University of Maryland's and the Harvard Institute for International Development's legal reform activities because they are sponsored under USAID's Economic Restructuring Project rather than its democracy initiative, although we recognize the link between these programs. We did not review the effectiveness of democracy-related USIA and USAID exchange and visitor programs because of the difficult and time-consuming task of locating individual program participants. We conducted our review between March 1994 and September 1995 in accordance with generally accepted government auditing standards.
We are sending copies of this report to the Secretaries of State and Defense, the Administrator of USAID, the Director of USIA, the President of NED, and the Chairmen and Ranking Minority Members of the appropriate congressional committees. We will also make copies available to others upon request. Please call me at (202) 512-4128 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix XV. U.S. democracy assistance to Russia includes projects funded or implemented by a number of agencies, including the U.S. Agency for International Development (USAID); the Department of Defense (DOD); and the U.S. Information Agency (USIA) through its annual grants to the National Endowment for Democracy (NED). Table I.1 summarizes these programs.

Table I.1: Estimated Obligations for U.S.-Funded Democratic Development Assistance in Russia, Fiscal Years 1990-94

Table notes: The Eurasia Foundation figure includes only democracy-related grants, such as those related to media, the nonprofit sector, and governmental reform. The exchanges figure is the portion of USAID's exchanges project used to support democracy projects.

As indicated in table I.1, the majority of USAID democracy-related funding in Russia involved grants awarded under its Democratic Pluralism Initiative for the New Independent States. Major activities funded by USAID in Russia under this initiative include efforts to develop and strengthen independent media, new democratic trade unions, reformist political parties, laws and legal institutions, election processes, local government, nongovernmental organizations, and civilian control of the military. Some activities under this initiative are implemented through transfers from USAID to other agencies, such as to USIA for journalist training and to State for rule of law activities.
Other USAID democracy-related activities in Russia not part of the initiative include funding for the Eurasia Foundation, which awards small grants for media, public administration, and other projects, and an exchange program USAID uses to support all its assistance projects. DOD shares program management for the IMET program with the State Department. The Secretary of State is responsible for the program’s general direction, and he also recommends funding levels for congressional approval and allocates approved funds to each country. The Secretary of Defense is responsible for planning and implementing the program, including administration and monitoring, within established funding levels. NED provided about $8.8 million from its annual USIA grants for democracy-related activities in Russia from fiscal years 1990 through 1994. Of that amount, about $6.4 million was spent on activities implemented by three of NED’s four core institutes—the National Democratic Institute, the International Republican Institute, and the Free Trade Union Institute—which have also received significant USAID funding. This figure is somewhat overstated because the Free Trade Union Institute’s figures for fiscal years 1990 through 1992 include funds for all of its activities in the former Soviet Union; based on the data available, we could not estimate the amount of funds that were spent on activities in Russia. The remaining NED funds were for its small grants program, which from fiscal years 1990 through 1994 included 64 grants ranging between $10,000 and $100,000 in support of human rights, civic education, public advocacy, and media projects. NED’s fourth core institute, the Center for International Private Enterprise, spent about $572,000 in Russia from fiscal years 1992 through 1994, primarily for grants to governmental and nongovernmental organizations that seek to promote privatization and promarket reform. 
Other USIA programs in Russia currently involve a wide variety of exchange programs and educational and cultural activities, many of which are intended to directly support Russia's transition to democracy. USIA told us it does not maintain specific country funding information because its activities were funded under regional projects. The purpose of the U.S.-funded independent media program in Russia is to ensure the quality and self-sufficiency of nongovernment, or independent, media organizations so that the Russian people have access to truthful information and a forum for open expression. U.S. media projects seek to raise the reporting skills of journalists, provide training in business and marketing to media managers, donate equipment and broadcast material, and facilitate the sharing of news information. We reviewed USAID's 3-year, $4.9 million grant to Internews Network; USIA's 3-year, $600,000 grant to the Russian-American Press Information Center (RAPIC); and a number of small grants awarded to Russian nongovernmental media organizations by NED and the Eurasia Foundation. Overall, we found that the independent media program has helped expand and raise the quality of news reporting throughout the Russian Federation. Nonetheless, independent media in Russia remain insecure, as the difficult economic environment limits advertising revenue and some regional and local authorities continue to use political intimidation against the media. The growth of independent media in Russia began in 1990 during the Soviet era with the official abolition of press censorship. The new openness created a conducive environment for independent news reporting, as print and broadcast media organizations, still largely state-owned at the time, frequently aired views highly critical of the Communist government. Currently, most media organizations operate unhindered by the Russian government and many are privately owned.
The principal threat to media independence in Russia today is the weak economy, according to U.S. and Russian officials and State Department reporting. For many organizations, advertising revenues are insufficient for continued survival, forcing them into bankruptcy or into joining larger affiliates, thereby curtailing their independence and capacity to produce their own programs. Print and broadcast organizations also face pressure from local political authorities or from organized crime. Media organizations are susceptible to such pressure because of their dire financial situations and because many occupy city-owned premises, receive subsidies, or depend on government-owned enterprises for supplies. Media coverage of the conflict in Chechnya was remarkably open, as views highly critical of the government were aired by both state and privately owned television stations and newspapers. Nonetheless, according to the State Department's 1994 Human Rights Report, the Russian government limited access by journalists to some areas of the conflict, claiming the need to protect military secrets and ensure journalists' safety. The purpose of the Internews project is to aid in the establishment of an independent, self-sustaining television news network to provide alternatives to state-owned television. The project is part of a regional grant that also includes activities in Armenia, Azerbaijan, Belarus, Georgia, Kazakhstan, and Ukraine. Project components include (1) journalist and management training programs; (2) equipment procurement; (3) production and distribution of a weekly news program using news reports from local stations; (4) production of public affairs documentaries; and (5) acquisition and distribution of low-cost, quality programming to participating stations to raise viewership and advertising revenue. Our review of the Internews project indicates that it has made a significant contribution toward achieving its purpose.
Internews is the only organization of its kind operating in the former Soviet Union to assist fledgling local independent television stations. The technical assistance, training, and programming that Internews provides have helped some local stations become commercially viable, according to U.S. officials and Russian participants. According to Internews officials, Internews has also helped many stations that have not achieved full commercial viability by providing enough support to keep them from entering bankruptcy, signing sponsorship arrangements, or becoming affiliates of larger networks. Of the 500 over-the-air broadcasters that currently operate in the former Soviet Union, 200, the majority in the Russian Federation, have an active working relationship with Internews. At the national level in Russia, privately owned broadcast companies, such as NTV and TV6, have emerged and challenged the dominance of the state-owned national broadcasting companies. While the state-owned companies are only minimally supervised by the government, the privately owned broadcast companies nonetheless provide competition and diversity to state broadcasting, particularly in Moscow and other large urban centers. According to State Department and Internews officials, television stations at the regional level, some of which were once part of the Soviet-era central broadcasting system, now operate more or less independently. They can choose affiliation with one of the state or private national networks and can use material from these networks or produce their own local programming. The ability of these stations to produce their own local programming provides diversity to news and information, which traditionally has been Moscow-centric. According to Internews, the greatest hindrance to the development of independent local television is the unstable economic situation. Additionally, all local stations depend to some extent on local political authorities.
The Internews project has helped strengthen local independent television stations through the following activities:
- Conducting over 60 training programs in journalism, station management, advertising, and other commercial survival skills for over 2,000 station personnel and journalists. Russian participants told us the training sessions were very beneficial and state-of-the-art. They said the training was hands-on and relevant to running a modern, commercially viable television station.
- Providing grants of video equipment to stations that either lack or have outdated video technology and making production equipment available free of charge to support Russia's indigenous documentary film industry.
- Organizing a network of over 110 independent television stations throughout Russia and neighboring countries, which helps pool limited programming resources.
- Coordinating production of Local Time, a weekly half-hour news program. This program is distributed to any interested local station and is estimated to reach an audience of 100 million people in five countries. As of April 1995, Internews had produced over 110 episodes, with over 40 local stations in Russia alone contributing stories from their regions.
- Producing several docudramas, called What If, on topical legal and political issues, including commercial law, civil law, privacy rights, and private property rights.
- Acquiring and distributing quality Western and domestic documentary programming to over 170 Russian stations free of charge to attract viewers and advertisers, thereby increasing the economic viability of these stations during this transition period.
- Linking more than 20 regional independent stations in a computer-based electronic mail network for editorial coordination and information exchange.

As the Internews project was about to end at the time of our fieldwork, USAID awarded a new $10 million, 3-year consortium grant to Internews and RAPIC to implement a media partnership program.
The program will place U.S. media organizations in association with Russian counterparts to facilitate the transfer of U.S. management expertise, training, equipment, and other resources. However, it was too soon to determine the effectiveness of this project. The objective of the RAPIC grant, which is funded by USIA, is to develop a stable, profitable press in Russia. Elements of RAPIC's program include (1) management training workshops, (2) journalist training seminars, (3) establishment of regional centers to serve as information clearinghouses, and (4) sponsorship of press conferences to provide a forum for an exchange of ideas. Our review of this grant indicates that RAPIC's regional centers have helped strengthen the print media in the regions they serve. The centers provide training to Russian journalists and access to wire services and on-line databases, and they serve as a meeting place for print and broadcast journalists and as a forum for press conferences on a variety of topics, including politics, economics, science, and the arts. According to U.S. officials and Russian journalists we met with at RAPIC centers in Moscow, St. Petersburg, and Novosibirsk, the quality of journalistic reporting has improved notably, especially among the small, regional newspapers. They credited the improved reporting to the training journalists received and to the new sources of information that RAPIC provides. They told us that while the quality of journalism in Russia still needs improvement, newspapers are reporting news in a more objective fashion. The Eurasia Foundation and NED provided small grants, ranging from $10,000 to $110,000, to Russian and U.S. nongovernmental organizations for institutional training and budgetary support to Russia's grassroots media organizations. They also financed specific media projects that give a prodemocracy angle to Russia's economic and political reform process.
Some of the grants funded by the Eurasia Foundation were made to
- Freedom Channel, for a three-part television series on the dangers;
- Duke University, for several projects of the Commission on Radio and Television Policy, including production of a media policy guidebook and an exchange program;
- Freedom Channel, in conjunction with Persona, an independent Russian television production company, for the development and broadcast of programming related to economic reform and prodemocracy topics such as conflict resolution and freedom of speech;
- Globe Independent Press Syndicate, for the "Freedom Link Computer Network" that provides international sources of information to regional newspapers in Russia by electronic mail; and
- KSKA Anchorage, for a training program on radio and television production, basic journalism and communication, and business practices for managers and reporters from radio and television in the Russian Far East.

Some of the grants funded by NED went to
- New Times, for a series of articles exposing the threat of Russian nationalists, fascists, and other extremist organizations and increasing the appeal of democratic solutions to Russia's problems;
- Panorama, for the research, publication, and maintenance of a database on political organizations and political personalities throughout the former Soviet Union;
- Express Chronicle, an independent Russian-language weekly newspaper published in Moscow that specialized in human rights reporting;
- Globe Press Syndicate, for a syndication service that provided small regional newspapers with prodemocracy news and more varied and detailed information about political, economic, and social changes taking place in Russia; and
- Freedom Channel/Persona, a joint American and Russian television project, for the production of prodemocracy documentaries on such topics as conflict resolution and freedom of speech.
The United States helped Russia improve its election administration through a USAID grant to the International Foundation for Electoral Systems (IFES). This 3-year, $10.7 million regional grant enabled IFES to work in any country of the former Soviet Union, provided it had U.S. and host country approval. In Russia, the IFES project objectives were to help make elections free and fair and to increase public participation. IFES conducted a pre-election technical assessment in Russia in June 1993. It subsequently served as a key advisor to Russia's Central Election Commission (CEC) prior to the December 1993 elections. Since then, IFES has been working to help Russian organizations rectify many of the legal, institutional, and administrative shortcomings made evident during the elections. IFES has made several important contributions to improving Russia's electoral administration structure, including contributing to the passage of Russia's Voting Rights Act in November 1994 and, more recently, of legislation governing elections for the State Duma (the lower house of the Russian Parliament). These laws establish a permanent and more independent election commission, as well as voting rights and Duma election procedures based in law. This compares favorably to the situation in December 1993, when elections were held by presidential decree, CEC members were appointed by the President, and the electoral process and administrative apparatus were holdovers from the Communist era. IFES has also worked with the CEC to develop electoral training and voter education programs to ensure that electoral procedures are properly carried out and to increase the public's knowledge of and participation in elections. Nonetheless, despite these efforts, IFES officials believe that more needs to be done to ensure that future elections in Russia will be free and fair.
As of our review, legislation governing elections for the upper chamber of Parliament or for regional and local political bodies still had to be passed. Moreover, newly passed laws and procedures still had to be applied and tested in practice. Countrywide local elections held in 1994 raised concerns about future national elections, as these elections were marked by many irregularities and low voter participation. IFES provided advice and equipment and coordinated international observers for the CEC prior to the 1993 elections but did not have much of an impact on how the elections were administered. According to U.S. officials and IFES reporting, the elections displayed several shortcomings. For example, the CEC lacked independence, particularly from the presidential administration. Presidential decrees continually undermined CEC decisions, and after the election it was the presidential administration, rather than the CEC, that controlled the ballots and first announced the election results. There were also problems with ballot security, incomplete and inconsistent election regulations, insufficient election commodities and technology, and inadequate oversight of campaign finances. IFES's limited impact was due to the difficult political circumstances in which the elections were held and the short period of time available to address shortcomings in the electoral system. The December 1993 elections were called in September 1993, in the midst of a violent standoff between the executive and legislative branches. No legally established, independent apparatus existed in Russia to administer national elections, as President Yeltsin simply appointed the CEC by decree, while leaving intact 88 regional, district, and local commissions that were holdovers from the Soviet era. Such commissions remained closely tied to local political and administrative bodies, themselves little changed since the Soviet era.
As IFES pointed out in its technical assessment published in November 1993, Russia's election administration system suffered from numerous problems on the eve of the December 1993 elections, including weak mechanisms to protect against ballot and electoral fraud and a Russian populace with no experience in multiparty voting. Since the 1993 elections, IFES has worked to ensure the CEC's independence and strengthen Russia's election administrative processes. Examples of IFES project activities follow:
- Providing advice and written commentaries on electoral legislation and other election-related initiatives. In November 1994, Russia passed a Voting Rights Act, which established the CEC and the regional commissions as permanent, legal bodies and ensured political balance in the appointment of commissioners. Since the passage of the act, a new CEC has been appointed. The act reflected IFES's recommendations and included provisions on ballot security, publication of election results, regulations on campaign financing, and mechanisms to improve oversight of local commissions. IFES's advice and comments were also reflected in recently passed legislation governing elections to the State Duma and in draft laws on presidential elections, public referenda, and local elections.
- Helping design and institute a training program for election officials and poll workers to ensure the application and enforcement of new legislation.
- Organizing conferences with the CEC, State Duma members, presidential administration officials, and political party leaders to discuss the role of the CEC and the rights and responsibilities of political parties under the new election laws.
- Holding roundtable discussions on such topics as ballot security, polling procedures, grievance adjudication, and reporting election results.
- Assisting in the establishment of an electoral archive to create an institutional memory of elections in Russia.
- Designing and implementing a national voter education program, in conjunction with the CEC, Ministry of Education, and the media, to provide voters with nonpartisan election information.

CEC officials indicated that they value their collaboration with IFES and hope it continues. Although they believe they have made progress in improving Russia's electoral administration system, they said it would take the experience of holding many elections before elections would run smoothly. U.S. and IFES officials also agree that more work is needed to improve electoral administration, including the passage of additional electoral legislation and assurances that such legislation will be appropriately applied. For example, despite the activities and accomplishments of the CEC and IFES, local elections held throughout Russia over the past 18 months have not fared well. According to IFES officials, the winners of these elections regularly included the heads of local administrations, who were often responsible for the organization of the elections. U.S. financial support for the development of democratic trade unions and support for workers' rights in Russia was provided through NED and USAID grants to the Free Trade Union Institute (FTUI) of $5.3 million for activities from 1990 through 1995 and $7.7 million for activities from 1992 through 1997, respectively. The purpose of U.S. support of democratic trade unions is to give workers a means of participating in the new political and economic environment. According to program documents and U.S. and Russian officials, if workers are not given a voice during this transitional period and come to believe that free markets and democracy work only to their disadvantage, they could pose a threat to social peace and political and economic development. Trade union development assistance in Russia has helped increase the size and effectiveness of democratic trade unions.
FTUI provided important support for democratic trade unions early in their existence, at a time when unions were challenging the Soviet system. The first independent, noncommunist unions in the former Soviet Union arose in the mining regions of Siberia in the late 1980s. These unions backed Boris Yeltsin, Democratic Russia, and other reformist groups and played a key role in the breakup of the Soviet Union. FTUI supported these unions by providing them with equipment and training in Russia and the United States. FTUI’s support for democratic unions since the breakup of the Soviet Union has helped increase the quality of Russian unions through an extensive education program. It has achieved some success in increasing the size of some unions, assisting in the formation of regional and national union confederations, and increasing the public’s and government’s knowledge of worker and union issues. It also financially supports increasingly effective efforts to address workers’ rights issues through Russia’s court system. FTUI’s efforts to help form or strengthen new democratic unions, nonetheless, have been hampered by the continued control the successors to the official Communist trade unions have over workers, as well as by the inexperience and isolation of democratic unions, the apathy of Russian citizens, and the weakness of the economy. The old official unions remain the largest unions in Russia. During the Soviet era, they were inseparable from the oppressive party and state apparatus and until the final years were the only unions allowed. They retain many of their assets, and so are less dependent than the democratic unions on collecting dues. They also still exert control over many workers through their continued ability to dispense social welfare benefits in some locations. The old official unions have been less receptive to reform; for example, they supported the leaders of Parliament in their efforts during the fall of 1993 to overthrow President Yeltsin.
The independent Soviet workers’ movement began as a mass movement in the summer of 1989. With NED funding from 1990 through 1992, FTUI established relationships with and provided financial and other support for the most important independent or democratic unions in Russia, including the Independent Miners’ Union, the Seafarers’ Union, and Sotsprof. Initially, mine workers were the largest source of independent trade union activity in the Soviet Union, with independent miners’ unions generally aligning themselves with the new government of the Russian Republic, led by Boris Yeltsin. During the miners’ strike in the spring of 1991, miners repeated demands first raised in 1990 for radical changes in Soviet political and economic life, including the resignation of top Soviet leaders, and forced the government to cede power from Moscow to republic-level coal ministries. Following the 1989 strikes, FTUI provided the Independent Miners’ Union and other independent unions with equipment, training, and technical advice and brought independent union leaders to the United States for training. Officials we met with representing some of Russia’s first independent or democratic unions, which today comprise the largest democratic unions, told us they greatly appreciated and benefited from FTUI’s support during the early days of their unions’ existence. At the time of our review, many of the early union leaders who were supported by FTUI were working in the government, where, according to union officials, they were attempting to get the government to address labor issues. According to State Department reporting, the growth in independent trade unions occurred as the Soviet Union’s Supreme Soviet and later the Russian government passed laws that formally established the right to strike, improved the legal conditions for independent trade unions, and provided for the right of workers to form or join trade unions.
However, increases in the size and number of independent trade unions were slowed by an economy in crisis; legal harassment and physical violence related to union organizing activities, including threats and intimidation from enterprise management, who, according to free union officials, had the passive support of nonindependent union officials and local politicians; and official trade unions maintaining effective, day-to-day control over the social insurance fund, from which they dispensed benefits such as workers’ vacations and sick pay. Independent union leaders considered the continued control of the insurance fund by official trade unions the biggest obstacle to establishing independent unions. Beginning in 1992, FTUI’s direct support for unions, education, outreach and information dissemination, and legal assistance programs have made varying contributions to the development of new, democratic labor unions. However, two of FTUI’s activities—specifically the research activities of the Russian-American Foundation for Free Trade Union Research and a grant to a human rights organization for a media project—did not make significant contributions to either trade union development or workers’ rights. FTUI’s NED-funded direct support for unions includes training union organizers, subsidizing the salaries of staff and organizers, and providing equipment. After early successes with the Independent Miners’ Union from 1990 through 1992, FTUI’s assistance for union organizing slowed during 1993, as FTUI staffers were focused on starting their USAID-funded program and the Russian director of the organizing activities became ill. However, during the first quarter of fiscal year 1994, FTUI-assisted organizers participated in 40 registration campaigns, helping organize over 3,000 new members for various unions in five different regions. Currently, these organizers are focused on training a cadre of Russian organizers to work directly with unions.
In addition to providing training for union organizers, FTUI directly supports unions by paying the salaries of Russian staffers or interns at several national trade union structures who have, among other things, helped organize unions in several regions, advised unions on draft legislation, and devised wage provisions for tariff agreements. According to FTUI, the efforts of one intern to revise the charter of the Independent Miners’ Union directly led to the doubling of that union’s membership. In addition, FTUI has spent up to $20,000 per quarter since 1991 on donations of computers, fax machines, and other office equipment for the Independent Miners’ Union and other unions. We observed FTUI-donated equipment at the headquarters of a regional affiliate of the miners’ union in Siberia and found that it was in good working condition; officials described it as essential to their operations. FTUI has directly supported the largest of the independent labor unions, including Sotsprof (about 300,000 members), the Confederation of Maritime Workers (about 86,000 workers), and a regional affiliate of the Independent Miners’ Union of Russia (about 95,000 members). According to FTUI and U.S. officials, the independent labor movement grew to between 3 million and 5 million workers by late 1994, out of a total workforce of 60 million to 75 million, the large majority of which belonged to unions. About 2.2 million members of the independent labor unions are part of the Mining and Metallurgy Union, which broke away from the old official trade union in 1993; this union has received FTUI’s help in its reform efforts since then. According to State Department human rights reports, growth in the democratic workers’ movement during 1991 and 1992 resulted in several hundred union-like organizations forming across Russia; however, most were small and served more as workers’ associations and did not appear to carry out traditional labor activities.
An FTUI official told us that many of the unions that FTUI helped register over the past few years were not true trade unions because of their small size. Under Russian law, organizations can register as unions with membership as small as 15 people. At the time of our visit, FTUI was beginning to explore ways of helping these small organizations become larger, viable unions. In 1994, the majority of Russian workers still belonged to the old official union, the successor to the Soviet-era Communist union center, even as the membership declined from 65 million to 50 million. Despite the loss of members, the old official union retained its historical influence with the government and enterprise management, as well as many of the privileges and control mechanisms that existed in the Soviet era. The decision of the Mining and Metallurgy Union to split from the old official union raises the issue of whether FTUI should work with this union to facilitate additional splits. FTUI generally opposes working with the former Communist party unions because it believes these unions are not reformable and any relationship could undermine its work with proreform unions. FTUI believes it is more effective to build new union structures rather than attempt to reform the old official unions, as these unions are led by enterprise managers and former Communist party functionaries. Additionally, FTUI officials said that the Mining and Metallurgy Union is unique in supporting reform and still has a long way to go in changing the way it operates. A U.S. embassy official we spoke with was sympathetic to FTUI’s position; however, this official believed there may be some opportunity for FTUI to facilitate splits within the old official Communist union.
Despite FTUI’s position against working with the old official union, since 1992 FTUI has provided the American Federation of Teachers with $160,000 to, among other activities, help members of the old official union democratize their affiliated unions or form or join independent trade unions. However, FTUI officials recognized many obstacles to getting more unions to break away from the official trade union. In commenting on this report, USAID said that it fully supported FTUI’s opposition to working with the official trade union. USAID believes that FTUI’s strategy of building new union structures is the appropriate course of action. FTUI helped independent trade unions improve their operations through an extensive education program for union leaders and members. This program was implemented by FTUI in 1991 and 1992 with NED funding. Since 1993, the education department of the Russian-American Foundation for Trade Union Research and Education, an organization FTUI created with USAID funding, has managed this project. From 1992 through 1994, about 15 seminars and conferences were held in about 8 cities, covering issues such as collective bargaining, protecting workers’ rights in the courts, and union organizing. Union members we met praised FTUI educational seminars and conferences, particularly those focusing on legal issues such as how to favorably resolve employer-employee disputes. FTUI’s outreach activities—funded by USAID and NED—entail frequent trips by FTUI staff to various parts of Russia to meet union leaders, introduce them to FTUI programs, and provide informal consultations. According to project documents and independent union officials, these activities resulted in the formation of cooperative relationships between Russian unions and international confederations and in independent unions forming regional and national confederations.
For example, FTUI staff facilitated the formation of the Confederation of Maritime Workers, which includes the dock workers, seafarers, and port workers unions. The confederation has a combined membership of about 86,000. FTUI also provides information to the public and government on union and labor issues. For example, the Prologue Society, with funding from FTUI through its NED grant, publishes a labor newspaper that has reached a circulation of 60,000 and is distributed throughout most of the country. Its readership includes members of the Parliament and the Kremlin, where, according to an FTUI official, articles from the newspaper are included in President Yeltsin’s daily news clippings. While union members we met were divided on the usefulness of the paper’s coverage, the leader of the Independent Miners’ Union said the newspaper plays a key role in his union’s media campaign. Using USAID funds, FTUI also provides public information through the Russian-American Foundation for Trade Union Research and Education. The foundation has press correspondents in about 36 locations who write articles for local papers. The foundation distributes press releases to two Russian news agencies and reports and press clippings on trade union activities to trade unions throughout the country. Its material is also used on a popular radio program and a television program. With funding from USAID, FTUI supports the Glasnost Defense Fund in its production of a twice-weekly radio program on workers’ rights and free trade unions. The Glasnost Defense Fund is a major human rights organization in Moscow that focuses on press freedom. The Fund selected five large industrial cities—four in Russia and one in Kazakhstan—to tie into an electronic mail network. However, the Fund director told us that only a minor portion of the half-hour show is spent on worker and union issues because listeners are more interested in other issues of local concern.
He said the activity contributes more to an independent media than to union development because it attempts to make local broadcast stations less reliant on central authority for material and more responsible for their own operations. The Russian-American Foundation for Trade Union Research and Education initially had problems managing its research component, though some improvements were evident by late 1994. Foundation research is supposed to inform unions of the economic, social, and legal aspects of the workers’ movement to help unions better represent their interests at the national and local levels. However, FTUI officials told us that during 1993, the foundation’s first year, little research was actually done because many of the researchers assisted the foundation’s former director in trying to form a political party, not in doing research. Moreover, FTUI officials told us that what little research had been completed had to be edited extensively because it was too theoretical. Nonetheless, after the foundation’s director was replaced and more of the research and writing were done on a contract basis, a number of practical brochures on union organizing and management were finally published. One brochure, entitled “Legal Bases for Negotiating and Collective Bargaining,” went through two printings due to high demand. FTUI funded these brochures and the foundation’s other research activities from its USAID grant. Using NED and USAID funds, FTUI supported two labor law centers that have helped improve access to the legal system for trade unions and their members. USAID’s rule of law project first identified the potential for these centers through a USAID-funded needs assessment. In early 1994, FTUI began establishing the centers using $195,000 received from NED. In August 1994, FTUI provided about $465,000 (USAID rule of law funds) in grants to the centers to cover their operational costs.
One of the centers is part of the Russian-American Foundation for Trade Union Research and Education in Moscow and the other is based in Yekaterinberg. The two centers, which together have 8 full-time lawyers, supplemented by volunteer work from law students, successfully litigated about half of the 50 cases they had brought to court at the time of our review. Most of the cases involved violations of workers’ rights, such as illegal firings or breach of bargaining agreements. In addition to providing pro bono litigation services, the centers support unions by participating in collective bargaining negotiations and in the foundation’s educational programs and by providing unions with materials on legal issues. According to the State Department, independent trade union officials were increasingly aggressive in pressing their cases in the Russian courts during 1994, with increasing rates of success. Union officials told us that the legal aspects of union organizing and management were especially important, and they praised the services provided by the FTUI-supported law centers. U.S.-funded political party development programs in Russia had not significantly strengthened reformist national political parties, either organizationally or in terms of increased membership or performance in elections. The U.S. government has supported the development of political parties in Russia through NED and USAID grants to the National Democratic Institute (NDI) and the International Republican Institute (IRI). From 1990 through 1992, NED provided $956,000 to NDI and IRI to help the anti-Communist Democratic Russia Movement establish a printing facility and disseminate literature. NED’s funding also enabled NDI and IRI to conduct civic education and grassroots organizing programs for Russians at the national and local levels. In addition, NED provided NDI and IRI $200,000 to monitor the April 1993 national referendum and to send Russian party leaders to the United States for training.
Beginning in 1992, USAID awarded NDI and IRI a series of grants that totaled about $17.4 million to conduct political party development programs in Russia through 1997. According to USAID documents, the overall objective of these grants was to help reformist political parties strengthen their organizational structures and their role in elections, Parliament, and local government. The grants were also intended to strengthen reformist parties indirectly by providing support to civic organizations and encouraging them to work with parties and by monitoring elections and promoting public participation in politics. NDI and IRI held numerous seminars and training activities for party leaders in Moscow and for activists in over 20 cities and regions prior to the April 1993 referendum and the December 1993 national elections. Nonetheless, reformist political parties performed poorly in the December 1993 elections. In Russia’s inhospitable environment for political party development, NDI and IRI were able to develop extensive relationships with party officials and provide training and assistance. However, despite the institutes’ work, reformist parties have been either unwilling or unable to form broad-based coalitions or build national organizations, and large segments of the Russian public have not been receptive to their political message. NDI and IRI officials acknowledge that the Russian environment is difficult for political party development; however, they believe that their programs are important for furthering Russia’s democratic development.
NDI and IRI noted that the development of a strong multiparty system has been made more difficult by Russia’s lack of democratic traditions, the Communist party’s 70-year hold on Russia (a far longer span than in Eastern Europe), and the public’s general aversion to any organization characterized as a “party.” While NDI and IRI officials agreed that the environment is less than conducive to reform, they do not see this as a reason not to pursue political party development. In commenting on this report, USAID agreed that strengthening Russian political parties has been difficult. It said that, consequently, since 1994 it has attempted to focus NDI’s and IRI’s programs by targeting them exclusively on six cities each. USAID believes that these targeted programs are having greater impact than earlier national efforts. USAID officials cautioned that expectations should not be too high and that its assistance would likely have only a minimal impact on the performance of the democratic parties during the December 1995 parliamentary election and the June 1996 presidential election. In early 1995, USAID foresaw a poor electoral showing by the reformist parties in Russia in the upcoming parliamentary and presidential elections and counseled NDI and IRI to direct more of their resources to working with grassroots nongovernmental organizations, thereby supporting the overall shift of the U.S. democracy program to developing a democratic civil society. From 1991 through 1993, NDI and IRI held multiparty seminars and single-party consultations throughout Russia. These seminars provided information on party organization and campaign techniques, and participants were given training videos and reference materials on U.S. parties and campaigns. For example, from late 1991 (with NED funding) through 1992 and 1993 (with USAID funding), IRI conducted party training in 19 cities from northern and western Russia to eastern Siberia.
NDI provided training for the day-to-day organizers and managers of democratic political parties in Moscow and the regions. Despite these efforts, the December 1993 parliamentary elections, called just months after a violent standoff between President Yeltsin and Parliament, proved to be a disappointment for democratic and reformist parties. In the State Duma, the lower house of the Parliament, the Liberal Democratic Party of Vladimir Zhirinovsky (which is neither liberal nor democratic) did best with 23 percent of the popular vote, and Russia’s Choice, the liberal reformist bloc headed by former prime minister Yegor Gaidar and nominally allied with President Yeltsin, was second with only 16 percent. In total, nationalist and Communist blocs won a plurality and outpolled proreformist blocs by 9 percent. Although Russia’s Choice gained the most seats of any party (66 party list and single mandate seats), reformist parties as a group won only 112 seats, not enough to control the 450-seat State Duma. Election observers cite a variety of reasons for the poor showing of the reformist parties. These parties had been declining in cohesion since the breakup of the Soviet Union in August 1991, principally because President Yeltsin postponed calling elections for a new Russian Parliament, leaving intact the Parliament that had been elected in 1990 during the Soviet era. Without elections to focus their activities, Democratic Russia and other groups that had played such a large role in the collapse of communism failed to take the steps necessary for transitioning from opposition movements into political parties that could succeed at the ballot box. Democratic groups also declined in popularity from 1991 through 1993, as they were associated with the economic hardship the country was experiencing. Consequently, the successors to the Communist party of the Soviet Union, the Communist and Agrarian Parties of Russia, staged something of a resurgence.
Numerous far-right, nationalist movements, such as the Liberal Democratic Party, also increased their organizational and popular strength during this difficult period of economic and social transition. According to numerous observers, the reformist parties made many strategic and tactical errors during the December 1993 elections, thereby compounding their weaknesses. For example, although Russia’s Choice ran with the Democratic Russia Movement, the reformists still ran as four separate parties or blocs, thereby splitting their votes. Also, notwithstanding NDI’s and IRI’s efforts, these parties pursued a Moscow-focused campaign strategy. They failed to reach out to or build regional organizations and to present clear, convincing campaign messages. Since the 1993 national elections, NDI and IRI have continued working with party activists throughout the country, encouraging the formation of coalitions and teaching organizational and campaign techniques. However, the situation for reformist parties since the December 1993 elections has only marginally improved. Some of them—such as Russia’s Choice, Yabloko, and the Party of Russian Unity and Accord (PRES), all participants in NDI and IRI programs—now recognize the need to build national organizations and have taken some steps to do so. However, according to U.S. and Russian officials, these reformist parties’ organizational presence outside of Moscow remains weak, and they did not make significant gains in local elections that took place across Russia over the last 18 months. They remain Moscow-centered, highly fractionalized, and separated more by personal ambition than ideology. NDI and IRI officials acknowledged that reformist parties have remained weak, but they said that the institutes’ training programs since 1993 have increased the organizational capacity of some parties. An IRI official said that during 1994 and 1995 it trained about 3,000 party activists, many of whom returned for advanced training.
This official told us that IRI’s approach has been to help Russians build democratic parties from the grass roots up, the necessary ingredient for a strong national organization. He said that Russia’s reformist parties have persisted in their efforts to build their organizations and field candidates despite the unpopularity of their free market message and the historically negative view of “party” (a harsh memory from the days of Communist party control). According to an NDI official, in 1995 NDI observed markedly different behavior among parties with which it was working. NDI observed that the parties were targeting communication to voters based on demographic and geographic information from the previous elections; conducting research on voter attitudes through focus groups and polling; contacting voters through small meetings, coalitions with civic groups, door knocking, and leaflets; and relying on party activists who considered party organizing their full-time job. NDI also said that although a formal democratic coalition had not emerged for the December 1995 elections, there had been considerable coordination of candidates in single-member districts. NDI attributed this coalition building to its round table discussions on cooperation held in December 1994 and April 1995. NDI noted that coalition members in one city pledged to nominate one joint candidate in each single-member district for the December 1995 parliamentary elections. Evidence, however, suggests that successful coalition building has not taken place at the national level. For example, due to personality conflicts, the two large pro-reform election blocs of 1993, Yabloko and Russia’s Choice, had split into 11 different parties and movements by the December 1995 parliamentary elections. According to many Russian political activists and some U.S. officials, political party training will ultimately affect the development of Russia’s political parties only at the margins.
The Russian political activists said that viable reformist political parties may only emerge after more than a decade, and their development will depend mostly on efforts to build a democratic civil society. That is, even if NDI and IRI can teach reformist parties how to campaign and organize effectively, the parties will win elections only when the Russian people are receptive to a reformist, democratic political message. U.S. officials said that the program can have an impact, although the impact will be narrow in scope due to the size and complexity of Russia and its politics. According to these officials, IRI and NDI projects are not expected to significantly influence the development of national political parties in Russia. According to NDI and IRI, although their party development programs are likely to affect Russia only at the margins, their services are in high demand and they do have a visible impact on numerous individual party officials or candidates who use their advice. Despite the difficult environment for political party development, NDI and IRI have developed contacts with thousands of democratic activists throughout Russia, regularly holding seminars and consultations and providing information and other materials. Numerous Russian officials and activists in several cities who had participated in NDI and IRI programs praised NDI and IRI training for increasing their knowledge of campaign techniques, bringing reformist parties together, and encouraging people who had never participated in politics to become political activists and candidates. They also said that NDI and IRI written materials were an effective means of communicating practical experience and that they wanted more, rather than less, assistance. Among these officials and activists were leaders of Russia’s Choice and the Social Democratic party, State Duma deputies from reformist parties, and an official at the Kremlin responsible for parliamentary affairs.
Nonetheless, NDI and IRI have had mixed results in getting Russians to use their campaign techniques. Some Russian political activists cited examples of how they could adapt certain techniques to their campaigns; for example, one candidate told us that he followed NDI’s suggestion and developed a political map to target his campaign literature to people most likely to vote for him. Senior IRI and NDI officials stressed that their techniques are being used in Russia. They cited as examples reformists who won elections using local phone banks and door-to-door canvassing, despite initial reluctance by some. Many Russian political activists, however, told us that the training was not always applicable to Russia. For example, they said that some U.S. political or campaign practices such as phone banks and door-to-door canvassing could not be fully used or were unsuccessful in Russia because of technological and cultural factors. An IRI official in Russia told us that he realized some U.S.-style campaign techniques would not work in Russia and that he was working to make IRI activities more relevant to the Russian context. A number of participants in NDI’s and IRI’s political party and civic advocacy programs indicated that to better promote democracy in Russia, the United States should support more civic education activities. The political party participants spoke favorably of U.S. support for sending Russians to the United States for training but said that NDI, IRI, or other U.S. nongovernmental organizations could work at schools or other Russian institutions to teach Russians the principles of self-government, the responsibilities of citizenship, and the benefits of democracy in general. Such efforts may convince Russians to support reformist parties’ message, complementing ongoing NDI and IRI efforts to improve the organization and campaign techniques of these parties.
Further, many participants told us that NDI’s and IRI’s civic advocacy seminars provided them with information on creating coalitions of civic organizations and attracting people, particularly women, to social movements that could influence government. However, they also told us that the United States could better support civic groups by helping them address issues of broader social concern such as crime, drugs in school, and women’s unemployment. According to IRI officials, the goal of IRI’s civic advocacy program is to help these groups see the importance of being involved in the political process. However, while IRI sponsors political events such as candidate debates or women-in-politics seminars, it does not sponsor events on local civic issues such as crime or drugs in school. According to IRI, an indicator of its success in the civic advocacy area is that many of its trainees become candidates for national and local offices. For example, following a February 1994 women-in-politics seminar, four women decided to run for the City Duma in their hometown; three won. NDI officials told us that their civic advocacy programs have promoted coalitions among civic groups and enhanced communication between these groups and political parties and local governments. For example, in preparation for the December 1995 parliamentary elections, NDI conducted programs in Moscow, St. Petersburg, Yekaterinberg, and Nizhnii Novgorod on ways that civic groups could voice their interests, such as through sponsoring candidate forums and debates, distributing candidate questionnaires, and providing volunteers and resources to campaigns. For purposes of our review, however, we included these activities as political party development. Russian political activists told us that many organizations participating in these civic advocacy programs served as political bases for local politicians who were running for office, not as traditional civic organizations.
According to one of these activists, he decided to use civic organizations as a political base when he saw that the Russian public has an “allergy” to any organization characterized as a “party.” According to USAID officials in Moscow, civic education in schools is the one area where the USAID democracy portfolio is lacking, but such a program would be very costly or too diffuse in a country as large as Russia and could offend Russian nationalist sensitivities. Instead, USAID is funding informal civic education activities through nongovernmental organizations. For example, from June 1993 through July 1995, the Eurasia Foundation, a USAID grantee, provided about 100 small grants to U.S. and Russian nongovernmental organizations in the areas of legal reform, conflict resolution, democratic institution building/civic education, and nongovernmental organization development. In addition, USAID is encouraging NDI and IRI to place less emphasis on their party training programs and more on their work with civic organizations. USAID has also started a $5.5 million project that provides funds for the institutional development of Russian nongovernmental organizations. NED and one of its core institutes, the Center for International Private Enterprise, are also funding informal, and to a lesser extent formal, civic education activities. From fiscal years 1990 through 1994, almost all of NED’s discretionary grants funded nongovernmental organizations in the areas of human rights, civic education, public advocacy, and independent media. For example, in 1994, NED sponsored an international conference in Russia on civic education and financially supported the publication of weekly articles for civics instructors in a leading Russian teachers’ newspaper. The Center for International Private Enterprise also gave a small grant that was used to develop a civic textbook on economic and democratic reform. 
U.S.-funded rule of law activities conducted under the Democratic Pluralism Initiative thus far have had a limited impact on reforming Russia’s legal and judicial institutions and are only beginning to help build a grassroots constituency for legal reform. Through grants to the State Department and the American Bar Association, USAID supported Russia’s limited reintroduction of jury trials and its first steps toward establishing an independent judiciary. In September 1993, USAID awarded a $12.2 million, comprehensive rule of law contract to ARD/Checchi. This contract was designed to continue these efforts in Russia over a 3-year period and expand them to strengthen the laws, legal structures, and civic organizations that provide the necessary operating framework for democratic, market-oriented societies. Specifically, ARD/Checchi provided funds for an assessment of Russian legal needs and used this information to develop an action plan for USAID’s rule of law project. ARD/Checchi also designed and started to implement activities that would support the development of Russia’s legal institutions, and it awarded subcontracts and subgrants to nongovernmental organizations for voter education prior to the December 1993 elections, legal assistance for trade unions, and the development of civic organizations. This project’s core program was not implemented for about a year due to problems related to the process of designing the program and poor contractor performance. Two USAID-funded projects contributed to incremental changes in the Russian criminal justice system and judicial institutions from 1992 through 1994. With USAID funding, the State Department and the American Bar Association implemented two small projects that were designed to help increase the independence of the judiciary and support Russia’s reintroduction of jury trials. The reintroduction of jury trials in Russia is a major reform initiative, both substantively and symbolically. 
Russian legal reformers hoped that the reintroduction of jury trials would lead to a more open and fair adversarial courtroom procedure. Jury trials would replace the Soviet-style system in which, according to State Department human rights reports, criminal procedures are still weighted heavily in favor of the prosecution, and defendants are expected to prove their innocence rather than prosecutors being required to prove their guilt. Beginning in May 1992, the U.S. embassy’s political office used USAID funds to support a Russian-sponsored jury trial initiative and establish contacts with Russian legal reformers. Under two agreements with USAID, the State Department received $200,000 for rule-of-law activities in Russia. Using these funds, the U.S. embassy’s political office funded seminars, including one held in 1994 at which U.S. and Russian experts evaluated the preliminary results of the jury trial initiative and discussed future steps in U.S.-Russian cooperation. The office also provided travel funds for experts who would design publicity materials associated with jury trials and the new Russian constitution. According to USAID officials, the State Department’s small project was not designed to be long-running or sustainable. Instead, it was designed to act as a bridge and establish contacts for a larger USAID project. USAID officials told us that the early years of State’s project were very successful but that the activity was no longer needed. USAID stopped funding this activity in March 1995. Beginning in mid-1992, the American Bar Association provided technical assistance for Russia’s judicial restructuring and reintroduction of jury trials. The American Bar Association operated under a 2-year regional grant that totaled about $3.2 million, of which about $950,000 was used for assistance to Russia. 
Activities included holding three training workshops in Russia and Washington, D.C., that covered judicial restructuring, constitutional reform, and jury trial advocacy for criminal defense attorneys; providing immediate assistance in circulating and commenting on 12 draft laws within the United States, including the draft labor code, draft constitution, and draft law on state support of small business; giving equipment to Russian legal institutions; hosting exchange visits between Russian and American judges; and developing a bench book to guide judges during jury trials. By the end of 1994, jury trials were operating in 9 of 89 regions in Russia, and the government had enacted legislation intended to increase the independence of the judiciary. However, although the former Supreme Soviet and the present Parliament, with the active encouragement of the President’s staff, enacted many legal reforms through 1994, neither the Russian federal government nor the regional governments adequately funded their implementation. As a result, the widespread reintroduction of adversarial trials with juries was not occurring as scheduled because many courtrooms had not been renovated, many judges had not received necessary training, and funds were not available to pay jurors’ stipends. Despite the government’s long-term efforts to reform the judiciary, at the end of 1994 judges were just beginning to assert their independence from other branches of government. By September 1995, expansion of the jury trial initiative or further improvements in the criminal justice system appeared to have minimal support from the Russian government. According to the State Department’s human rights report, the limited progress that Russia had made was undercut by two decrees issued by the President of the Russian Federation in June 1994. 
In his desire to combat increasing crime, President Yeltsin signed two decrees that contradicted constitutional rights to protection against arbitrary arrest and illegal search, seizure, and detention. Further, according to a USAID official, the Russian government did not fund the expansion of jury trials to the planned five additional regions. Moreover, the Russian government official who was pushing reforms in the criminal justice system left the government in the late summer of 1995. Although the ARD/Checchi contract funded many projects, its primary focus was to strengthen core Russian legal institutions. The contract was to include judicial training programs; law school support, including adding commercial law courses and new substantive and procedural code reforms into the curriculum; legal information programs; public and professional legal education; support for the Constitutional Court; and training for the procuracy, which in Russia includes the functions of prosecutor, investigator, attorney general, ombudsman, and consumer affairs. ARD/Checchi was also to have assumed primary responsibility for supporting the reintroduction of jury trials. Our review showed that the contractor’s efforts in these core areas had little impact during the first year because of problems related to the interagency approval process for the contractor’s work plan, the complexity and magnitude of the contractor’s tasks, and poor contractor performance. ARD/Checchi took about a year to start implementing its core legal reform activities as finally approved by the USAID mission. ARD/Checchi required several attempts to draft an action plan that was acceptable to USAID and the U.S. embassy interagency working group on the rule of law. According to a USAID official, the interagency working group contributed to the delay because it did not have a clear idea of what it expected from ARD/Checchi. 
ARD/Checchi’s progress was further slowed by its organizational and personnel problems and unfamiliarity with USAID’s contract, procurement, and program requirements. According to USAID officials, ARD/Checchi’s assessment team did an excellent job analyzing Russia’s legal situation and identifying key institutions and officials; however, the contractor was ineffective in translating that information into deliverable assistance during the first year. The ARD/Checchi project was also hampered by limited support from the USAID mission in Moscow, which was struggling to implement the entire Russian assistance program and was preoccupied with the December 1993 parliamentary elections. According to USAID officials, the mission was understaffed during the initial program phase and had little technical expertise to manage such a complex contract. Thus, ARD/Checchi, as well as other contractors, was largely left to its own devices to implement its projects. USAID officials told us that during the first year, USAID was preoccupied with assisting the Russian State Duma on commercial law activities and trying to manage the approximately 200 contractors and grantees starting work on USAID programs. As a result, USAID was unable to provide effective oversight and assistance to the contractors at the start of the projects. A complicating factor for the rule of law program in general, and ARD/Checchi in particular, was the need to forge working relationships with Russia’s historically closed legal institutions. Although the U.S. embassy’s political office had established contacts within the presidential administration, ARD/Checchi spent most of its time during its first year establishing contacts with other legal institutions such as the Academy of Jurisprudence, the Supreme Commercial Court, the Procuracy Training Institute, state law academies, leading Russian law schools, and the Constitutional Court. 
ARD/Checchi officials told us that identifying the key administrators and reformers and establishing effective working relationships within institutions was a complex and time-consuming task. Further, according to a USAID official, ARD/Checchi spent a good deal of time negotiating subcontracts with organizations unfamiliar with having a subcontractor relationship with USAID. The USAID mission in Moscow attributed the delay in developing an action plan to (1) the preoccupation of government counterparts during the government crisis in the fall of 1993, (2) the difficulty in designing programs for nonreformed Russian government institutions, and (3) the lack of experience of ARD/Checchi’s first chief of party in project management. The USAID mission believes that the interagency approval process did not contribute to delays in the project. We noted a significant increase in activity under ARD/Checchi’s work plan starting in the last quarter of 1994 through the first half of 1995. After a change in the management of ARD/Checchi’s Moscow office in late 1994, ARD/Checchi began to provide training programs, equipment, and reference materials to Russia’s core legal institutions. For example, it provided training for Supreme Commercial Court senior faculty, conducted by faculty of the National Judicial College in Nevada; case management and computer training, reference materials, and equipment to the Commercial Court to meet its expanding caseload; training programs on bench trials and judicial ethics; curriculum expansion, information system modernization, and trial advocacy workshops at Russia’s first-rank law schools; training and computer hardware and software to the St. 
Petersburg State University Law Faculty in the use of legal databases and electronic mail to promote the flow of legal information to the legal community; educational films for judges, jurors, and the public on jury trials and the construction of a mock courtroom for the training of judges from general jurisdiction courts; and training programs for senior-level trainers and teaching equipment upgrades at the Procuracy Training Institute. In August 1994, USAID awarded the American Bar Association a $2.5 million, 2-year grant, of which $700,000 is budgeted for its project in Russia. Under this grant, the bar association is assisting Russian lawyers’ associations in strengthening their institutions, establishing new associations, and developing continuing legal education programs. It is too soon to evaluate the effectiveness of ARD/Checchi’s core legal reform efforts and the latest American Bar Association project since they had only started in late 1994 and early 1995. However, USAID, State Department, and Russian government officials told us that systemic changes in Russia’s legal institutions will be a long-term process. ARD/Checchi’s contract also included a component designed to encourage grassroots efforts to promote the rule of law. In early 1995, ARD/Checchi started a $2 million small grants program, which will provide small grants to Russian organizations and their U.S. partner organizations that are pursuing legal reform or providing legal services. ARD/Checchi had not awarded any small grants at the time of our fieldwork in Russia, but in March 1995 it awarded five grants (totaling $475,000) in the areas of environmental law, community legal assistance and legal education, tax law reform, women’s rights, and freedom of information. Further, as part of its USAID-funded needs assessment for the rule of law area, ARD/Checchi identified the potential for, and recommended funding of, a legal assistance/workers’ rights project. 
In August 1994, ARD/Checchi awarded a subcontract of about $465,000 to the Free Trade Union Institute for this project. Through this subcontract, USAID’s rule of law project has financially supported increasingly effective efforts to address workers’ rights issues through Russia’s court system. (See app. IV for more information on this project.) USAID, through the Eurasia Foundation and a nongovernmental development project, and NED have also provided grants to human rights and other nongovernmental organizations. These grants directly and indirectly contribute to the rule of law program by developing long-term relationships with Russian grassroots organizations that are working to increase transparency and accountability in government and influence the reform process by safeguarding human rights and the right to political dissent. U.S. assistance to strengthen civilian control of the Russian military has included the International Military Education and Training (IMET) program and a USAID grant to the Atlantic Council. Neither program has had much impact, primarily because, given a lack of interest by the Russian government, they have not reached significant numbers of Russian decisionmakers. U.S. embassy officials told us that the Russian military, rather than civilians, has retained firm control of its sphere of operations. One official said that the Russian Parliament has limited detailed knowledge of the military budget and has to rely on the intelligence services to obtain information about military activities. Similarly, uniformed officials are predominant at the Russian Ministry of Defense. The U.S. embassy officials said that political circumstances in Russia make the implementation of a U.S. civil-military program in Russia very difficult. Some quarters of the government are generally reluctant to accept Western assistance and suspect that civil-military assistance is designed to further weaken Russia militarily. 
Additionally, the need for deep cuts in defense spending renders the process of greater civilian control in the Ministry of Defense more complex, as significant hiring of civilian employees is unlikely, especially in light of large numbers of unemployed military personnel. In June 1992, the Department of Defense (DOD) began implementing an IMET program in Russia, a program jointly managed by the State Department and DOD. The IMET program is a worldwide grant training program that, among other objectives, seeks to promote military rapport between the United States and foreign countries and promote better understanding of the United States, including its people, political system, and institutions. In Russia, the program aims to foster a stable, cooperative relationship between U.S. and Russian armed forces and provide expertise to guide the military’s transition under a democratically elected government. Under the Expanded-IMET component, the program also seeks to promote civilian control of the military and democratic orientation of the military along Western lines. Funding for the IMET program in Russia has grown from $153,000 in fiscal year 1992 to $471,000 in fiscal year 1994. According to a DOD official, about one-third of the total for these years was spent on Expanded IMET courses. From 1992 through 1994, according to information provided by the U.S. embassy, the IMET program for Russia brought 18 mid- and senior-level military officers from the Ministry of Defense and 19 civilian officials, primarily from the Ministry of Foreign Affairs, to the United States for education, training, and observation tours. The military officers generally attended mid- and senior-level military colleges or participated in observation tours, and all but one civilian official attended defense resource management courses in Monterey, California. According to U.S. 
embassy and DOD officials, Russia’s Ministry of Defense has not fully used the IMET program since 1993, sending few military officers to the United States for training in 1994. The ministry generally will not allow any Russian officer to study at a given location alone, which limits Russia’s participation at U.S. military colleges. According to a DOD official, the Secretary of Defense recently encouraged the Russian Minister of Defense to increase Russia’s military participation in the IMET program, but the Russian government has not responded to this encouragement. According to an IMET program document, the Ministry of Foreign Affairs has shown much greater support for the IMET program. However, according to a U.S. embassy official who manages the program, the ministry thus far appears to have nominated civilian candidates who are chiefly mid-level bureaucrats and not likely to advance to positions of authority. An embassy official told us that it is too early to determine whether the IMET program is successful and that the embassy views the program as a long-term effort that may not yield results for 10 to 20 years. In commenting on this report, the State Department stated that there has not been enough time to track the careers of civilians who participated in the expanded IMET program, and DOD also emphasized that the IMET program is a long-term effort. DOD said that the IMET program has not had sufficient time to make an impact. We agree with DOD that the IMET program is a long-term effort; however, we assessed the progress that had been made in identifying and selecting promising officers who are likely to rise to positions of prominence. We found that the major factor inhibiting this process was the unwillingness of the Russian government to fully use the IMET program. The Atlantic Council received a 2-year $626,500 grant from USAID in 1992 for a civil-military relations project. 
The project’s goal was to encourage the integration of the Russian military establishment into society, opening it up to greater supervision from, and closer working relationships with, democratically elected civilian leadership of the executive and legislative branches and with the press and public at large. The council intended to conduct a series of training seminars in both Russia and the United States. According to a USAID-funded evaluation, the program suffered delays from the outset and failed to fulfill its planned activities due to poor planning, lack of in-country staff to process potential participants, tight timelines, and an underestimation of Russian political sensitivities. During the first year of the grant, the council conducted a 2-day seminar in Russia on the U.S. defense budget process. In the second year, the council sponsored or cosponsored four seminars in Russia, including (1) a journalism seminar on covering defense issues in a democratic society, which was cosponsored with the Russian-American Press and Information Center and attended by journalists from Russia, Ukraine, and Belarus and (2) seminars on national security decision-making, civil-military relations, and the Partnership for Peace program for Russian government officials. The following are GAO’s comments on USAID’s letter dated December 7, 1995. 1. The agency’s suggested technical corrections have been incorporated in the report where appropriate. The following are GAO’s comments on USIA’s letter dated December 6, 1995. 1. The agency had not provided these figures as of February 14, 1996. 2. As stated in our draft report, we did not review the effectiveness of democracy-related USIA and USAID exchange and visitor programs due to the difficult and time-consuming task of locating individual program participants. 3. Our report presents the cost and results of this project. 4. USIA’s Bureau of Broadcasting was outside the scope of our review. 
The following are GAO’s comments on NED’s letter dated November 28, 1995. 1. The primary focus of the center’s projects was to promote privatization and market reform, two areas outside the scope of our review. 2. Technical corrections and wording changes offered by NED were incorporated in the report text where appropriate. The following are GAO’s comments on IRI’s letter dated November 22, 1995. 1. We visited seven cities in Russia, including two of IRI’s target cities (St. Petersburg and Novosibirsk). In Moscow and other cities, we met with political activists who had attended programs in other IRI target cities as well. 2. IRI examples have been incorporated into the report. However, while IRI’s efforts may have helped some candidates win in local elections, its project thus far has been unsuccessful at its primary objective of developing reformist political parties. In contesting the 1995 parliamentary elections, the reformist parties again failed to form either a national coalition or national party structures. 3. IRI examples have been incorporated into the report. At best, however, IRI has had mixed results in getting Russians to use its campaign techniques. 4. We have deleted from our report the discussion on IRI’s efforts to make its program sustainable. 5. During our discussion with CEC officials, including the Vice Chairman, they did not mention IRI’s observer report as making a significant contribution to improving Russia’s election law. 6. We have modified the report to reflect IRI’s interpretation that its program will have an impact at the margins. The following are GAO’s comments on NDI’s letter dated November 27, 1995. 1. The report has been modified based on updated USAID financial information. 2. The draft report stated that the $200,000 was for election monitoring and sending Russian party leaders to the United States for training. 3. NDI examples have been incorporated into the report. 
At best, however, NDI has had mixed results in getting Russians to use its campaign techniques. 4. Despite these coordination efforts, the evidence obtained during our review suggests that successful coalition building had not taken place at the national level. For example, due to personality conflicts, two of the largest proreform political parties, Yabloko and Russia’s Choice, had split into 11 different parties and movements by the December 1995 parliamentary elections. 5. The result of this training, when measured against the performance of democratic reformist parties during the 1995 parliamentary elections, must be considered a major disappointment. Reformist political parties formed neither a national coalition nor a national party structure. In addition, reformist parties apparently did not benefit from NDI’s training. For example, the Democratic Choice of Russia—the leading proreform party in the 1993 election and an NDI client—failed to reach the 5 percent threshold for gaining party representation in the State Duma, a threshold it had surpassed in the 1993 election. 6. Many organizations participating in the civic advocacy programs actually serve as a political base/organization for local politicians who are running for office, rather than as traditional civic organizations. Thus, we continue to view these programs as political party development. 7. While interest in political party training continues to exist, the effectiveness of such training in the current political environment is questionable. Many Russian political activists took the longer-term view that civic education would make a more important contribution to promoting democracy in Russia. 8. Although the outcome of elections should not be held as the sole indicator, it is one indicator to assess the impact of political party development assistance. Unless parties are successful at increasing political representation, they are unlikely to attract the necessary financial and public support to grow and prosper. 
9. In measuring party development, we believe it is appropriate to emphasize party performance over individual candidate performance. Moreover, in the December 1995 parliamentary elections, the performance of reformist parties at both the party and individual level was again disappointing. 10. We have modified the report to reflect NDI’s interpretation that its program will have an impact “on the margins.” 11. We have deleted from our report the discussion on NDI’s efforts to make its program sustainable. Louis H. Zanardi, Judith A. McCloskey, Patrick A. Dickriede, Todd M. Appel, Jose M. Pena, III
Pursuant to a congressional request, GAO reviewed U.S.-funded democracy programs of the Agency for International Development (AID), U.S. Information Agency (USIA), Department of State, and the Department of Defense, focusing on whether democracy programs in Russia were meeting their developmental goals and contributing to political reform from fiscal years 1990 through 1994. GAO found that: (1) U.S.-funded democracy projects have demonstrated support for and contributed to Russia's democracy movement; (2) organizations and institutions at the center of the democratic reform process have been identified and supported, as have thousands of Russian activists working at these organizations at the national, regional, and local levels; (3) those assisted include prodemocracy political activists and political parties, proreform trade unions, court systems, legal academies, officials throughout the government, and members of the media; (4) the democracy projects that GAO reviewed, however, had mixed results in meeting their stated developmental objectives; (5) Russian reformers and others saw U.S. democracy assistance as generally valuable, but in only three of the six areas GAO reviewed did projects contribute to significant changes in Russia's political, legal, or social system; (6) AID and USIA media projects largely met their objective of increasing the quality and self-sufficiency of nongovernment or independent media organizations, although the weak economy continues to threaten the sustainability of an independent media; (7) U.S. 
efforts to help develop a democratic trade union movement and improve Russia's electoral system also contributed to systemic changes, although more needs to be done; (8) however, projects in the areas of political party development, rule of law, and civil-military relations have had limited impact; (9) GAO's analysis indicated that the most important factors determining project impact were Russian economic and political conditions; (10) project implementation problems contributed to the limited results achieved from the rule of law project; and (11) State and AID officials acknowledged that democratic reforms in Russia may take longer to achieve than they initially anticipated.
Federal regulations set requirements for a small business to qualify as a service-disabled veteran-owned small business (SDVOSB). SDVOSB eligibility regulations mandate that a firm must be a small business and at least 51 percent owned by one or more service-disabled veterans who control the management and daily business operations of the firm. Federal statutes and the Federal Acquisition Regulation (FAR) require all prospective contractors to update the Online Representations and Certifications Application (ORCA) to state whether their firm qualifies as an SDVOSB. Additionally, the SDVOSB, as a contractor, is required to register in the Central Contractor Registration (CCR). Contracting officials are required to check CCR, which includes information such as a firm’s status as an SDVOSB, prior to awarding most federal contracts, including an SDVOSB set-aside or sole-source contract. Once an SDVOSB receives a contract, SDVOSB regulations also place restrictions on the amount of work that can be subcontracted. Once VA’s Center for Veterans Enterprise (CVE) verifies a business, it sends an approval letter to the firm. Under regulations first promulgated in 2008, firms retained their eligibility status for 1 year from the date of the letter. However, on June 27, 2012, VA issued updated regulations extending the eligibility period to 2 years before reverification is required. Firms found to have misrepresented their SDVOSB status are required by law to be debarred from contracting with VA for a reasonable period of time, as determined by VA. Additionally, VA regulations state that if a firm or owner is currently debarred or suspended, or is delinquent or in default on significant financial obligations owed to the federal government, then the firm or owner is ineligible for VA’s VetBiz verification program. Federal law has established government-wide goals for specific types of small businesses to receive a percentage of the total value of all prime-contract and subcontract awards for each fiscal year. The statutorily mandated goal for SDVOSB participation is not less than 3 percent of all federal contract dollars awarded each fiscal year. 
SBA stated in its most recent report that, in fiscal year 2010, $10.8 billion in small-business obligations were awarded to firms that self-certified in CCR as SDVOSBs. DOD SDVOSB contracts accounted for $5.3 billion, or 49 percent, of government-wide SDVOSB contracts during fiscal year 2010, and VA SDVOSB contracts accounted for $3.2 billion, or 30 percent, during the same period. Figure 1 summarizes the federal contracts awarded in fiscal year 2010 by federal agencies. Since 2009, GAO has issued nine reports or testimonies on the SDVOSB program, focusing on its vulnerability to fraud and abuse and on agencies’ actions to prevent contracts from going to firms that misrepresent themselves as SDVOSBs. When discussing the SDVOSB program, we have shown that a well-designed fraud-prevention system should consist of three crucial elements: (1) up-front preventive controls, (2) detection and monitoring, and (3) investigations and prosecutions. Figure 2 below outlines the key aspects of an effective fraud-prevention framework. The most effective and most efficient part of a fraud-prevention framework involves the institution of rigorous controls at the beginning of the process. At a minimum, preventive controls for the SDVOSB program should be designed to verify that a firm seeking SDVOSB status is eligible for the program. Even with effective prevention controls, there is residual risk that firms that appeared to meet SDVOSB program requirements initially will violate program rules once they obtain contracts. This fact makes effective monitoring and detection controls essential in a robust fraud-prevention framework. Detection and monitoring efforts include activities such as periodic reviews of suspicious firms and evaluations of firms to provide reasonable assurance that they continue to meet program requirements.
Finally, fraud-prevention controls are not fully effective unless identified fraud is aggressively prosecuted or companies are suspended, debarred, or otherwise held accountable, or both. VA has made numerous conflicting statements about its progress verifying firms listed in VetBiz under the more-thorough process the agency implemented in response to the 2010 Act. These statements indicate that VA has taken an inconsistent approach to prioritizing the verification of firms and has been unable to accurately track the status of its efforts. Specifically, at the close of our audit work, documentation provided by VA indicated that thousands of SDVOSBs listed as eligible in VetBiz received millions of dollars in SDVOSB sole-source and set-aside contract obligations even though they had not been verified under the more-thorough process implemented in response to the 2010 Act. At that time, VA told us it planned to remove all firms whose 1-year verification period had expired and that had not provided documentation for reverification under the 2010 Act process. Since then, on June 27, 2012, VA implemented an interim final rule that extends the eligibility of verified firms to 2 years, including firms for which the eligibility period had expired but that had not yet been reverified. Extending the eligibility period may allow VA to focus its efforts on more thoroughly verifying firms that were previously verified under VA’s less-stringent 2006 Act process. However, the extension also allows thousands of firms to continue to be eligible for contracts even though they have not undergone the more-thorough process. With regard to our previous work, VA has taken some positive action to enhance its fraud-prevention efforts by establishing processes in response to 6 of 13 recommendations we issued in October 2011. VA has also begun action on some remaining recommendations.
VA has provided a number of conflicting statements and explanations related to the status of its verification program, indicating that it is having difficulty tracking its inventory of firms and whether they were verified under the process implemented to carry out the 2010 Act. As we previously stated, the process VA implemented to review firms under the 2006 Act consisted of checking whether a firm’s owner was listed in VA’s database of service-disabled veterans and conducting searches on publicly available websites such as the EPLS, which lists firms that have been debarred from doing business with the federal government. In contrast, VA stated that it implemented a more-thorough verification process under the 2010 Act that included unannounced and announced site visits and a review and analysis of company documentation. Although the 2010 Act did not include a date by which VA must complete the verification of firms, within 60 days of the law’s enactment VA was required to notify all unverified firms listed in its VetBiz database about the need to apply for verification by submitting documents to establish veteran ownership and control. Firms were required to do so within 90 days of receipt of the notification in order to avoid removal of the firm from VetBiz. VA officials told us that the agency prioritized its verification under the process implemented in response to the 2010 Act by reviewing (1) new applications for firms that had previously only self-certified in VetBiz (i.e., firms that had not been reviewed under the processes VA created for the 2006 Act or 2010 Act); (2) new firms that had initially applied for verification after the 2010 Act, to include reprocessing any firms that were denied through the new requirements and subsequently requested reconsideration; and (3) applications for firms initially verified in VetBiz under the process VA chose to implement for the 2006 Act. 
However, our review of information provided by VA raises concerns about the status of this process and whether VA knows how many of its firms have actually been verified under the processes implemented in response to the 2010 Act. In one communication, VA stated that as of February 2011, VA’s 2006 Act verification process had been discontinued, and all new verifications would use the process implemented in response to the 2010 Act going forward. Because firms would need to reverify 1 year later, this meant that only firms verified under the 2010 Act process should have been in VetBiz as of February 2012. In November 2011, VA reported that it had removed all unverified firms from its database on September 4, 2011. Subsequently, while reviewing new cases involving firms that had received VA SDVOSB contracts, we found instances where firms were not verified under VA’s 2010 Act process, but rather were verified under its 2006 Act process. When we met with VA in February 2012 to discuss our new cases, officials confirmed that there were still firms in VetBiz that had not been through the processes implemented in response to the 2010 Act, but did not explain how many firms still had not gone through the new process. Then, on April 23, 2012, officials told us that they had recently removed thousands of firms from VetBiz because these firms had not supplied the supporting documentation that VA decided was required for verification under the process implemented in response to the 2010 Act; VA indicated that it planned to remove hundreds of additional firms for the same reason. VA has provided conflicting statements about whether these firms received the December 2010 request to supply documentation. Further, over the next month, VA officials provided us with at least seven differing accounts of the number of SDVOSBs verified under the processes implemented for the 2006 Act and 2010 Act, the number of SDVOSBs they planned to remove, and the timing of the removals. 
VA’s conflicting statements create uncertainty about the status of the agency’s efforts to verify firms under the process implemented for the 2010 Act. Without a clear inventory and methods designed to track the verification process firms have undergone, VA cannot provide reasonable assurance that all firms appearing in VetBiz have been verified as owned and controlled by a veteran or service-disabled veteran. In its agency comments, VA explained these inventory issues by noting (1) the lack of a comprehensive case-management system has created the need for aggregate workarounds and resulted in inconsistent aggregate reporting; (2) the limitations of its current case-management system make it difficult to track the inventory of firms; and (3) as the limitations of the case-management system increase over time, the potential of CVE to lose track of how many firms have been verified also increases. VA also noted that its verification priorities have evolved over time. As of the close of our audit work, the information provided by VA indicated that thousands of potentially ineligible firms remain listed in VetBiz because they have not been verified under the more-thorough process implemented for the 2010 Act. Our analysis shows that as of April 1, 2012, 3,717 of the 6,178 SDVOSBs (60 percent) listed as eligible in VetBiz had yet to be verified using the more-thorough verification process. Of these 3,717 firms listed as eligible on April 1, 2012, 134 received a total of $90 million in new VA SDVOSB sole-source or set-aside contract obligations during the 4-month period from November 30, 2011, to April 1, 2012. On May 14, 2012, VA told us that it removed 1,857 of these 3,717 SDVOSBs from April 2 to April 10, 2012, so that they are no longer eligible for VA SDVOSB sole-source and set-aside contracts.
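As a quick arithmetic check, the April 1, 2012 figures above are internally consistent. A minimal sketch (numbers taken from the text; the variable names are ours):

```python
# Figures reported as of April 1, 2012
total_listed = 6_178      # SDVOSBs listed as eligible in VetBiz
not_yet_verified = 3_717  # not yet verified under the 2010 Act process

# Share of listed firms lacking the more-thorough verification
print(f"{not_yet_verified / total_listed:.0%}")  # 60%

# 1,857 of these were removed in early April, leaving the remainder pending
removed_april = 1_857
print(not_yet_verified - removed_april)  # 1860
```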
According to VA, the remaining 1,860 firms that had not received a review under the 2010 Act process were projected to be removed in July 2012 unless the firms provided adequate documentation supporting their eligibility. VA also stated that these firms were identified as being in “reverification” and no such expired firm was eligible for an actual contract award until the reverification decision had been completed. Since then, on June 27, 2012, VA implemented an interim final rule that extends the eligibility of verified firms to 2 years. VA told us it interprets “verified” to include any firm that has been verified under either its 2006 or 2010 Act processes. Therefore, according to the interim rule, as long as a firm is verified under either process and is in its 2-year eligibility period, VA is only authorized to initiate a verification examination if it receives credible evidence calling into question a participant’s eligibility. Furthermore, VA considered firms whose prior 1-year eligibility period had recently expired, but that had not yet been through reverification, to be within the scope of the new rule, thus extending their eligibility another year. Extending the eligibility period may allow VA to focus its efforts on more thoroughly verifying firms that were previously verified under its less-stringent 2006 Act process. However, the extension also allows thousands of firms to continue to be eligible for contracts even though they have not undergone the more-thorough process. For example, according to information provided by VA in its comments, as of July 13, 2012, there are 6,079 SDVOSBs and VOSBs listed in VetBiz. Of these, 3,724 were verified under the more-thorough process implemented under the 2010 Act and 2,355 (over 38 percent) were verified under VA’s less-rigorous 2006 Act process.
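The July 13, 2012 composition figures can be cross-checked the same way (numbers from the text; variable names are ours):

```python
verified_2010_act = 3_724  # verified under the more-thorough 2010 Act process
verified_2006_act = 2_355  # verified under the less-rigorous 2006 Act process

total_listed = verified_2010_act + verified_2006_act
print(total_listed)                               # 6079
print(f"{verified_2006_act / total_listed:.1%}")  # 38.7%, i.e., over 38 percent
```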
As VA acknowledges in its agency comments, “the retention of firms verified prior to the 2010 Act increases the possibility awards will go to firms that will not be verified when the more rigorous process is applied.” Moreover, past audits show the risk of providing SDVOSB contracts to firms reviewed under VA’s 2006 Act process. For example, in 2011, VA’s own OIG issued a report that reviewed both SDVOSBs and VOSBs listed in VetBiz and found that 10 of 14 SDVOSBs and VOSBs verified under VA’s 2006 Act process and listed as eligible were in fact ineligible for these respective programs. The report identified several reasons why these firms were ineligible, including improper subcontracting practices, lack of control and ownership, and improper use of SDVOSB status, among others. Further, the report noted VA’s document-review process under the 2006 Act “in many cases was insufficient to establish control and ownership… in effect allowed businesses to self-certify as a veteran-owned or service-disabled veteran-owned small business with little supporting documentation.” The report goes on to state that VA’s failure to maintain “accurate and current” information in the VetBiz database also exacerbated problems in the verification process. VA’s OIG also used statistical sampling methods to project that (1) $500 million of VA SDVOSB and VOSB contracts were awarded annually to ineligible firms and (2) VA will award about $2.5 billion in SDVOSB and VOSB contracts to ineligible firms over the next 5 years if it does not strengthen its oversight and verification procedures. In October 2011, we issued 13 recommendations to VA related to vulnerabilities in the verification process implemented by VA after the 2010 Act; VA generally concurred with our recommendations. As of June 2012, VA has provided us with documentation demonstrating that it has established procedures in response to 6 of these recommendations.
Figure 3 shows the status of the recommendations; more specific information on each recommendation follows the figure. We have not assessed the effectiveness of any of the procedures that VA has established thus far, as this is beyond the scope of this report. VA has provided additional guidance and training to VA contracting personnel on the use of the VetBiz website. In December 2011, VA issued a guidance memo requiring VA contracting personnel to check VetBiz to ensure that a firm is verified both upon receipt of an offer and prior to award. In November 2011, VA also provided training to contracting personnel on the use of VetBiz. Providing guidance and training to current and new contracting personnel will help to ensure that these staff are aware of the need to check VetBiz prior to awarding a contract. VA has established formal procedures for VA staff to refer suspicious applications to the OIG and provided guidance on what types of cases to refer to the OIG. In April 2012, VA issued procedures for VA staff to use if they identify suspicious information or possible misrepresentations on an application for eligibility during their initial review process. These procedures contain step-by-step instructions for how to notify the OIG about suspicious applications. Specifically, CVE’s “risk team” makes a determination as to whether or not an applicant has intentionally misrepresented its status in an apparent attempt to defraud the government. If the information is credible, the applicant is referred to the VA OIG. If the VA OIG accepts the referral, it conducts preliminary inquiries to determine whether a full investigation into criminal activity is warranted. If the OIG declines the investigation, VA can refer the matter to VA’s Debarment Committee, which VA instituted in September 2010 specifically to debar firms that had violated SDVOSB regulations.
In addition to these procedures, from November 2011 through January 2012, VA provided three training sessions to VA staff on the types of red flags to note during application review. VA has explored the feasibility of validating applicants’ information with third parties. In 2012, VA met with Dun and Bradstreet to explore the feasibility of using its services to validate applicants’ information, such as names and titles of business owners. Validating applicants’ information with third parties may help enhance VA’s ability to assess the accuracy of self-reported information. VA has formalized a process for conducting unannounced site visits to firms identified as high-risk during the verification process. In June 2012, VA issued procedures for conducting unannounced site visits on a sample of 50 percent of high-risk firms identified during the verification process. Formalizing this process with a focus on high-risk firms may help provide reasonable assurance that only eligible firms gain access to the VetBiz database. VA has developed and implemented a process for unannounced site visits to verified companies to obtain greater effectiveness and consistency in the verification process. VA’s aforementioned June 2012 procedures also apply to verified companies. VA developed a process to select verified firms, on a weekly basis and based on a combination of random and risk-based factors, to receive an unannounced site visit. In addition, according to VA, it has started making these unannounced site visits. Conducting these site visits may help provide reasonable assurance to VA that the verification process is effective. VA has developed and implemented specific procedures and criteria for staff to make referrals to the Debarment Committee and VA OIG as a result of misrepresentations identified during initial verification and periodic reviews.
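Sampling 50 percent of high-risk firms for unannounced visits could be sketched as below. This is a hypothetical illustration of such sampling, not VA’s actual selection method, and `select_for_unannounced_visits` is a name we introduce for illustration.

```python
import random

def select_for_unannounced_visits(high_risk_firms, sample_rate=0.5, seed=None):
    """Pick a random sample of high-risk firms for unannounced site visits.

    Illustrative only: VA's actual weekly selection combines random and
    risk-based factors that are not described in detail in the report.
    """
    rng = random.Random(seed)
    k = round(len(high_risk_firms) * sample_rate)
    return rng.sample(high_risk_firms, k)

high_risk = [f"firm-{i:03d}" for i in range(10)]
print(len(select_for_unannounced_visits(high_risk, seed=2012)))  # 5
```

Seeding the generator is only for reproducibility in this example; a real selection process would not use a fixed seed.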
VA’s aforementioned April 2012 procedures also apply to false information or misrepresentations identified after VA’s initial review of the application, during the firm’s eligibility period. These procedures may increase VA’s success in pursuing firms that have misrepresented their eligibility for the program. VA has not provided regular fraud-awareness training to CVE and VA contracting personnel. One of the most significant challenges to an effective verification program is to have sufficient human capital with proper training and experience. Although VA has not established regular fraud-awareness training, it has made progress in this area. For example, VA told us that its OIG recently provided training on procurement fraud and that its General Counsel provides weekly training on examination procedures and policies in order to educate staff on fraud prevention. In addition, VA said that it has plans to require all CVE staff to attend a fraud examiners course; several CVE staff were already scheduled to attend fraud training in July 2012. Having sufficient human capital with the proper training and experience would enhance the effectiveness of the verification program. VA has not developed and implemented procedures for conducting unannounced site visits to contract performance locations and interviews with contracting officials to better assess whether verified companies comply with program rules after verification. VA has started conducting announced site visits as part of its subcontracting compliance review program. This program is used to determine if a firm is performing in accordance with percentage of work performance requirements and other subcontracting commitments. However, VA has not developed and implemented procedures for conducting unannounced site visits to contract performance locations and interviews with contracting officials. 
The unannounced site visits and interviews with contracting officials would allow VA to better assess whether verified firms comply with program rules after verification. VA has not developed procedures for risk-based periodic reviews of verified firms receiving contracts to assess compliance with North American Industry Classification System (NAICS) size standards and SDVOSB program rules. In order to be eligible for SDVOSB set-aside and sole-source contracts, a firm must qualify as a small business under NAICS size standards. In draft guidelines, VA included supplemental information for VA staff to review firms’ NAICS code size standards, but these guidelines have yet to be finalized. Moreover, the draft guidelines do not include procedures for periodic reviews of verified firms’ compliance with these standards. Such procedures would help improve continued compliance with SDVOSB program rules. VA has not developed and implemented specific processes and criteria for the Debarment Committee on compliance with the requirement in the 2006 Act to debar, for a reasonable period, firms and related parties that misrepresent their SDVOSB status. According to VA, its Debarment Committee relies on procedures outlined in the FAR and the VA Acquisition Regulations to determine the length of debarments. VA has not developed specific guidelines outlining the Debarment Committee’s decision process to debar firms that misrepresent their SDVOSB status. VA should provide the Debarment Committee with guidelines to aid its decision-making process in determining what constitutes a “misrepresentation” deserving of debarment, as that term is used in the 2006 Act. VA has not developed procedures on removing SDVOSB contracts from ineligible firms. According to the VA Acquisition Regulations, the Deputy Senior Procurement Executive has the authority to determine whether VA should terminate a contract with a debarred firm.
However, VA has not developed procedures to remove SDVOSB contracts from ineligible firms. According to VA, it is in the process of developing a policy on removing SDVOSB contracts from ineligible firms as determined by status protests. In addition, VA is in the process of providing guidance to the acquisition workforce on removing SDVOSB contracts from ineligible firms. Until VA develops procedures on removing SDVOSB contracts from ineligible firms, the SDVOSB program is at risk for ineligible firms to abuse the program and retain contracts obtained through fraud and abuse. VA has not formalized procedures for advertising debarments and prosecutions, though the Debarment Committee, the OIG, and CVE have listed these actions on their websites. No action has been taken to improve government-wide SDVOSB fraud-prevention controls, as the program remains a self-certification program. Because federal law does not require it, SBA does not verify firms’ eligibility status, nor does it require that firms submit supporting documentation. According to SBA, it is only authorized to perform eligibility reviews in a protest situation, including those cases where SBA itself has reason to believe that a firm misrepresented its SDVOSB status. However, without basic checks on firms’ eligibility claims, SBA cannot provide reasonable assurance that legitimate SDVOSBs are receiving government contracts. In fact, five of our new case-study firms received SDVOSB set-aside and sole-source contract obligations totaling approximately $190 million, of which $75 million were new SDVOSB set-aside and sole-source contract obligations, from October 1, 2009, to December 31, 2011, despite evidence indicating they are ineligible for the program. With regard to our original 10 case-study firms reported in October 2009, some are under investigation by SBA OIG and punitive actions have been taken against others.
To address vulnerabilities in the government-wide program, we previously suggested that Congress consider providing VA with the authority necessary to expand its SDVOSB eligibility verification process government-wide. Such an action is supported by the fact that VA maintains the database identifying which individuals are service-disabled veterans and is consistent with VA’s mission of service to veterans. However, such action should not be undertaken until VA demonstrates that its verification process is successful in reducing the SDVOSB program’s vulnerability to fraud and abuse. In our previous work, we found that the SDVOSB program did not have effective government-wide fraud-prevention controls in place and was vulnerable to fraud and abuse. Because federal law does not require it, SBA and agencies awarding contracts, other than VA, do not have a process in place to validate a firm’s eligibility for the program, and rely on firms self-certifying as service-disabled veteran-owned businesses in CCR. We found the only process in place to detect fraud in the government-wide SDVOSB program involved a formal bid-protest process at SBA, whereby interested parties to a contract award could protest another firm’s SDVOSB eligibility or small-business size. However, we reported that this self-policing process did not prevent ineligible firms from receiving SDVOSB contracts. SBA officials have told us that they have limited responsibility over the SDVOSB program, and that the agency’s only statutory obligation is to report on other agencies’ success in meeting SDVOSB contracting goals. Outside of VA, there was no verification in place for SDVOSB contracting. Our new case studies highlight instances of the fraud and abuse that resulted from the lack of verification of firms’ SDVOSB status.
In fact, five of our new case-study firms received SDVOSB set-aside and sole-source contract obligations, totaling approximately $190 million from October 1, 2009, to December 31, 2011, despite evidence indicating they are ineligible for the program. Of this $190 million, $75 million were new SDVOSB set-aside and sole-source contract obligations. In four of the cases we examined, we were able to substantiate informants’ allegations of ineligibility as follows: Non-SDVOSB joint venture. An SDVOSB entered a joint venture with a non-SDVOSB firm and received about $16 million in new government-wide SDVOSB set-aside contract obligations. Such joint ventures are eligible if the SDVOSB firm manages the joint venture and the contract work. However, the owner, a service-disabled veteran, admitted to our investigators that his SDVOSB firm did not manage the joint venture. Therefore, the joint venture is ineligible. This firm is currently listed as an SDVOSB in CCR, which allows the firm to compete for government-wide SDVOSB contracts. VA-denied firm. Though VA denied a firm SDVOSB status in 2010 because the firm was not controlled by a service-disabled veteran owner, the firm continued to self-certify in CCR. A VA site visit found the service-disabled veteran worked mostly at another company, and the non-service-disabled veteran vice president controlled the firm. In 2011, when the firm applied for VA verification again, the size of the firm was also questioned as it shared ownership or management with at least four different entities, including companies owned by a non-service-disabled veteran minority owner. The company withdrew its application to be a VA-verified SDVOSB. In total, the firm received about $21 million in SDVOSB set-aside and sole-source contracts from DOD, the General Services Administration (GSA), the Department of the Interior (DOI), the U.S.
Department of Agriculture, and VA, $16 million of which were new SDVOSB set-aside and sole-source contract obligations. After VA denied the firm, the firm continued to self-certify as an SDVOSB in CCR, and GSA and DOI awarded the firm about $860,000 in new SDVOSB set-aside contract obligations. This firm is currently listed as an SDVOSB in CCR, which allows the firm to compete for government-wide SDVOSB contracts. Multiple firms not veteran-controlled. A service-disabled veteran and two non-service-disabled veteran co-owners owned two firms and a joint venture at the same location. VA found one of the firms ineligible. The operating agreements of two of the firms allowed the two minority owners to control the firms, rather than the service-disabled veteran. Additionally, the joint venture, created by one of the firms, was also ineligible because the service-disabled veteran’s firm did not manage the joint venture and the contract work. Therefore, none of the three firms were eligible for the SDVOSB program. The three firms received over $91 million in SDVOSB set-aside and sole-source contract obligations, about $18 million of which were new SDVOSB set-aside and sole-source contract obligations, from VA and the Department of Health and Human Services. The three firms have been removed from VA VetBiz. However, these firms are currently listed as SDVOSBs in CCR, which allows the firms to compete for government-wide SDVOSB contracts. Not service-disabled veteran-controlled. This firm is ineligible for the SDVOSB program because the veteran does not control the daily operations. The service-disabled veteran was not the Chief Executive Officer, and the firm’s operating agreement did not give the service-disabled veteran the exclusivity to make decisions for the company.
In addition, the service-disabled veteran owner lived 500 miles away from the firm, received only $12,000 compared to the non-service-disabled veteran minority owner’s $88,000 salary, and failed to meet or communicate with subcontractors. This firm received about $37 million in SDVOSB set-aside contract obligations, $446,000 of which were new SDVOSB set-aside contract obligations, from DOD and DOI. During the course of our work, SBA and VA found this company ineligible for the SDVOSB program. This firm no longer self-certifies as an SDVOSB in CCR. On May 25, 2012, SBA debarred the non-service-disabled veteran and the firm, making them ineligible for further contracts with the federal government. We were unable to substantiate allegations in a fifth case, but found evidence that the firm in question may be ineligible for the SDVOSB program because the service-disabled veteran owner may not spend sufficient time at the SDVOSB. The service-disabled veteran owner worked as an attorney at a legal services organization Monday through Friday, about 40 hours a week, which could prevent the veteran from managing the day-to-day operations of the SDVOSB. This firm received about $25 million in new SDVOSB set-aside and sole-source contract obligations from VA and the Department of Transportation. This firm is now listed as verified in VetBiz and is currently listed as an SDVOSB in CCR, which allows the firm to compete for government-wide SDVOSB contracts. The DOD OIG likewise reported that DOD, which awarded about half of government-wide SDVOSB contracts in 2010, did not require adequate verification of contractor status before awarding contracts. After its review of DOD contracts awarded from October 2009 to July 2010, the OIG reported that $1.9 million in SDVOSB contracts went to firms that were not registered in CCR as SDVOSBs and $340.3 million went to contractors that potentially misstated their SDVOSB status.
The OIG also found that DOD awarded 12 SDVOSB set-aside and sole-source contracts for a total of $11.5 million to six firms that VA rejected. The OIG went on to recommend that DOD create an SDVOSB verification program, but the agency disagreed, citing an absence of evidence indicating that such a program would produce a net benefit to eligible SDVOSBs, and that Congress had not provided DOD with either the resources or authority to establish such a system. To address the vulnerabilities within the government-wide program caused by reliance on a self-certification process, we suggested in 2009 that Congress consider providing VA with the authority and resources necessary to expand its SDVOSB eligibility verification process to all contractors seeking to bid on SDVOSB contracts government-wide. Such an action is supported by the fact that VA maintains the database identifying which individuals are service-disabled veterans and is consistent with VA’s mission of service to veterans. In 2011, legislation was also introduced and passed in the Senate requiring all agencies to use VA’s VetBiz for SDVOSB contract awards; this legislation has not become law. However, as shown by our current work, VA’s program remains vulnerable to fraud and abuse because the agency has been unable to accurately track the status of its efforts and because potentially ineligible firms remain listed in VetBiz. Consequently, VA’s ability to show that its process is successful in reducing the SDVOSB program’s vulnerability to fraud and abuse remains an important factor in any consideration about the potential expansion of VA’s eligibility verification process government-wide. GAO has ongoing work that will, in part, examine some of the key issues that need to be addressed if VA’s verification program were to be implemented government-wide. In 2009, we found that ineligible firms in 10 cases received $100 million in SDVOSB contracts and $300 million in other federal contracts.
We referred all 10 of these cases to the appropriate agency OIGs. As of April 2012, while none of the firms are currently suspended or debarred by the agencies that received our referrals, some actions have been taken: The SBA OIG is proceeding with six open investigations. In addition, the SBA OIG has joined forces with other agency OIGs to pursue several cases. Specific details cannot be provided until the cases have been fully adjudicated. One individual related to a case is being prosecuted by the U.S. Attorney for wire fraud and fraud against the United States involving a contract valued at $1 million or more related to its misrepresentation as an SDVOSB. In addition, this individual and a related firm were suspended by the Department of Transportation for procurement fraud. One individual related to a case-study firm is being charged by the U.S. Attorney with conspiracy to commit wire fraud and forfeiture of his assets up to $400,000. This individual allegedly conspired to defraud the SBA and other government contractors by falsely representing his business as a service-disabled veteran-owned and operated business. Another case-study firm pled guilty to wire fraud in relation to fraudulently receiving Historically Underutilized Business Zone (HUBZone) federal contracts. Our previous finding that the firm was ineligible for the SDVOSB program, in conjunction with the firm’s admitting defrauding the HUBZone program, raises the concern of ineligible firms applying for multiple procurement programs. Actions taken against firms that violate the SDVOSB program requirements should help protect the government’s interest and help discourage ineligible firms from abusing the SDVOSB program. As previously discussed, placing more emphasis on debarments and investigations could further help the government deter firms from attempting to fraudulently gain access to the SDVOSB program. 
The SDVOSB program has provided billions of dollars in contracting opportunities to deserving service-disabled veterans. However, our body of work, along with work by the DOD OIG and VA OIG, has found that the program is vulnerable to fraud and abuse, which has allowed millions of dollars to be awarded to ineligible firms. The government-wide program remains particularly vulnerable since it relies on an honor-system-like process whereby firms self-certify their eligibility. VA has the only program within the government dedicated to verifying SDVOSB firms’ eligibility; VA also has responsibility for maintaining a database of service-disabled veterans and a listing of firms that are eligible for the SDVOSB program. Given VA’s mission of service to veterans, we previously suggested that Congress consider expanding VA’s program government-wide to employ more effective fraud-prevention controls over the billions of dollars awarded to SDVOSBs outside of VA. However, such action should not be undertaken until VA demonstrates that its verification process is successful in reducing the SDVOSB program’s vulnerability to fraud and abuse. Furthermore, while the results of this most-recent assessment show that VA has made some progress in improving its verification process in response to the 2010 Act, it has made conflicting statements regarding the verification of firms and has been unable to accurately track the status of its efforts. These problems have resulted in thousands of potentially ineligible SDVOSBs receiving millions of dollars in sole-source and set-aside contract obligations. By better managing its inventory of firms, maintaining the accuracy of firms’ status in VetBiz, and applying the 2010 Act verification process to all firms, VA can be more confident that the billions of dollars meant to provide VA contracting opportunities to our nation’s service-disabled veteran entrepreneurs make it to the intended beneficiaries. 
To minimize potential fraud and abuse in VA’s SDVOSB program and provide reasonable assurance that legitimate SDVOSB firms obtain the benefits of this program, we recommend that the Secretary of Veterans Affairs ensure that all firms within VetBiz have undergone its 2010 Act verification process. Specifically, this should include consideration of the following three actions: (1) inventory firms listed in VetBiz to establish a reliable beginning point for the verification status of each firm; (2) establish procedures to maintain the accuracy of the status of all firms listed in VetBiz, including which verification process they have undergone; and (3) expeditiously verify all current VetBiz firms and new applicants under the 2010 Act verification procedures. We provided a draft of our report to VA and SBA for comment. In its written comments, reproduced in appendix I, VA stated that it concurred with our first two recommendations. It concurred “in principle” with the third, to verify all current VetBiz firms and new applicants under the processes implemented under the 2010 Act. With respect to this recommendation, VA noted that it implemented an interim rule on June 27, 2012, that extends the eligibility of verified firms to 2 years. VA told us it interprets “verified” to include any firm that has been verified under either its 2006 or 2010 Act processes. Therefore, according to the interim final rule, as long as a firm is verified under either process and is in its 2-year eligibility period, VA is only authorized to initiate a verification examination if it receives credible evidence calling into question a participant’s eligibility. Extending the eligibility period may allow VA to focus its efforts on more thoroughly verifying firms that were previously verified under its less-stringent 2006 Act process. However, the extension also allows thousands of firms to continue to be eligible for contracts even though they have not undergone the more thorough process. 
We acknowledge that VA has latitude under the law to modify its own regulations as necessary. However, the interim final rule in effect removes a backlog of firms and appears to be a self-created impediment delaying verification under the 2010 Act process. We remain convinced that the verification process utilized by VA prior to the 2010 Act process does not provide reasonable assurance that only eligible SDVOSBs participate in the program. Given this ongoing vulnerability to fraud and abuse, we continue to believe that VA should expeditiously verify current VetBiz firms and new applicants under the 2010 Act verification process. Despite these concurrences, VA commented that our report was misleading and inaccurate with respect to (1) our characterizations of a 2011 VA OIG report, (2) conflicting statements made by VA, and (3) VA’s implementation of our previously issued recommendations. We disagree. First, VA stated that our use of the VA OIG’s 2011 report was misleading because the report examined a period when the VetBiz database included self-certified firms in addition to firms verified under the processes implemented under the 2006 Act. VA also claims the VA OIG report contains excessive extrapolations because it examined eligibility requirements beyond ownership and control. Specifically, VA notes that 14 of the 42 firms reviewed for the OIG report had been through the verification process VA used in response to the 2006 Act and claims that only 3 were determined to be ineligible based on ownership and control. VA’s statement is incomplete and misleading. According to the OIG, an additional 7 were determined to be ineligible for reasons that could be identified during a robust verification process. As a result, the OIG found 10 of 14 firms verified under VA’s 2006 Act process to be ineligible—an eligibility failure rate comparable to the overall eligibility failure rate cited in the report. 
With regard to the aforementioned 7 firms, the OIG determined they were ineligible because they were engaged in improper subcontracting practices, such as “pass-through” contracts. Pass-through contracts occur when businesses or joint ventures/partnerships list veterans or service-disabled veterans as majority owners of the business but, contrary to program requirements, the non-veteran-owned business either performed or managed the majority of the work and received a majority of the contracts’ funds. Given that the firms being reviewed by the OIG already had existing contracts in place, the OIG was able to identify the pass-through contracts by conducting site visits and reviewing business documentation, the same steps that VA claims are taken during the verification process it implemented in response to the 2010 Act. While we acknowledge that it is difficult to identify pass-through contracts for applicants to the program who do not have any preexisting contracts, VA should be conducting such a review for those firms that have contracts in place. As we have noted in past reports, VA’s fraud prevention controls should include detection and monitoring measures to assure that firms are completing the work required of an SDVOSB contract. Second, VA disagrees that it provided numerous conflicting statements to us regarding its verification efforts, stating that the verification process has evolved and that VA faces technical limitations related to its case-management system. While we acknowledge these concerns, it is important to note that VA did not provide us with any explanation as to its evolving priorities during the course of our audit and instead repeatedly sent us contradictory information without any clarification. Moreover, not all of the conflicting statements VA made can be attributed to inadequacies in its case-management system or to evolving priorities. 
Specifically, the information we received during the course of our audit work changed so significantly over such a short period of time that the evidence GAO collected does not support VA’s assertion that it “knows how many firms have been verified” and can “track individual firms,” as VA claims in its agency comment letter. Examples of the conflicting statements we received include the following: Removal of firms: On April 23, 2012, VA told us that about 900 SDVOSBs and VOSBs listed in VetBiz were targeted for removal because they had not been verified under the 2010 Act process. By April 27, 2012, this number increased to approximately 3,500 SDVOSBs and VOSBs. On May 2, 2012, we received two more differing accounts of SDVOSBs and VOSBs targeted for removal (2,660 firms and 2,646 firms) in the same email. Implementation of the 2010 Act process: On February 16, 2012, VA told us that it continued to verify firms under the process implemented under the 2006 Act between January and May 2011. Then, on April 23, 2012, VA told us that it stopped verifying firms under its 2006 Act process in February 2011 and began verification under its 2010 Act process at the same time. Next, on May 12, 2012, VA told us that it stopped verifying firms under the 2006 Act process in January 2011 and began verifying under the 2010 Act at the end of December 2010. In the same communication, VA told us that no firm was approved under its 2006 Act process after February 2011. But on May 21, 2012, VA sent us a list of firms and verification dates showing that multiple firms were last verified under its 2006 Act process past February 2011, with at least two firms verified under its 2006 Act process as late as May 2011. Finally, VA stated it believed all previous GAO recommendations issued in October 2011 should be closed. For GAO to close a recommendation, it must be implemented or actions must have been taken that essentially meet the recommendation’s intent. 
Further, the responsible agency must provide evidence, with sufficient supporting documentation, that the actions are being implemented adequately. By the end of our audit work, we were able to close 6 of the 13 recommendations that we issued to VA in October 2011 based on documentation VA provided demonstrating that the agency had taken specific actions in response to our recommendations. Although VA indicated that it would like to close out the remaining recommendations, it either did not demonstrate that it had taken an action to implement a recommendation or did not provide the supporting documentation needed to show that the recommendation was in fact implemented. We had several discussions with VA staff about our requirements for closing recommendations, the last occurring on June 22, 2012. Moreover, we noted in our report any progress VA has made with respect to each recommendation; the information VA provided in this letter had previously been acknowledged in our report. For the 7 recommendations that remain open after the issuance of this report, we will continue to seek from VA additional documentation necessary to demonstrate that implementation has occurred. At such time, we will close each recommendation, as appropriate. In addition, VA provided technical comments, which we addressed as appropriate. We provide annotated responses to VA’s more detailed comments in appendix I. In written comments received through e-mail, SBA stated that it is committed to eliminating fraud, waste, and abuse in all of its programs including the government-wide SDVOSB program. In addition, SBA stated that it maintains a “robust and thorough” protest and appeal process. However, as noted in our report, SBA’s bid-protest process alone—that is, without upfront eligibility verification and other related measures—cannot provide reasonable assurance that only legitimate firms are awarded SDVOSB contracts. 
In addition, five new case studies developed for this report highlight instances of fraud and abuse. SBA disagreed with the draft report’s portrayal of actions taken against the firms that were the subject of the 10 case studies developed as part of our October 2009 report. We revised our report where appropriate. SBA also stated that it had taken actions against firms in addition to those cited in our case studies, but did not provide specific examples. Finally, SBA stated that it was implementing training to help its staff identify fraud and abuse and working to improve its referral process and collaboration with other agencies. Such efforts could help reduce the SDVOSB program’s vulnerability. However, these efforts would affect only SBA’s investigation and prosecution efforts, and not prevention, detection, and monitoring. If the government-wide program included measures to prevent, detect, and monitor fraud in the SDVOSB program, SBA could be more confident that the billions of dollars meant to provide contracting opportunities to our service-disabled veteran entrepreneurs make it to the intended beneficiaries. We are sending copies of this report to interested congressional committees, the Administrator of SBA, the Secretary of Veterans Affairs, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you have any questions concerning this report, please contact Richard J. Hillman at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 1. We clarified the report to indicate what the Department of Veterans Affairs (VA) Office of Inspector General (OIG) reported on its findings in 2011 and also to indicate that the report includes all firms in VetBiz, not just those verified under the Veterans Benefits, Health Care, and Information Technology Act of 2006 (2006 Act) process. 
The remainder of VA’s comments related to the OIG report are inaccurate, based on our review of the report and discussions with VA’s OIG staff. See the Agency Comments and Our Evaluation section of this report for more detail. 2. In the final report, we deleted the draft report’s discussion of information about the Center for Veterans Enterprise (CVE) being responsible for helping veterans who are interested in forming or expanding their own small businesses. 3. Our report’s characterization of the Veterans Small Business Verification Act (2010 Act), part of the Veterans’ Benefits Act of 2010, is correct and we did not make associated changes to the report. While VA’s recommended change points out that VA removed firms that self-represented or had expired eligibility periods, these categories of firms are included by the “all unverified businesses” language in the existing report language. 4. We deleted the sentence stating that SDVOSBs are required to receive a portion of government-wide contractual dollars annually. 5. We have revised our draft report to note that according to VA, (1) the lack of a comprehensive case-management system has created the need for aggregate workarounds and resulted in inconsistent aggregate reporting, (2) the limitations of the case-management system make it difficult to track the inventory of firms, and (3) as the limitations of the case-management system increase over time, the potential of CVE to lose track of how many firms have been verified also increases. We also acknowledge VA’s assertion that its verification priorities have evolved over time. However, not all of the conflicting statements VA made can be attributed to inadequacies in its case-management system or to evolving priorities. One of the many examples relates to the December 2010 request for documentation mentioned in the 2010 Act. 
Specifically, on April 23, 2012, VA told us that between late March 2012 and early April 2012 it had removed over 3,000 SDVOSBs and VOSBs because these firms had failed to provide requested business documentation. We asked whether the firms removed in April 2012 had been sent this request. In response, VA told us that the firms removed in April 2012 did not receive the December 2010 document. Then on May 12, 2012, VA told us the firms had in fact been sent the December 2010 letter. Later, on June 20, 2012, VA told us that it did not send the December 2010 letter to all firms listed in VetBiz at the time to avoid a flood of applications. In its agency comments, VA states that the 2010 Act did not require it to send all firms listed in VetBiz in December 2010 a request for documentation if the firms had been verified under the 2006 Act and this verification had not yet expired. 6. We revised the text in our draft report to more-clearly reflect that thousands of SDVOSBs listed as eligible in VetBiz received millions of dollars in contract obligations even though they had not been verified under the more-thorough process that VA implemented in response to the 2010 Act. VA’s recommended changes also suggest that firms that were verified under the 2006 Act process could not be immediately reverified under the more-thorough 2010 Act process because, in addition to resource-allocation priorities, VA was limited by the requirements of 38 C.F.R. § 74.15(c). However, we note that VA has latitude under the law to modify its own regulations as necessary to ensure that only valid SDVOSBs are included in VetBiz. Furthermore, VA’s recent decision to amend 38 C.F.R. § 74.15 and extend the VetBiz eligibility term from 1 year to 2 years appears to be a self-created impediment to ensuring all firms expeditiously undergo the more-thorough 2010 Act process. 7. 
We revised the text in our report to reflect that the 2010 Act required VA to notify all unverified firms about the need to apply for verification. 8. The language VA objects to concerning VA’s prioritization of verifications under the 2010 Act process is taken directly from documentation provided by VA during the course of our audit. Accordingly, we made no changes to the report. 9. The language VA objects to concerning removal of firms is taken directly from oral and written statements made by VA during the course of our audit. Accordingly, we made no changes to the report. 10. The firms mentioned in this footnote are related to one of the new cases we reviewed as a result of allegations we received from confidential informants. These firms were not verified under the process implemented under the 2010 Act and we determined that they were in fact ineligible for the SDVOSB program because the firms’ operating agreements allowed the two minority owners to control the firms, rather than the service-disabled veteran. These firms received approximately $16 million in VA SDVOSB set-aside and sole-source contract obligations from October 2010 to December 2011. Accordingly, we made no changes to the report. 11. We revised our report to make clear that we were referring to verification using the processes implemented under the 2010 Act. 12. We received conflicting statements from VA as to which firms received the December 2010 notification letter and have revised the text to clearly reflect this fact. 13. We revised the text in our report to more-clearly reflect that thousands of potentially ineligible firms remain listed in VetBiz because they have not been verified under the more-thorough process implemented for the 2010 Act. While these firms have been verified under the 2006 Act process, past audits show the potential risk of providing SDVOSB contracts to firms reviewed under this process. 
VA’s recommended change does not acknowledge this risk and is therefore incomplete. Moreover, our statements that were related to the number of firms not verified under the requirements of the 2010 Act, the dollar amounts those firms received, and the number of firms VA planned to remove were all supported by evidence and were accurate at the close of our audit work. We have clarified the report to indicate that fact and included information on the requirements of the interim final rule VA implemented on June 27, 2012. Specifically, in our final report we have noted that the rule extends a firm’s eligibility period for 2 years. We also note that VA interprets “verified” to include any firm that has been verified under either the 2006 Act or 2010 Act processes, meaning that this rule will allow thousands of firms to remain eligible for contracts even though they have not undergone the more-thorough process implemented under the 2010 Act. See the Agency Comments and Our Evaluation section of this report for a more-thorough discussion of this issue. 14. To this point, VA has not provided sufficient documentation to close the 7 recommendations that remain open. GAO will continue to work with VA to confirm the status of its efforts to address our recommendations and will close recommendations as the necessary supporting evidence is provided. 15. Our report states that VA has made progress in the area of fraud-awareness training. However, VA has not provided any documentation to show that fraud-awareness training is being provided on a regular basis, as we recommended. Our recommendation will remain open until the necessary evidence to close it is provided. Accordingly, we have not changed the language in our report. 16. 
The FAR and the VA Acquisition Regulations do not provide the Debarment Committee with specific processes and criteria for complying with the requirement in the 2006 Act to debar, for a reasonable period of time, firms and related parties that misrepresented their SDVOSB status. VA should provide additional guidance to the Debarment Committee on the specific process and criteria to use to debar firms as required by the 2006 Act. Accordingly, we have not changed the language in our report. 17. The recommendation requested that VA develop specific guidelines outlining the Debarment Committee's decision process to debar firms that misrepresent their SDVOSB status. VA needs to provide supporting documentation demonstrating that VA provided the Debarment Committee with the guidance outlining the decision process to debar firms that misrepresent their SDVOSB status. Accordingly, we have not changed the language in our report. 18. VA cites provisions of the FAR and the VA Acquisition Regulations containing guidance for continuing current contracts with firms that were found ineligible through the debarment process. However, our recommendation asked VA to develop procedures to remove SDVOSB contracts from ineligible firms. Accordingly, we have not changed the language in our report. 19. Our report acknowledges that VA advertises the debarments and prosecutions on the Debarment Committee, VA OIG, and CVE websites. However, our recommendation specifically asked for VA to formalize procedures to advertise debarments and prosecutions, and we have not received any documentation related to such procedures. Accordingly, we have not changed the language in our report. Service-Disabled Veteran-Owned Small Business Program: Governmentwide Fraud Prevention Control Weaknesses Leave Program Vulnerable to Fraud and Abuse, but VA Has Made Progress in Improving Its Verification Process. GAO-12-443T. Washington, D.C.: February 7, 2012. 
Service-Disabled Veteran-Owned Small Business Program: Additional Improvements to Fraud Prevention Controls Are Needed. GAO-12-205T. Washington, D.C.: November 30, 2011. Service-Disabled Veteran-Owned Small Business Program: Additional Improvements to Fraud Prevention Controls Are Needed. GAO-12-152R. Washington, D.C.: October 26, 2011. Service-Disabled Veteran-Owned Small Business Program: Preliminary Information on Actions Taken by Agencies to Address Fraud and Abuse and Remaining Vulnerabilities. GAO-11-589T. Washington, D.C.: July 28, 2011. Department of Veterans Affairs: Agency Has Exceeded Contracting Goals for Veteran-Owned Small Businesses, but It Faces Challenges with Its Verification Program. GAO-10-458. Washington, D.C.: May 28, 2010. Service-Disabled Veteran-Owned Small Business Program: Fraud Prevention Controls Needed to Improve Program Integrity. GAO-10-740T. Washington, D.C.: May 24, 2010. Service-Disabled Veteran-Owned Small Business Program: Case Studies Show Fraud and Abuse Allowed Ineligible Firms to Obtain Millions of Dollars in Contracts. GAO-10-306T. Washington, D.C.: December 16, 2009. Service-Disabled Veteran-Owned Small Business Program: Case Studies Show Fraud and Abuse Allowed Ineligible Firms to Obtain Millions of Dollars in Contracts. GAO-10-255T. Washington, D.C.: November 19, 2009. Service-Disabled Veteran-Owned Small Business Program: Case Studies Show Fraud and Abuse Allowed Ineligible Firms to Obtain Millions of Dollars in Contracts. GAO-10-108. Washington, D.C.: October 23, 2009.
The SDVOSB program provides federal contracting opportunities to business-owning veterans who incurred or aggravated disabilities in the line of duty. SBA administers the government-wide program, while VA maintains databases of veterans and SDVOSBs and oversees its own contracts. GAO has reported several times since 2009 that both programs were vulnerable to fraud and abuse and recommended improvements. In October 2010, Congress passed the Veterans Small Business Verification Act (2010 Act), part of the Veterans’ Benefits Act of 2010, to provide VA with tools to more thoroughly validate firms’ eligibility before listing them in VetBiz, the database used by VA contracting officials to award SDVOSB contracts. GAO was asked to assess (1) VA’s progress in addressing remaining vulnerabilities to fraud and abuse in its SDVOSB program and (2) actions taken by SBA or other federal agencies to improve government-wide SDVOSB fraud-prevention controls. GAO reviewed agency documentation and interviewed agency officials. GAO also investigated cases of alleged fraud and abuse. GAO did not project the extent of fraud and abuse in the program. The Department of Veterans Affairs (VA) Service-Disabled Veteran-Owned Small Business (SDVOSB) program remains vulnerable to fraud and abuse. VA has made inconsistent statements about its progress in verifying firms listed in VetBiz using the more-thorough process the agency implemented in response to the Veterans Small Business Verification Act (2010 Act). In one communication, VA stated that as of February 2011, all new verifications would use the 2010 Act process going forward. However, as of April 1, 2012, 3,717 of the 6,178 SDVOSB firms (60 percent) listed as eligible in VetBiz had not been verified under the 2010 Act process. Of these 3,717 firms, 134 received $90 million in new VA SDVOSB set-aside or sole-source contract obligations from November 30, 2011, to April 1, 2012. 
While the 2010 Act did not include a deadline for verification using the more-thorough process, the presence of firms that have only been subjected to the less-stringent process that VA previously used represents a continuing vulnerability. VA’s Office of Inspector General (OIG) reported that the less-stringent process was in many cases insufficient to establish control and ownership and in effect allowed businesses to self-certify as SDVOSBs with little supporting documentation. VA has taken some positive action to enhance its fraud prevention efforts by establishing processes in response to 6 of 13 recommendations GAO issued in October 2011, including conducting unannounced site visits to high-risk firms and developing procedures for referring suspicious SDVOSB applications to the OIG. VA has also begun action on some remaining recommendations, such as providing fraud-awareness training and removing contracts from ineligible firms, though these procedures need to be finalized. Regarding the government-wide SDVOSB program, no action has been taken by agencies to improve fraud-prevention controls. Relying almost solely on firms’ self-certification, the program continues to lack controls to prevent fraud and abuse. The Small Business Administration (SBA) does not verify firms’ eligibility status, nor does it require that they submit supporting documentation. While SBA is under no statutory obligation to create a verification process, five new cases of potentially ineligible firms highlight the danger of taking no action. These firms received approximately $190 million in SDVOSB contract obligations. In one case, a firm found ineligible by VA continued to self-certify as an SDVOSB and received about $860,000 from the General Services Administration and the Department of the Interior. Further, the Department of Defense (DOD) OIG reported in 2012 that DOD provided $340 million to firms that potentially misstated their SDVOSB status. 
To address these vulnerabilities, GAO previously suggested that Congress consider providing VA with the authority necessary to expand its SDVOSB eligibility verification process government-wide. Such an action is supported by the fact that VA maintains the database identifying which individuals are service-disabled veterans and is consistent with VA’s mission of service to veterans. However, the problems GAO identified with VA’s verification process indicate that an expansion of VA’s authority to address government-wide program problems should not be undertaken until VA demonstrates that its process is successful in reducing its own SDVOSB program’s vulnerability to fraud and abuse. GAO recommends that VA take steps to ensure that all firms within VetBiz have undergone the 2010 Act verification process. VA generally concurred with the recommendation but expressed concern about how specific report language characterized its program. GAO made some changes to the report but continues to believe that the program remains vulnerable to fraud and abuse.
DOD has acknowledged that long-standing deficiencies in its internal controls, business systems, and processes have prevented it from being able to demonstrate that its financial statements are reliable. DOD spends billions of dollars annually to maintain key business processes and operations and to acquire modern systems. However, progress in making system and process improvements has been slow and has hindered DOD’s ability to achieve financial audit readiness. DOD has undertaken several financial management improvement initiatives over the years to address deficiencies in business systems, processes, and controls through its Financial Improvement and Audit Readiness (FIAR) Plan and financial management reform methodology contained in its FIAR Guidance. DOD’s FIAR Guidance provides a standard, multiphased methodology that DOD components should follow in assessing their financial management processes and controls and in developing and implementing financial improvement plans. These plans, in turn, are intended to provide a framework for planning, executing, and tracking essential steps and related supporting documentation needed to achieve auditability. Congress mandated in the National Defense Authorization Act (NDAA) for Fiscal Year 2010 that DOD develop and maintain a FIAR Plan that includes the specific actions to be taken and costs associated with (1) correcting the financial management deficiencies that impair DOD’s ability to prepare complete, reliable, and timely financial management information and (2) ensuring that DOD’s financial statements are validated as ready for audit by September 30, 2017. In addition, the 2010 NDAA required that DOD provide recurring, semiannual reports to the congressional defense committees not later than May 15 and November 15 on the status of the department’s implementation of the FIAR Plan. 
Additionally, the 2010 NDAA required the first semiannual report to address DOD’s actions to develop standardized guidance for DOD components’ financial improvement plans, define oversight roles, and assign accountability for carrying out the FIAR Plan to appropriate officials and organizations. The NDAA for Fiscal Year 2013 further required that the FIAR Plan Status Reports include (1) a description of the actions that each military department has taken to achieve an auditable SBR for DOD no later than September 30, 2014, and (2) a determination by each military department’s Chief Management Officer on whether the military department would be able to achieve an auditable SBR no later than September 30, 2014, without an unaffordable or unsustainable level of one-time fixes and manual work-arounds and without delaying the auditability of the financial statements. In the event that the Chief Management Officer of a military department determined that the military department would not be able to achieve an auditable SBR by that date, the Chief Management Officer was required to explain why the military department could not meet that date and provide an alternative date for achieving an auditable SBR along with a plan to meet the alternative date. In the November 2014 FIAR Plan Status Report, DOD acknowledged that it did not meet the September 30, 2014, target date for achieving audit readiness of the SBR (which reflects budgetary activity across multiple years), but stated that the three military services asserted audit readiness of their Budgetary Schedules for fiscal year 2015 (which reflect activity for 1 year) in the last quarter of fiscal year 2014. The NDAA for Fiscal Year 2014 mandated that the Secretary of Defense ensure that a full audit is performed on DOD’s fiscal year 2018 financial statements and submit the results of that audit to Congress no later than March 31, 2019.
Further detail on the distinctions between the SBR and Budgetary Schedule is provided in appendix II. The FIAR Guidance was first issued by the DOD Comptroller in May 2010 and provides a standardized methodology for DOD components to follow for achieving financial management improvements and audit readiness objectives. However, according to DOD Comptroller officials, in applying this guidance during the initial years of discovery and documentation, they spent an inordinate amount of time validating the military services’ management assertions, which delayed an independent review by the DOD OIG or an IPA. As a result, DOD Comptroller officials revised their approach to leverage independent reviews by the auditors to focus resources and remediation efforts on the most critical deficiencies. To emphasize the need for the military services to take ownership of their own audit readiness assertions for their Budgetary Schedules, this approach was applied to the decision to proceed with the audits of the fiscal year 2015 Budgetary Schedules. According to DOD Comptroller officials, they participated in ongoing dialogue with the military services at audit readiness oversight meetings and reviewed their progress in key areas, but the military services did not have to provide detailed documentation to the DOD Comptroller’s office in support of their assertions of audit readiness for their Budgetary Schedules. Instead, each military service submitted a memo to the DOD OIG prior to September 30, 2014, asserting that its Budgetary Schedule was either ready or would be ready for audit beginning with fiscal year 2015. In their assertion memos, the Army stated that its Budgetary Schedule was ready for audit and the Air Force stated that its schedule would be ready. The Army and Air Force assertion memos stated that the assertions depended on planned corrective actions (e.g., producing a complete population of transactions and implementing service provider integration).
The Navy’s assertion memo stated that its Budgetary Schedule was ready for audit based on the results of the multiyear effort to document, test, and remediate known control deficiencies related to its procedures, processes, controls, and financial systems. Each military service is organized into two reporting entities: a general fund and a working capital fund. A military service’s general fund account structure includes five major groups: (1) military personnel; (2) operations, readiness and support; (3) procurement; (4) research, development, test and evaluation; and (5) family housing/military construction. Each military service’s programs are authorized by Congress in annual NDAAs, and each military service receives appropriations from Congress through annual appropriations acts. Table 1 presents information on each military service’s fiscal year 2015 general fund appropriations, personnel, and locations. The Army, Navy, and Air Force asserted audit readiness for their Budgetary Schedules in 2014 and underwent their first Budgetary Schedule audits for fiscal year 2015. The IPAs for all three military services issued disclaimers of opinion on the respective services’ fiscal year 2015 Budgetary Schedules and identified material weaknesses in internal control. For example, all three IPAs reported findings related to the military services’ inability to (1) provide complete populations of transactions, (2) provide documentation to support transactions, (3) effectively implement information system controls to protect their financial data in both general ledgers and related feeder systems, and (4) exercise sufficient oversight of their service providers. The IPAs for all three military services reported each of these reportable findings as a stand-alone material weakness or as part of a larger, combined material weakness. 
Army, Navy, and Air Force management generally concurred with the findings in the respective IPA reports and stated that they will develop and execute corrective actions to address the IPAs’ related findings. The IPAs for all three military services reported that they were unable to identify complete populations of transactions for the respective services’ fiscal year 2015 Budgetary Schedules. This reportable finding contributed to each IPA’s disclaimer of opinion. For financial statements to be considered reliable, they must reflect the results of all significant financial activity (transactions) during the reporting period (e.g., 1 fiscal year for the Budgetary Schedules). When performing a financial statement audit, one of the assertions that the auditor evaluates is completeness, which pertains to whether all transactions and events that should have been recorded in the financial statements were recorded. Most military service transactions are initially processed and recorded in information technology systems called feeder systems. For example, payroll transactions are processed in a military service’s payroll system. Transactions processed in feeder systems should eventually be transferred to the military service’s general ledger where all transactions are accumulated. At the end of a reporting period, each military service’s general ledger data are transferred to DOD’s financial reporting system, which summarizes the financial data according to the line items that are ultimately reported in the financial statements. Figure 1 illustrates how financial data for the military services are initially processed, then combined with other data as they are transferred from one automated system to another, and ultimately result in the Budgetary Schedule. When data are transferred from one system to another, interface controls should be in place to reasonably assure that the data are transferred accurately, timely, and completely. 
The objectives of interface controls are to implement effective (1) interface strategy and design and (2) interface processing procedures. Effective interface procedures reasonably assure that interfaces are processed completely, accurately, and only once in the proper period; interface errors are rejected, isolated, and corrected in a timely manner; and access to interface data and processes is properly restricted. The systems should be designed with balancing controls, such as control totals and record counts, to reasonably assure that data are controlled. Also, the entity should have effective procedures to reconcile control information between the two systems. If the reconciliation identifies differences, these differences should be researched to determine their causes and any errors corrected. This reconciliation process helps ensure that the results of all transactions occurring during the reporting period are accurately reported in the financial statements. In conducting the audits of the military services’ Budgetary Schedules, the IPAs reviewed the military services’ reconciliation processes to determine whether all transactions were completely transferred from military services’ feeder systems to the general ledgers and, ultimately, to the DOD-wide financial reporting system. The IPAs reported that the military services did not have sufficient reconciliation processes in place to reasonably assure that the general ledgers included all of the transactions that should be included in the Budgetary Schedules, increasing the risk that the Budgetary Schedules did not reflect the results of all budgetary transactions that occurred. The following are examples of completeness issues reported by the IPAs. The Army was unable to reconcile the first quarter fiscal year 2015 population of civilian payroll transactions from a civilian pay feeder system to one of its four general ledger systems. 
The Navy had no assurance that transactions were completely and accurately recorded in its four general ledger systems because it has not designed and implemented sustainable and recurring manual and automated reconciliations with its more than 100 feeder systems. The Air Force’s nonintegrated information technology system environment requires both manual reentry of data into multiple systems and complex system interfaces. The auditors found that the Air Force did not always reconcile the data after entering the data manually or transferring the data from interfacing systems. The auditors reported that many of the reconciliations were newly implemented and not in place during the entire fiscal year or were not performed through the end of the fiscal year. The IPAs for all three military services noted the lack of adequate supporting documentation as a reportable finding on their Budgetary Schedule audits, which also contributed to each IPA’s disclaimer of opinion. Appropriate documentation for financial transactions allows the military services to support financial statement line items and allows auditors to test line items. For all three military services, auditors found that adequate documentation to support disbursements and obligations was not always available. The lack of adequate documentation increased the risk of a misstatement on the Budgetary Schedules. According to Standards for Internal Control in the Federal Government, developing and maintaining thorough and accurate documentation to support financial transactions is essential to management’s ability to effectively monitor financial transactions and provide reasonable assurance that internal controls are in place and operating as intended. Accurate documentation also allows management to correct errors timely and safeguard assets. Additionally, appropriate documentation of financial transactions allows support to be readily available for examination by an auditor. 
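The feeder-to-ledger reconciliation the IPAs tested for can be sketched as a control-total and record-count comparison between two systems. Everything below, including the transaction IDs, amounts, and field layout, is hypothetical and stands in for the services’ actual feeder and general ledger systems.

```python
from decimal import Decimal

def reconcile(feeder_txns, ledger_txns):
    """Compare control totals between a feeder system and the general ledger.

    Each transaction is a (doc_id, amount) pair. Returns the count and amount
    differences plus the document IDs present in only one system, so that
    differences can be researched and any errors corrected.
    """
    feeder_ids = {doc_id for doc_id, _ in feeder_txns}
    ledger_ids = {doc_id for doc_id, _ in ledger_txns}
    feeder_total = sum((amt for _, amt in feeder_txns), Decimal("0"))
    ledger_total = sum((amt for _, amt in ledger_txns), Decimal("0"))
    return {
        "count_difference": len(feeder_txns) - len(ledger_txns),
        "amount_difference": feeder_total - ledger_total,
        "missing_from_ledger": sorted(feeder_ids - ledger_ids),
        "missing_from_feeder": sorted(ledger_ids - feeder_ids),
    }

# Hypothetical example: one payroll transaction never reached the ledger.
feeder = [("PAY-001", Decimal("1200.00")), ("PAY-002", Decimal("850.50"))]
ledger = [("PAY-001", Decimal("1200.00"))]
result = reconcile(feeder, ledger)
```

A nonzero difference in any of these fields would be researched to determine its cause, consistent with the reconciliation process described above; the sketch omits timing differences and multi-period cutoff issues that real reconciliations must handle.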
The following are examples of supporting documentation issues reported by the IPAs: The Army did not have documentation readily available to demonstrate that specific transactions were properly reported in the Budgetary Schedule. Specifically, the auditors reported that documentation supporting contractual services, military payroll, civilian payroll, reimbursable authority, disbursement, and collection transactions was not available, was insufficient, or did not agree with the general ledger detail. The Navy did not always have the underlying detail for journal vouchers or sometimes lacked complete explanations for the purpose of journal vouchers. The auditors reported that in some cases, journal vouchers were used to adjust amounts to agree with Department of the Treasury (Treasury) or trading partner balances, without underlying support for the adjustment amounts. The Air Force did not always maintain adequate documentation to support disbursements and obligations. The auditors reported a lack of adequate supporting documentation for transactions related to travel expenses and payments made to vendors and contractors for purchases of goods and services. The IPAs for all three military services identified a lack of adequate information systems general controls as another reportable finding, which precluded the IPAs’ ability to rely on the services’ financial data and thus contributed to their disclaimers of opinion. The military services’ ability to efficiently and effectively manage and oversee their day-to-day operations and programs relies heavily on the capacity of their financial management information systems to produce complete, reliable, timely, and consistent financial information. The IPAs for all three military services identified information systems general control deficiencies related to access controls, segregation of duties, and configuration management.
In addition, the IPA for the Army identified information systems general control deficiencies in security management and contingency planning; the IPA for the Navy reported information system general control deficiencies in security management and information system control deficiencies related to interface controls; and the IPA for the Air Force reported information system control deficiencies related to interface controls. The following are examples of information system control deficiencies reported by the IPAs. The Army did not consistently perform effective daily operating system backup procedures or maintain evidence of operating system and database backups when performed for certain financial systems. As a result, the IPA concluded that this condition could affect the Army’s ability to provide financial data that are complete, valid, and accurate. Further, the IPA found that the Army and its service providers had not implemented sufficient, effective information system general controls to protect the Army’s general ledgers and related feeder systems’ financial data. The Navy did not consistently implement effective interface controls between its systems and its service providers. As a result, the IPA found that the Navy was unable to reasonably assure the completeness and accuracy of financial data flowing between its systems and the service providers. The IPA found that the Navy also lacked effective information system controls over its general ledger systems and financial feeder systems and had pervasive control deficiencies in its decentralized information system environment. The Air Force did not have controls in place to prevent certain individuals from controlling key aspects of computer-related operations; as a result, the IPA found that unauthorized access to systems and system information and unauthorized actions had occurred. 
Multiple systems allowed a significant number of administrator users the authority to complete an entire functional process by inputting, processing, and approving transactions. Additionally, the IPA reported that developers were granted inappropriate access to make modifications directly to the production environment and delete system files. The IPAs identified insufficient oversight by the Army, Navy, and Air Force of DOD service organizations, also referred to as service providers, as a reportable finding that also contributed to each IPA’s disclaimer of opinion. Specifically, the IPAs for all three military services found that the services did not exercise sufficient oversight of their service providers responsible for performing financial reporting activities to ensure completeness, accuracy, and validity of the financial data reported and evaluate the complementary user entity controls included in the Statement on Standards for Attestation Engagements (SSAE) No. 16 reports to determine design and operating effectiveness. The Army, Navy, and Air Force utilize many service providers to improve efficiency and standardize business operations. Among the many service providers within DOD are the Defense Finance and Accounting Service (DFAS), Defense Information Systems Agency, Defense Logistics Agency, and Defense Contract Management Agency. According to the FIAR Guidance, the Army, Navy, and Air Force rely on these DOD service organizations to provide a variety of accounting, personnel, logistics, and system operations services. Each of the reporting entities—the Army, Navy, and Air Force—is ultimately responsible for ensuring that all key processes, systems, internal controls (including those performed by service organizations), and supporting documentation affecting its financial reporting objectives are audit ready. 
However, service providers working with reporting entities are also responsible for executing audit readiness activities surrounding service provider systems and data, processes and internal controls, and supporting documentation that have a direct effect on the reporting entities’ audit readiness. Therefore, to ensure successful completion of audit readiness tasks, the reporting entity and service provider must agree on the roles and responsibilities for the authorization, initiation, processing, recording, and reporting of transactions; the information technology controls affected by the service provider; or both. The FIAR Guidance states that a shared understanding and agreement between the service provider and reporting entity on these roles and responsibilities must be documented in a service-level agreement or memorandum of understanding. According to the FIAR Guidance, these mutual responsibilities include maintaining open communications and coordinating with one another; establishing common expectations in writing; providing additional system and financial information within agreed-upon time frames; providing access to subject matter experts or contractors supporting those organizations within agreed-upon time frames; working together to discover and correct audit impediments; and establishing a common, detailed understanding of the method for obtaining assurance. According to the FIAR Guidance, reporting entity management is responsible for internal control over its financial information and therefore must ensure that it understands what financially significant activities are outsourced to service providers and the effectiveness of the service providers’ related internal controls. In turn, each service provider is responsible for providing a description of its controls that may affect its customer reporting entities’ control environments, risk assessment, control activities, and information and communication systems.
Appendix D to Office of Management and Budget (OMB) Circular A-123, “Compliance with the Federal Financial Management Improvement Act of 1996,” requires each service provider to provide a Report on Controls at a Service Organization Relevant to User Entities’ Internal Control over Financial Reporting to its customers or allow customer auditors to perform appropriate tests of internal controls at its organizations. These reports are an important tool for agency management and auditors to use in evaluating the effect of the controls at the service organization on the user entities’ controls for financial reporting. The following are IPA-identified examples of insufficient oversight by the Army, Navy, and Air Force of their service providers. The Army did not have policies and procedures to assess service providers that host or manage financial systems that support accounts reported on the Army's Budgetary Schedule. Specifically, the IPA found that the Army did not document its understanding of the services provided or the related SSAE No. 16 reports so that it could determine whether the scope of these reports met the Army’s needs for obtaining assurance regarding service provider controls. The Navy did not implement effective controls over its service provider systems. As a result, the IPA (1) encountered difficulties in identifying key points of contact within the Navy, (2) reported that appropriate service-level agreements with the Navy’s service providers were not fully developed, and (3) noted that Navy personnel did not periodically review available SSAE No. 16 reports. The Air Force compiled a list of complementary user entity controls; however, the IPA found that it did not validate the operating effectiveness of the controls or verify the accuracy and completeness of the complementary user entity controls list. 
The IPAs for the Army, Navy, and Air Force collectively issued over 700 notices of findings and recommendations to the respective military services during the course of the fiscal year 2015 Budgetary Schedule audits. Each notice had one or more findings and discussed deficiencies that the IPA identified during the audit along with one or more corresponding recommendations for addressing the deficiencies. These findings pertained primarily to internal control deficiencies, with almost 75 percent related to information systems. In addition, the military services received findings and recommendations related to financial management deficiencies identified during audits performed by other audit organizations, including the DOD OIG and GAO, which also must be remediated. Also, management may have identified financial management deficiencies, for example, through OMB Circular A-123 reviews or management studies. Each military service is responsible for establishing its own processes for addressing these findings and recommendations by (1) identifying and tracking them, (2) prioritizing them, (3) developing corrective action plans (CAP) to address them, and (4) monitoring the status of CAP implementation. However, we found that, to varying degrees, each service did not have sufficient processes for doing so. While the military services should have already had such processes in place to manage findings and recommendations resulting from any audit, the need to effectively implement these processes has become more important in light of (1) the many findings and recommendations that resulted from the Budgetary Schedule audits, (2) future audits that will have a broader scope of work and may therefore identify additional findings, and (3) the short period remaining before the fiscal year 2018 audits must occur.
For example, with the large number of findings resulting from the Budgetary Schedule audits and the effort it will take to address them all, prioritization of deficiencies that preclude the ability to audit the financial statements is crucial for the military services to achieve audit readiness. We compared the military services’ existing processes to the guidance and standards defined in the Implementation Guide for OMB Circular A-123, Management’s Responsibility for Internal Control, Appendix A, Internal Control over Financial Reporting (Implementation Guide for OMB Circular A-123); the FIAR Guidance; and Standards for Internal Control in the Federal Government. According to these sources, which we used as criteria in evaluating the design of the services’ existing processes, a sound approach to addressing financial management deficiencies would include the following four elements. 1. Identifying and tracking audit findings and recommendations. Federal internal control standards require that managers (1) promptly evaluate findings from audits and other reviews, including those showing deficiencies and recommendations reported by auditors and others who evaluate agencies’ operations; (2) determine proper actions in response to findings and recommendations from audits and reviews; and (3) complete, within established time frames, all actions that are needed to correct or otherwise resolve the matters brought to management’s attention. To ensure that these actions are taken, management needs a means—that is, accurate and adequate documentation—to keep track of the findings and recommendations. 2. Prioritizing findings and recommendations. The Implementation Guide for OMB Circular A-123 and the April 2016 FIAR Guidance state that the extent to which corrective actions are tracked should be commensurate with the severity of the deficiency.
The Implementation Guide for OMB Circular A-123 also states that an agency’s senior assessment team will work with the responsible officials and personnel to determine which deficiencies are cost beneficial to correct. 3. Developing CAPs. The Implementation Guide for OMB Circular A-123 states that a CAP, including targeted milestones and completion dates, will be drafted and progress will be monitored. The elements of a CAP include a summary description of the deficiency; the year the deficiency was first identified; the target corrective action date (the date of management follow-up); the agency official responsible for monitoring progress; the indicators, statistics, or metrics used to gauge resolution progress (in advance of audit follow-up) in order to validate the resolution of the deficiency (referred to as outcome measures for assessing the effectiveness of the corrective actions for purposes of this report); and the quantifiable target or otherwise qualitative characteristic that reports how resolution activities are progressing (referred to as interim milestones for monitoring progress on interim actions for purposes of this report). DOD’s FIAR Guidance states that CAPs should be developed for all material weaknesses and that progress in implementing these plans should be periodically assessed and reported to management, which should track progress to ensure timely and effective results. For significant deficiencies, as well as nonsignificant deficiencies that were not externally reported, CAPs should be developed and tracked internally at the appropriate level. 4. Monitoring the status of CAP implementation. Federal internal control standards state that the resolution process begins when audit or other review results are reported to management, and is completed only after action has been taken that (1) corrects identified deficiencies, (2) produces improvements, or (3) demonstrates that the findings and recommendations do not warrant management action.
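As a rough illustration, the CAP elements enumerated in the Implementation Guide map naturally onto a simple record. The class name, field names, and sample values below are invented for the sketch and are not DOD’s actual tracker schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CorrectiveActionPlan:
    """One CAP record carrying the elements the Implementation Guide describes."""
    deficiency_summary: str                  # summary description of the deficiency
    year_identified: int                     # year the deficiency was first identified
    target_date: date                        # target corrective action date
    responsible_official: str                # official responsible for monitoring progress
    outcome_measures: list = field(default_factory=list)    # metrics to validate resolution
    interim_milestones: list = field(default_factory=list)  # (description, due date) pairs

    def is_overdue(self, as_of: date) -> bool:
        """True if the target corrective action date has passed."""
        return as_of > self.target_date

# Hypothetical CAP for a completeness finding.
cap = CorrectiveActionPlan(
    deficiency_summary="Civilian payroll feed not reconciled to general ledger",
    year_identified=2015,
    target_date=date(2017, 3, 31),
    responsible_official="Audit readiness directorate (illustrative)",
    outcome_measures=["100% of payroll feeds reconciled monthly"],
    interim_milestones=[("Automate reconciliation report", date(2016, 9, 30))],
)
```

A tracker built from such records would give management the documented basis the standards call for when deciding whether sufficient action has been taken to close a deficiency.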
The Implementation Guide for OMB Circular A-123 states that an entity’s senior management council, or similar forum(s), has ownership and accountability for resolving deficiencies. These forums should use CAPs as a guide or road map for discussion as well as in determining when sufficient action has been taken to declare that a deficiency has been corrected. According to DOD’s FIAR Guidance, management’s process for resolution and corrective action of identified material weaknesses in internal control must do the following: Provide for appointment of an overall corrective action accountability official from senior agency management. The official should report to the agency’s senior management council, if applicable. Maintain accurate records of the status of the identified material weaknesses through the entire process of resolution. Assure that CAPs are consistent with laws, regulations, and DOD policy. Assure that performance appraisals of appropriate officials reflect effectiveness in resolving or implementing corrective actions for identified material weaknesses. The FIAR Guidance further states that a determination that a deficiency has been corrected should be made only when sufficient corrective actions have been taken and the desired results achieved. This determination should be in writing and, along with other appropriate documentation supporting the determination, should be available for review by appropriate officials. After comparing the criteria under the four elements above with the design of the Army, Navy, and Air Force policies and procedures for tracking and resolving audit findings, we found that while Navy had sufficiently drafted policies and procedures to address three out of the four elements, the Army’s and Air Force’s policies and procedures did not sufficiently address any of the four elements, as detailed below. 
The Army’s process for managing the remediation of financial management deficiencies was not comprehensive, and portions of it were evolving as it began to address the financial management-related findings and recommendations from the fiscal year 2015 Budgetary Schedule audit. The Army’s regulation regarding audits of the Army includes general guidance establishing responsibility for responding to audit findings from different sources, such as the Army Audit Agency, DOD OIG, GAO, and IPAs. This regulation primarily focuses on the role of the Army Audit Agency and states that the Army Audit Agency is responsible for forwarding external audit reports to the appropriate principal Army official who has responsibility for responding to a given report. In addition, the Army organization with overall responsibility for the results from the Budgetary Schedule audit has its own policies and procedures for responding to audit recommendations. However, the Army lacks procedures for reasonably assuring that it identifies and tracks all financial management findings and recommendations. In addition, the Army’s policies and procedures (1) did not provide sufficient details on how to prioritize financial management findings and recommendations, (2) were not consistent with the Implementation Guide for OMB Circular A-123 for developing CAPs, and (3) did not sufficiently describe how the status of corrective actions should be monitored. The status of each open finding and recommendation pertaining to Army financial management issues is supposed to be tracked by one of two organizations. 
Specifically, under the Assistant Secretary of the Army, Financial Management and Comptroller (ASA(FM&C)), (1) the Accountability and Audit Readiness Directorate (hereafter referred to as the Audit Readiness Directorate) is responsible for tracking findings and recommendations from the Budgetary Schedule audit, as well as other financial statement-related audits and audit readiness activities, and (2) the Internal Review Directorate is responsible for tracking other financial management-related findings and recommendations, primarily from the DOD OIG and GAO but also some from the Army Audit Agency, that pertain to the ASA(FM&C). When they receive new audit recommendations, each of these directorates determines which Army organization is responsible for addressing each recommendation under its purview. We reviewed the tracking procedures for each of these directorates and found that they varied. Specifically, the Audit Readiness Directorate uses a spreadsheet called a CAP tracker to keep track of the hundreds of findings and recommendations issued during the audit of the Army’s fiscal year 2015 Budgetary Schedule, as well as the status of CAPs developed to remediate them. During our audit, we looked at the CAP tracker in March 2016 and found that it consisted primarily of findings and recommendations from the Budgetary Schedule audit, but also included two findings and recommendations each from the DOD OIG and the Army Audit Agency. These other recommendations were included in the CAP tracker because they also pertained to audit readiness issues. The CAP tracker is mentioned in the Audit Readiness Directorate’s Standard Operating Procedure (SOP) for follow-up actions needed to respond to financial statement audits and other audit readiness activities, which it finalized in May 2016. However, the SOP does not provide any details about what information for each finding and recommendation should be in the CAP tracker, such as interim milestones and outcome measures. 
The Internal Review Directorate also uses a spreadsheet to track the findings and recommendations that it is responsible for tracking. However, according to an Internal Review Directorate official, the directorate did not have an SOP or other written policies and procedures pertaining to its tracking of audit recommendations or to any other aspect of managing the audit remediation process and related corrective actions to address them. While reviewing the CAP tracker in March 2016, we identified 42 unresolved recommendations related to financial management that resulted from prior audits conducted by the Army Audit Agency, the DOD OIG, and GAO and that should have been tracked by the Internal Review Directorate. We found that as of August 2016, the Internal Review Directorate was tracking 37 of these recommendations while the other 5 open recommendations were not included in its tracker. An Internal Review Directorate official said that these 5 recommendations, all of which stemmed from a GAO report from fiscal year 2013, were made before he began working in the directorate, and therefore, he did not know why they were not included. According to this official, neither the Internal Review Directorate nor anyone else within ASA(FM&C) had established procedures to ensure that all financial management-related recommendations were being tracked within this organization. Without policies and procedures that clearly specify how audit findings and recommendations should be tracked and which types of findings and recommendations should be tracked by each of the two Army organizations described above—the Audit Readiness Directorate and the Internal Review Directorate—it can be difficult to hold either organization accountable and reasonably assure that procedures are followed consistently, particularly when there is staff turnover. 
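A procedure of this kind can be as simple as a periodic reconciliation between the internal tracker and each audit source's list of open recommendations, flagging anything untracked. The following is a minimal, hypothetical sketch; all identifiers and data are invented for illustration, not actual Army records:

```python
# Hypothetical sketch: reconcile an internal tracker against each audit
# source's own list of open recommendations, flagging anything untracked.
# All identifiers here are invented, not actual Army or GAO records.
tracked = {"GAO-13-01.r1", "GAO-13-01.r2", "OIG-15-07.r1"}

open_by_source = {
    "GAO": {"GAO-13-01.r1", "GAO-13-01.r2", "GAO-13-02.r1"},
    "DOD OIG": {"OIG-15-07.r1", "OIG-15-07.r2"},
}

def find_untracked(tracked, open_by_source):
    """Return, per audit source, open recommendations missing from the tracker."""
    return {
        source: sorted(recs - tracked)
        for source, recs in open_by_source.items()
        if recs - tracked
    }

print(find_untracked(tracked, open_by_source))
```

Run periodically, a check like this would have surfaced the five untracked GAO recommendations before staff turnover erased the institutional memory of why they were omitted.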
If findings and recommendations are overlooked and do not get tracked by either organization, they are less likely to be remediated in a timely manner or possibly at all. The Audit Readiness Directorate’s SOP includes general guidance for prioritizing audit findings that is based on direction and criteria outlined in the FIAR Guidance. Specifically, the SOP cites the following criteria for prioritization:
- severity of deficiencies (material weakness or significant deficiency);
- designation as a FIAR deal-breaker;
- findings that reference a documentation gap;
- findings that are pervasive across business processes; and
- finding sensitivity (e.g., failure of good stewardship of government resources).
However, the SOP does not identify which of the criteria might be a higher priority than others. Consequently, if the Audit Readiness Directorate followed the SOP criteria for prioritization, most of the over 200 Budgetary Schedule audit findings would be considered a priority, which would defeat the intent of identifying priorities. To avoid this, Army officials said that based on input from the IPA they decided to narrow the criteria for prioritizing the Budgetary Schedule findings and recommendations to those related to the critical areas of completeness, documentation, Fund Balance with Treasury, and information systems. Using these criteria, Army officials said they identified specific findings and recommendations, and the related CAPs, as high priority. These criteria are a subset of the criteria in the Audit Readiness Directorate’s SOP as these issues were reported as material weaknesses by the IPA and three of the four areas are also considered deal-breakers in the FIAR Guidance. However, these more narrow criteria are not yet included in the Audit Readiness Directorate’s SOP. The Internal Review Directorate does not have any policies or procedures for prioritizing the audit recommendations that it tracks.
Instead, according to a directorate official, the action officers responsible for specific recommendations are in the best position to determine priorities as they are more familiar with the conditions that generated each recommendation. Without sufficiently detailed policies and procedures for consistently and systematically prioritizing audit findings, the Army is at increased risk of not identifying and focusing its efforts on its most critical financial management weaknesses, and thereby not taking the steps necessary to resolve them at the earliest possible date. Only one of the two Army organizations responsible for remediating audit findings had documented guidance for developing CAPs. The Audit Readiness Directorate’s SOP describes the procedures for developing CAPs to address findings from financial statement audits and audit readiness activities. For example, it states that the Army official responsible for developing a CAP should identify the root cause of the finding related to the CAP. It also states that CAPs should be developed using the format and template provided by the Audit Readiness Directorate. This template includes most of the elements recommended in the Implementation Guide for OMB Circular A-123, including a description of the deficiency, the responsible official, interim milestones, outcome measures, and an estimated completion date. However, the Internal Review Directorate does not have any policies requiring the development of CAPs or any other procedures that should be followed to remediate the audit recommendations that it tracks, according to a directorate official. Instead, the official said that after the Internal Review Directorate forwards a report to an action officer for remediation, the action officer will provide the directorate an estimated completion date for each recommendation that the officer is responsible for. 
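The elements recommended in the Implementation Guide for OMB Circular A-123 lend themselves to a simple completeness check on each CAP record. The sketch below is illustrative only; the field names and sample record are assumptions, not the Army's actual template:

```python
# Hypothetical sketch: check a CAP record for the elements the
# Implementation Guide for OMB Circular A-123 recommends.
# Field names and the sample record below are illustrative assumptions.
REQUIRED_ELEMENTS = [
    "deficiency_description",
    "responsible_official",
    "interim_milestones",
    "outcome_measures",
    "estimated_completion_date",
]

def missing_elements(cap):
    """Return the recommended CAP elements that are absent or empty."""
    return [field for field in REQUIRED_ELEMENTS if not cap.get(field)]

sample_cap = {
    "deficiency_description": "Unsupported adjustments to Fund Balance with Treasury",
    "responsible_official": "Audit Readiness Directorate",
    "interim_milestones": ["Reconcile prior-year differences"],
    "estimated_completion_date": "2017-06-30",
}
print(missing_elements(sample_cap))  # ['outcome_measures']
```

A check of this kind, applied uniformly, would make gaps such as missing outcome measures visible regardless of which directorate owns the finding.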
When responsible organizations or officials believe that recommendations have been remediated, they send descriptions of the actions taken to the Internal Review Directorate with messages indicating that the recommendations can be closed. These procedures were not documented and do not include the elements recommended in the Implementation Guide for OMB Circular A-123. Without documented procedures for developing CAPs for all audit findings, the Army is at increased risk of not developing CAPs for all findings or developing CAPs that do not include all necessary elements. Once a plan for a corrective action is complete, the CAP is expected to be carried out and monitored by senior management, as recommended by the Implementation Guide for OMB Circular A-123. According to Army officials, any detailed documentation of CAP implementation is maintained by the person responsible for carrying out the plan. We found that this was the case for the remediation of all financial management-related recommendations, whether they fell under the responsibility of the Audit Readiness Directorate or the Internal Review Directorate. Because the Internal Review Directorate does not require CAPs for the audit recommendations under its purview, it also does not monitor the status of CAP implementation. Rather, it only maintains information about when the corrective actions for a given audit recommendation are considered complete and the recommendation can therefore be closed. For the audit recommendations that fall under the Audit Readiness Directorate, the directorate’s SOP states that the directorate is responsible for monitoring the progress of the CAPs and provides some information on how the monitoring should be conducted. For example, it states that monitoring is carried out via regularly scheduled meetings during which the senior responsible officials for each CAP provide briefings about the status of corrective actions.
The SOP mentions the following types of meetings:
- Synchronization calls are held weekly between the Audit Readiness Directorate and one of two different groups that alternate every other week. According to Army officials, one group includes the Enterprise Resource Planning (ERP) managers, feeder system managers, and DFAS, while the other group includes the Army commands—that is, business process areas.
- The Army Audit Committee meets bimonthly and is chaired by the ASA(FM&C). Army officials told us that these meetings include all of the Deputy ASAs, Army principals and major commands, as well as DOD management such as the DCMO and officials from the DOD Comptroller’s office, including the Deputy Chief Financial Officer.

However, the SOP does not provide complete information about all of the monitoring activities that take place. For example, Army officials told us that monitoring of the CAPs takes place at other meetings as well, although these meetings are not described in the SOP. These meetings included the following:
- The Senior Level Steering Group/Senior Assessment Team, which represents senior Army management responsible for monitoring development and implementation of the CAPs. This group meets quarterly with Army commands and service providers to review the status of the CAPs.
- Audit Status Update meetings, which are held biweekly, and during which the Audit Readiness Directorate provides updates to the Army Comptroller.
- Audit Readiness Directorate’s “Stand-Up” meetings, which are held twice a week to communicate with the field on the status of the CAPs. One meeting per week focuses on Budgetary Schedule issues, while the other meeting focuses on the Balance Sheet.

According to Army officials, any decisions or action items resulting from these various meetings are documented, but we found that the requirements for documenting such decisions are not included in the Army’s SOP.
Further, the Army’s SOP did not describe or include specific procedures for the preparation of “scorecards” that the Audit Readiness Directorate uses to monitor the status of CAPs. These scorecards provide an overview of the status of CAPs at status meetings and other management oversight meetings. Army officials told us that a scorecard is prepared for the development of each high-priority CAP and indicates both the organization and the individual responsible for developing and implementing the CAP, as well as estimated and actual dates for completion of the CAP. After a CAP has been developed, the Army uses another scorecard to monitor the execution of the CAP. This scorecard includes some of the same information as the development scorecard, but instead of CAP completion dates, it includes estimated and actual dates for the execution and validation of each CAP. According to Army officials, the scorecards are updated every 2 weeks. However, the scorecards do not include any interim milestones or other metrics that could be used to gauge progress or any outcome measures for assessing the effectiveness of corrective actions. These types of metrics are an important tool for monitoring the status of CAPs. Instead of metrics, each scorecard uses color-coded symbols (i.e., red, yellow, blue, and green) to indicate the status of each CAP. For example, to indicate the status of CAP execution, green indicates that a CAP’s execution has been completed, while blue indicates that the completed CAP execution has been validated by an independent party. The SOP states that the Audit Readiness Directorate will use a CAP tracker to track the progress of CAP development and implementation. However, as discussed previously, it does not provide any details about what should be in the CAP tracker. For example, the SOP does not specify which findings and recommendations, and related CAPs, should be included in the tracker. 
It also does not describe what information should be maintained in the tracker for each CAP. We obtained periodic updates of the CAP tracker and determined that while the tracker contained most of the elements recommended in the Implementation Guide for OMB Circular A-123, it did not include any interim milestones or other metrics that could be used to gauge progress, even though many of the CAPs themselves did. In addition, similar to the CAPs, the CAP tracker did not include any outcome measures for assessing the effectiveness of corrective actions. While the Audit Readiness Directorate has an SOP that has some information about its monitoring procedures, it lacks certain details about procedures that are being performed as well as other monitoring procedures that should be performed. Moreover, the Internal Review Directorate does not have any SOP regarding such procedures. Without complete policies and procedures to describe all aspects of the Army’s process for monitoring the implementation status of CAPs, the Army is at increased risk that information about the status of CAPs will not be adequately documented and monitored. Prior to April 2015, the Navy had a decentralized approach for identifying and tracking its findings and recommendations from audits or examinations, with primary responsibility for these activities assigned to the specific Navy program area under audit. In April 2015, in anticipation of the notices of findings and recommendations that the Navy expected would be issued as a result of the Budgetary Schedule audit, the Navy’s Office of Financial Operations established the Evaluation, Prioritization, and Remediation (EPR) Program. The purpose of this program is to build a centralized capability to manage and track known deficiencies (reported from both internal and external sources) and manage the related remediation process across the Navy. 
As part of the effort, the Navy drafted several SOPs for implementing its various remediation activities. Although the Navy has not finalized the SOPs related to this process, the draft guidance includes processes to identify and track its financial management-related deficiencies, prioritize its audit findings and recommendations, develop CAPs to address these deficiencies, and monitor and report its findings and recommendations. While the design of these processes is generally consistent with the Implementation Guide for OMB Circular A-123, the Navy’s draft guidance for identifying and tracking findings from external sources such as DOD OIG and GAO and from the Naval Audit Service does not include specific details and procedures for reasonably assuring the (1) completeness of the universe of audit findings and recommendations from these sources and (2) accuracy of the status of these audit findings and recommendations, as discussed in detail below. Navy officials told us that they plan to finalize the draft SOPs by December 31, 2016. The Navy drafted its EPR Program: Deficiency Universe Guide to describe its process to centrally identify and track its financial management-related findings and recommendations from both internal and external sources. This draft guide includes procedures on steps the Navy is taking to gather data from internal and external stakeholders, the data structure used, activities to preserve the monthly integrity of these data, and the reporting requirements for keeping the information updated. The Navy uses a detailed spreadsheet, called a Deficiency Universe Tracker (tracking spreadsheet), to keep track of the over 200 notices of findings resulting from the Budgetary Schedule audit, as well as to monitor the status of the related CAPs. The tracking spreadsheet also tracks deficiencies identified from other sources, such as GAO, DOD OIG, and the Naval Audit Service. 
During our audit, we analyzed the tracking spreadsheet provided by the Navy in July 2016 and identified differences between the audit reports and deficiency statuses listed on the Navy’s tracking spreadsheet and those identified by GAO, the DOD OIG, and the Naval Audit Service. For example, our analysis of the DOD OIG audit reports listed on the Navy’s July 2016 tracking spreadsheet found that a DOD OIG report issued in fiscal year 2015, with 5 open financial management-related recommendations, was omitted from the Navy’s tracking spreadsheet. Our analysis of the tracking spreadsheet also found that the Navy was tracking 9 open financial management-related recommendations from three DOD OIG reports that were issued in fiscal year 2015. However, a report we received from the DOD OIG’s recommendation tracking system showed that the DOD OIG was tracking 10 open financial management-related recommendations from these same three reports. With regard to the DOD OIG report that was not entered into the tracking spreadsheet, Navy officials told us that they were focused on addressing the notice of findings and recommendations and did not enter this report. A Navy official also told us that going forward the Navy will review and update the tracking spreadsheet to account for this DOD OIG report and will also review other reports to help ensure the completeness of its tracking spreadsheet. With regard to the status of DOD OIG deficiencies the Navy was tracking, Navy officials told us that they aggregate the recommendations from each DOD OIG report into one data field on their tracking spreadsheet. Therefore, all three recommendations from a DOD OIG report will be listed in one data field (not three data fields) on the tracking spreadsheet.
This approach makes it difficult to determine the status of each recommendation separately and may result in inaccurate reporting by the Navy on the status of each audit recommendation. It will also be difficult to assess the Navy’s progress in addressing those DOD OIG recommendations that are identified in the DOD OIG report as one recommendation yet have multiple steps listed within that recommendation. We believe that the omission of the DOD OIG audit report and the inconsistencies in the status of the deficiencies reported by the Navy on its tracking spreadsheet occurred because the Navy’s process for obtaining, consolidating, monitoring, and updating its audit findings from the DOD OIG, which involves conducting a monthly online search of audit reports, is not properly designed. Relying on a monthly online search of DOD OIG audit reports without periodically confirming with the external auditor (the DOD OIG in this example) that the (1) list of reports is complete and (2) status of audit findings and recommendations being tracked is consistent could result in incomplete and inconsistent information being reported on the Navy’s tracking spreadsheet. Furthermore, according to the Navy’s draft guidance, its process for gathering and monitoring audit findings from GAO and the Naval Audit Service is designed using the same approach (i.e., a monthly online search) and could therefore also result in similar incomplete and inconsistent information being reported on the Navy’s tracking spreadsheet. Without detailed guidance and specific procedures in place for confirming and validating the completeness and consistency of the status of the financial management-related deficiencies it is tracking, the Navy is at risk of reporting incomplete and inconsistent information both internally and externally.
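One way to avoid both problems is to keep one record per recommendation, rather than one aggregated field per report, and to periodically reconcile per-report open counts against the external auditor's tracking system. This is a hypothetical sketch; the report numbers and statuses are invented:

```python
# Hypothetical sketch: track each recommendation in its own record so its
# status can be determined separately, then reconcile per-report open
# counts against the external auditor's tracking system.
# Report numbers and statuses below are invented for illustration.
internal = {
    "DODIG-2015-100": {"Rec 1": "open", "Rec 2": "closed"},
    "DODIG-2015-101": {"Rec 1": "open"},
}
auditor_open_counts = {"DODIG-2015-100": 1, "DODIG-2015-101": 2}

def count_mismatches(internal, auditor_open_counts):
    """Return reports whose internal open count disagrees with the auditor's."""
    mismatches = {}
    for report, expected in auditor_open_counts.items():
        actual = sum(1 for status in internal.get(report, {}).values() if status == "open")
        if actual != expected:
            mismatches[report] = {"internal": actual, "auditor": expected}
    return mismatches

print(count_mismatches(internal, auditor_open_counts))
```

A reconciliation of this kind would have flagged the 9-versus-10 discrepancy with the DOD OIG's recommendation tracking system described above.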
If the Navy does not identify and track the status of the complete universe of its unresolved deficiencies related to financial management, it cannot provide reasonable assurance that all such deficiencies will be addressed in a timely manner, which can ultimately affect the reliability of its financial information and the auditability of its financial statements. As the Navy progresses toward a full financial statement audit for fiscal year 2018, completely identifying and consistently tracking reported audit results will become even more important, as audit coverage expands and additional deficiencies may be identified. The Navy drafted its EPR Program: Deficiency Prioritization Standard Operating Procedures (SOP) with details on its methodology for prioritizing audit findings and recommendations by weighing various factors, including:
- the deficiency’s source;
- deficiency type;
- organizational impact;
- whether it is a FIAR deal-breaker;
- whether it is financial or information technology related;
- whether compensating controls exist; and
- materiality, or the potential financial statement dollar amount affected by the deficiency.
According to the draft SOP, the EPR prioritization team will evaluate each audit finding and recommendation to identify high-priority deficiencies for remediation. As part of this process, the EPR team defines key deficiency data elements, prioritization criteria, methodology, scoring, risk factors, and qualitative considerations to assess what factors to include in the final prioritization determination. Our analysis of the tracking spreadsheet provided in July 2016 showed that the Navy had identified and applied three prioritization categories (high, medium, and low) to 82 percent of the recommendations from other sources that were being tracked.
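A weighted-factor methodology like the one the draft SOP describes can be sketched as follows; the weights and score cutoffs here are illustrative assumptions, not the Navy's actual scoring model:

```python
# Hypothetical sketch of a weighted prioritization score over the kinds of
# factors the draft SOP lists. The weights and cutoffs are illustrative
# assumptions, not the Navy's actual methodology.
WEIGHTS = {
    "fiar_deal_breaker": 5,
    "material": 4,
    "no_compensating_controls": 3,
    "it_related": 2,
    "cross_organizational": 2,
}

def priority(deficiency):
    """Score a deficiency by its factors and bin it into high/medium/low."""
    score = sum(weight for factor, weight in WEIGHTS.items() if deficiency.get(factor))
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(priority({"fiar_deal_breaker": True, "material": True}))      # high
print(priority({"it_related": True, "cross_organizational": True}))  # medium
```

The advantage of an explicit scoring model over ad hoc judgment is that two analysts evaluating the same deficiency reach the same category, which supports the consistent application the draft SOP aims for.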
Our review of the template that the Navy is using to develop its CAPs and the information presented on the tracking spreadsheet found that both of these documents included the elements recommended in the Implementation Guide for OMB Circular A-123. The template and tracking spreadsheet both included a description of the deficiency, officials responsible for addressing the deficiency, interim milestones for the various activities and steps to address that deficiency, and validation dates of when the CAP process is complete according to management. Although the template and tracking spreadsheet do not include specific outcome measures for assessing the effectiveness of the corrective actions, we found that the Navy does have a process for developing and reporting outcome measures as outlined in its draft reporting guidance. According to the Navy’s draft EPR Program: Reporting and Reporting Metrics SOP, outcome measures (or operational reporting metrics) will be continuously collected and stored in the program’s EPR Program SharePoint Tool and will be the data source used to create both periodic and ad hoc reports as needed. Testing results and other pertinent data derived from the EPR’s program activities will be collected in a standardized format and stored in the tool. According to the draft reporting SOP, program metrics that are tracked may include, but are not limited to, notice of findings and recommendations response statuses, CAP action results, associated financial impact of deficiencies, and any additional reporting information relevant to effectively presenting the CAPs. This draft SOP also provides details regarding the process for approving any necessary thresholds that reporting metrics or other data should be measured against for report creation purposes. 
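An outcome measure of this kind ultimately reduces to comparing a collected metric against an approved threshold. A minimal sketch, assuming a hypothetical 90 percent pass-rate threshold (the threshold value is invented, not one the Navy has approved):

```python
# Hypothetical sketch: compare a collected reporting metric against an
# approved threshold, in the spirit of the draft reporting SOP.
# The 90 percent pass-rate threshold is an invented example.
def validation_status(tests_passed, tests_run, threshold=0.90):
    """Report whether a CAP's validation pass rate meets the approved threshold."""
    pass_rate = tests_passed / tests_run
    return {"pass_rate": round(pass_rate, 2),
            "meets_threshold": pass_rate >= threshold}

print(validation_status(27, 30))  # {'pass_rate': 0.9, 'meets_threshold': True}
print(validation_status(20, 30))
```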
In addition to the draft reporting metric SOP, the Navy has drafted its EPR Program: Corrective Action Plan Process SOP, which describes procedures for adding certain data elements into the CAP template and tracking spreadsheet. This SOP describes how the information flows to and from the offices of primary responsibility or action officers, system owners, and resource managers responsible for the CAP development process. Our review of the design of the Navy’s CAP development process found that it is consistent with guidance in the Implementation Guide for OMB Circular A-123. The Navy’s approach to CAP development involves a phased approach (i.e., planning and design, implementation, validation, and closeout); a description of the root cause analysis; an analysis of alternative CAPs; whether system or process changes are required; whether policy updates are necessary; and whether additional resources are needed to implement the CAPs. If effectively implemented, the Navy’s design for developing CAPs should provide it with information on the needed steps or activities for addressing identified deficiencies. We found that the design for monitoring the status of corrective action implementation at the Navy involves several layers of management oversight and collaboration and various reporting activities. According to its draft EPR Guidebook, one of the Navy’s objectives for centralizing its remediation efforts and establishing the EPR Program is to facilitate a culture of collaboration, integration, and accountability across the Navy in support of its auditability milestones. To this end, the Navy has established its Senior Management Council at the most senior Navy management level comprising senior executive civilians and flag officers. 
According to its charter, the council is responsible for monitoring, assessing, and reporting to the Assistant Secretary of the Navy, Financial Management and Comptroller on the implementation of corrective actions to ensure that they are accurate and timely, and reporting the results of CAP implementation. At a minimum, this council meets quarterly each year, in part to review the Navy’s progress in correcting previously identified material weaknesses. In addition to periodic monitoring at the Navy’s senior management level, the Navy has identified other key personnel in its Office of Financial Operations who play critical roles in ongoing monitoring of its remediation efforts. These key personnel are the office of primary responsibility and the responsible action officer.
- Office of primary responsibility. The office of primary responsibility is a senior executive civilian or flag officer responsible for facilitating the collaboration and communication necessary for the engagement of senior leaders and major stakeholders supporting CAP development and resource identification as needed. This person is the primary point of contact and driver for remediating a deficiency and should have the knowledge to assess the root cause of the deficiency.
- Action officer. The action officer is generally more at the working level and is responsible for hands-on remediation of a deficiency, and should have a strong understanding of the mission needs and capabilities. This officer’s tasks include conducting a root cause analysis of the deficiency, implementing and sustaining the CAPs, and reporting on the CAPs’ status.
To facilitate collaboration for addressing its remediation activities, the Navy also holds recurring meetings attended by the EPR Program team and Navy senior leadership to discuss CAP progress and milestones accomplished.
According to Navy officials, the purpose of these meetings is to help drive accountability for CAPs, address potential challenges, and increase transparency in the EPR process. Navy officials told us that the EPR team meets regularly to assess prioritized deficiencies for action, discuss updates on CAP progress, and identify risks as reported by the office of primary responsibility. According to Navy officials, monthly meetings are also held with the Assistant Secretary of the Navy, Financial Management and Comptroller, and the Navy’s Office of Financial Operations senior management to review CAP status, approve CAP implementation, assess identified risks, and evaluate CAP mitigation plans as needed. As discussed earlier, to provide guidance on the Navy’s reporting activities on the status of its CAPs, the Navy drafted an EPR Program: Reporting and Reporting Metrics SOP. This SOP establishes guidelines, policies, and procedures for the EPR Program to follow when creating reports and reporting metrics across program activities. According to this draft SOP, performing these procedures will help provide a high level of data integrity and minimize gaps of information across the Navy. The draft SOP also states that all of the aggregated CAP, deficiency, and findings and recommendations data will be stored in the EPR Program SharePoint Tool. This tool will serve as the source for all reporting and reporting metrics required by the EPR Program and will assist the EPR program manager in developing accurate reports and reporting metrics, when needed. The Navy also drafted the EPR Program: CAP Validation SOP that establishes guidelines, policies, and procedures to validate CAPs and deficiencies through testing to determine if deficiencies have been remediated and the corrective actions are operating effectively. This SOP provides details on the CAP validation’s objectives, roles and responsibilities, and procedures. 
Results from the CAP validation process are compiled and reported in a CAP Testing Results Report and reported to the Navy’s Office of Financial Operations. This report includes the testing scope, pass rates, and recommendations for improvement. The Navy’s draft guidance provides details on the roles and responsibilities, level of collaboration needed, and reporting framework that if implemented effectively should allow Navy management to sufficiently monitor implementation progress of its CAPs. The Air Force did not have a comprehensive process in place to identify and track all financial management-related findings and recommendations. The Air Force had written policies and procedures providing limited information for selected recommendations, such as those from the Air Force Audit Agency (AFAA), but the policies did not discuss the development of CAPs. The Air Force’s written policies and procedures did not provide specific details to support how (1) recommendations from all audit sources are identified and prioritized for tracking purposes, (2) CAPs are developed for these recommendations, and (3) CAPs are implemented and monitored. The Air Force provided guidance in the June 2016 CAPs Process Guide describing the Air Force’s process for developing and implementing CAPs to remediate self-identified deficiencies and its over 200 findings and recommendations issued by the IPA during the fiscal year 2015 Budgetary Schedule audit. This guidance describes at a high level the CAP development and implementation process, responsible organizations/stakeholders, and the steps required of each responsible party. However, even this guidance did not provide the information to resolve all the limitations we identified. The Air Force did not design comprehensive policies and procedures for how it identifies and tracks all of its financial management-related findings and recommendations. 
While the Air Force had developed a mechanism for identifying and tracking the findings and recommendations from AFAA, self-identified deficiencies, and its fiscal year 2015 Budgetary Schedule audit, it had not established a similar process for identifying and tracking most other findings and recommendations from other audits, such as those conducted by the DOD OIG and GAO. Therefore, the Air Force did not track the complete universe of its financial management-related findings and recommendations from all sources. Moreover, the Air Force had policies and procedures for following up on DOD OIG and GAO reports, but these policies and procedures—which are described in Air Force Instructions 65-402, Financial Management: Relations with the Department of Defense, Office of the Assistant Inspector Generals for Auditing and Analysis and Follow Up, and 65-401, Financial Management: Relations with the Government Accountability Office—primarily discussed the audit report processes and applicable roles and responsibilities, but did not assign the tracking of the DOD OIG and GAO audit findings and recommendations to anyone in the Air Force. Instead, the Air Force relies on the DOD OIG to track and follow up on any Air Force-related DOD OIG and GAO findings and recommendations. As noted previously, in accordance with the Implementation Guide for OMB Circular A-123, the Air Force is responsible for tracking and addressing any recommendations that the DOD OIG or GAO made to it. Unless the Air Force actively tracks findings and recommendations and reasonably assures that it has a complete universe of findings and recommendations from all audit sources and has assigned responsibility for addressing them, it increases the risk that it may not adequately address and remediate all deficiencies that could hinder it from becoming fully audit ready.
Without specific procedures in place for identifying and tracking financial management-related audit findings and recommendations from all sources, the Air Force is at risk of not addressing the complete universe of its findings and recommendations. If the Air Force does not identify and track the complete universe of its unresolved deficiencies related to financial management, it cannot provide reasonable assurance that all such deficiencies will be addressed in a timely manner, which can ultimately affect the reliability of its financial information and the auditability of its financial statements. The Air Force policies and procedures for prioritizing its financial management-related audit findings and recommendations do not reasonably assure that all findings and recommendations are appropriately prioritized. This is because the CAPs Process Guide includes limited criteria for prioritizing Budgetary Schedule audit findings and self-identified deficiencies. The guidance also describes different priority designations (Priorities 1, 2, and 3) for the Budgetary Schedule audit findings and recommendations and self-identified financial management-related deficiencies. However, the guidance is silent on other financial management-related deficiencies from other sources, such as AFAA, DOD OIG, and GAO. Additionally, the Air Force guidance does not specify prioritizing recommendations based upon financial management impact but rather upon the source of the finding. For example, a self-identified deficiency is assigned a Priority 1 if the finding and recommendation align with an Air Force team working in specific areas. The Implementation Guide for OMB Circular A-123 and FIAR Guidance state that the extent to which corrective actions are tracked by the agency should be commensurate with the severity of the deficiency. 
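The severity-commensurate principle can be illustrated with a simple mapping from deficiency severity to tracking cadence; the severity labels follow the report's terminology, but the review intervals are invented for illustration:

```python
# Hypothetical sketch: follow-up cadence set commensurate with severity,
# per the Implementation Guide for OMB Circular A-123 principle cited
# above. The interval values are illustrative, not prescribed anywhere.
REVIEW_INTERVAL_DAYS = {
    "material weakness": 14,
    "significant deficiency": 30,
    "control deficiency": 90,
}

def review_interval(severity):
    """Return how often (in days) a finding of this severity is revisited."""
    return REVIEW_INTERVAL_DAYS.get(severity.lower(), 90)

print(review_interval("Material Weakness"))  # 14
```

Under such a scheme, tracking effort follows financial management impact rather than the source of the finding, which is the gap the Air Force guidance leaves open.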
Given the significant number of findings and recommendations that the Air Force has received from the various audit entities, it is important for the Air Force to appropriately prioritize them to help achieve audit readiness. The Air Force has policies and procedures for following up on AFAA, DOD OIG, and GAO reports that are set forth in Air Force Instructions 65-403, Financial Management: Followup on Internal Air Force Audit Reports, and Instructions 65-402 and 65-401, but these also do not describe prioritization. Without sufficiently detailed policies and procedures for identifying and prioritizing audit findings, the Air Force is at increased risk of not addressing its most critical financial management weaknesses and, consequently, of not achieving audit readiness. The Air Force does not have detailed policies and procedures for developing CAPs for its self-identified financial management-related deficiencies and financial management-related audit findings and recommendations issued by the fiscal year 2015 Budgetary Schedule audit IPA, AFAA, the DOD OIG, and GAO. The CAPs Process Guide describes the Air Force’s process for developing CAPs to remediate both self-identified deficiencies and audit findings and recommendations issued during the fiscal year 2015 Budgetary Schedule audit. The guidance describes at a high level the process for developing CAPs and includes root causes, milestones, validation, and testing, but it is not sufficiently detailed to include all of the elements recommended by the Implementation Guide for OMB Circular A-123. For example, while there are several start and end dates for interim milestones within the CAPs guidance, the guidance does not include a provision for the CAPs to include the targeted corrective action dates (the dates of management follow-up). Also, the guidance does not discuss outcome measures for assessing the effectiveness of the corrective actions.
During our audit, we reviewed a status summary of the CAPs for the fiscal year 2015 Budgetary Schedule financial management-related findings and recommendations and found that the CAPs did not include all elements recommended by the Implementation Guide for OMB Circular A-123, such as the targeted corrective action dates (the dates of management follow-up) or outcome measures. The Air Force also has limited policies and procedures for following up on audit findings and recommendations from AFAA, the DOD OIG, and GAO, which are described in Air Force Instructions 65-403, 65-402, and 65-401. However, these documents do not provide detailed guidance on developing CAPs. While the Air Force has an Audit Recommendation Implementation Tracking policy memo and attached Air Force Audit Implementation Plan Guidance that pertains to AFAA recommendations, these documents only describe the process at a high level and do not provide specific details as to how CAPs for these audit findings and recommendations would be developed and monitored. Without complete and sufficiently detailed policies and procedures for designing CAPs, the Air Force is at increased risk of developing CAPs that lack elements necessary for ensuring accountability and for effectively monitoring their development and implementation. Also, without CAP information on AFAA, DOD OIG, and GAO financial management-related findings and recommendations, it is not clear how Air Force management can assess the progress being made toward achieving audit readiness. The Air Force does not have detailed policies and procedures for monitoring progress against the CAPs for its financial management-related audit findings and recommendations. The CAPs Process Guide describes at a high level the Air Force’s process for monitoring and reporting on the CAPs’ implementation to remediate both self-identified deficiencies and audit findings and recommendations issued during the fiscal year 2015 Budgetary Schedule audit.
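The CAP elements discussed above can be illustrated with a minimal record structure. This is a sketch under the assumption that a CAP carries the kinds of data the Implementation Guide for OMB Circular A-123 recommends; the field and method names are hypothetical, not an official DOD or Air Force schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Milestone:
    description: str
    start: date
    end: date
    complete: bool = False

@dataclass
class CorrectiveActionPlan:
    cap_id: str
    finding_id: str
    root_cause: str
    responsible_office: str
    milestones: list = field(default_factory=list)
    # The two elements this report found missing from the reviewed CAPs:
    targeted_corrective_action_date: Optional[date] = None  # management follow-up date
    outcome_measures: list = field(default_factory=list)    # how effectiveness is assessed

    def missing_elements(self):
        """Return the recommended elements that this CAP does not yet include."""
        missing = []
        if not self.milestones:
            missing.append("interim milestones")
        if self.targeted_corrective_action_date is None:
            missing.append("targeted corrective action date")
        if not self.outcome_measures:
            missing.append("outcome measures")
        return missing
```

A completeness check of this kind could flag, at CAP creation time, the gaps that the report observed only after the fact in the status summary.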
The guidance provides a brief description of the monitoring to be performed by the responsible organization, but the guidance is not complete. For example, the guidance describes that regular updates on the progress of the CAPs should be provided to the Air Force financial management CAP lead by the office of primary responsibility. However, the guidance does not discuss specific details recommended by the Implementation Guide for OMB Circular A-123, such as the role of the Air Force’s Senior Assessment Team (Air Force Financial Improvement Executive Steering Committee) in resolving deficiencies, setting targeted corrective action dates, and determining outcome measures. Without complete policies and procedures describing all aspects of the Air Force’s CAP monitoring process, the Air Force is at increased risk that CAP implementation will not be adequately monitored; this could result in some CAPs not receiving adequate attention and therefore not being implemented in a timely manner, which could negatively affect the Air Force’s efforts to achieve audit readiness. Although the DOD Comptroller’s office has established several elements of a department-wide audit readiness remediation process, it does not have the comprehensive information on the status of all CAPs throughout the department needed to fully monitor and report on the progress being made to resolve financial management-related deficiencies that preclude DOD from being auditable. Specifically, (1) the DOD Comptroller’s office is not able to fully assess the military services’ progress because it does not obtain complete, detailed information on all of their CAPs related to critical capabilities—those identified in the FIAR Guidance as necessary to achieve auditability—and (2) reports to internal stakeholders and external stakeholders, such as the DOD OIG, OMB, GAO, and Congress, on the status of audit readiness do not provide comprehensive information.
As a consequence, the DOD Comptroller’s office is unable to prepare and provide department-wide, comprehensive information on the status of all CAPs related to critical capabilities that are necessary to achieve auditability. Having comprehensive DOD-wide information on the CAPs related to critical capabilities will take on additional importance as DOD moves forward with audits that have broader scopes beyond its budgetary statements, potentially leading to other findings being identified. The Implementation Guide for OMB Circular A-123 states that the senior management council, or similar forums, has ownership and accountability for resolving deficiencies. Such forums should use CAPs as a road map for discussion as well as for determining when sufficient action has been taken to declare that a deficiency has been corrected. The Implementation Guide for OMB Circular A-123 also recommends that a senior assessment team communicate the status of CAPs developed for material weaknesses to the senior management council on a regular basis. The senior management council should be responsible for determining whether the progress is sufficient or whether additional action must be taken to expedite the remediation process. DOD Comptroller officials told us that the FIAR Governance Board is considered DOD’s equivalent of a senior management council as described in the Implementation Guide for OMB Circular A-123. Further, the Implementation Guide for OMB Circular A-123 recommends that an agency establish a CAP framework to facilitate stakeholder oversight and ensure accountability for results. For example, the agency could prepare a CAP management summary that is made available to external stakeholders and to the agency senior management council.
In addition, the April 2016 FIAR Guidance identifies the following seven capabilities as being critical to achieving audit readiness and therefore the highest priorities for corrective actions: (1) produce a universe of transactions; (2) reconcile its Fund Balance with Treasury; (3) provide supporting documentation for adjustments to its financial records; (4) validate the existence, completeness, rights, and obligations of its assets; (5) establish processes to manage and value its assets correctly; (6) establish an auditable process for estimating and recording its environmental and disposal liabilities; and (7) implement critical information technology controls for its financial systems. DOD Comptroller officials told us that as part of their process to update the FIAR Guidance, generally about once a year, they also consider whether any additional critical capabilities should be identified in the guidance. Because the critical capabilities are the DOD Comptroller’s highest priorities for corrective actions, it is important that they are updated to reflect the audit findings that result from DOD’s audit readiness efforts. For example, one of the seven critical capabilities currently addresses the need for supporting documentation for adjustments to financial records. However, as stated previously, the IPAs for all three military services noted the lack of adequate supporting documentation for transactions, not just adjustments, as a reportable finding on their Budgetary Schedule audits, which also contributed to each IPA’s disclaimer of opinion. Appropriate documentation for financial transactions allows the military services to support financial statement line items and allows auditors to test the line items. Expanding the critical capabilities to reflect significant audit findings will increase the ability of the DOD Comptroller to monitor and report on the CAPs most likely to affect audit readiness. 
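A consolidated CAP management summary of the kind the Implementation Guide recommends could be as simple as a roll-up of CAP status by critical capability across components. The seven capability labels below paraphrase the April 2016 FIAR Guidance; the summary function and its input format are hypothetical illustrations, not a DOD tool or reporting schema.

```python
from collections import Counter

# Paraphrased from the seven critical capabilities identified in the
# April 2016 FIAR Guidance; wording shortened for labeling purposes.
CRITICAL_CAPABILITIES = [
    "Universe of transactions",
    "Fund Balance with Treasury reconciliation",
    "Supporting documentation for adjustments",
    "Existence, completeness, rights, and obligations of assets",
    "Asset management and valuation",
    "Environmental and disposal liabilities estimation",
    "Critical IT controls for financial systems",
]

def cap_management_summary(caps):
    """Roll up CAP statuses by critical capability across all components,
    giving management one department-wide view per capability.

    `caps` is an iterable of (component, capability, status) tuples --
    an illustrative input format chosen for this sketch.
    """
    summary = {name: Counter() for name in CRITICAL_CAPABILITIES}
    for component, capability, status in caps:
        summary[capability][status] += 1
    return summary
```

Even this simple roll-up makes gaps visible: a capability with no associated CAPs, or with many open CAPs across services, stands out in a single department-wide view.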
DOD has designed a department-wide process to monitor and report on the department’s audit readiness and remediation efforts. According to DOD Comptroller officials, the department-wide process consists primarily of the following procedures: (1) obtaining periodic updates on summary-level interim milestones and percentages of completion for major processes related to critical capabilities (but not for specific CAPs related to these critical capabilities), (2) attending various meetings with the military services and other defense organizations to discuss and obtain high-level updates on the status of CAPs and other audit readiness efforts, (3) obtaining and tracking the status of all findings and recommendations from service providers and other defense organizations, and (4) reporting the status of audit readiness efforts to Congress and other stakeholders in semiannual FIAR Plan Status Reports. According to DOD Comptroller officials, the offices of the DCMO and the DOD Comptroller are the main parties at the department level that are involved in monitoring financial management improvement and audit readiness efforts throughout DOD. To monitor the status of the military services’ remediation efforts, officials from both of these offices attend bimonthly (i.e., every other month) FIAR Governance Board meetings to discuss and obtain high-level updates on the status of military services’ CAPs and other audit readiness efforts. In addition, DOD Comptroller officials attend other, more frequent meetings (e.g., audit committee) with the military services regarding their audit readiness efforts. During these meetings, DOD officials receive briefing slides that provide high-level summaries of the military services’ audit readiness, including some information on the status of the CAPs relating to the Budgetary Schedule audits that each service, using its respective criteria, has determined to be high priority.
DOD Comptroller officials told us that they compile some of this summary-level information, such as the percentages of completion for critical capabilities by various DOD components, based on more detailed information that they receive from the components. However, even this more detailed information is based on some summarization of tasks that are part of the critical capabilities, as well as the percentages of completion for these tasks. Except for certain findings and recommendations related to DOD-wide issues, DOD Comptroller officials said that they do not receive detailed information on individual findings, recommendations, and CAPs from the military services. Although the DOD Comptroller developed a Notice of Findings and Recommendations Tracking Tool, it is used primarily to track the audit readiness findings and recommendations for the other defense organizations and findings and recommendations for the military services that pertain to DOD-wide issues that require action at the department level. DOD Comptroller officials said that because of resource constraints, they do not plan to obtain or track the detailed status of most of the military services’ findings, recommendations, and CAPs, but instead will rely on the information provided for various oversight meetings to monitor the status of audit readiness efforts. The officials also explained that they have obtained some detailed findings and recommendations related to DOD-wide issues that they are using to help develop policies to address these issues. As new policies are issued, the military services will likely need to develop CAPs to implement the new policies. As described above, the DOD Comptroller receives some details about the status of the seven critical capabilities at the military services.
However, without obtaining all of the military services’ findings, recommendations, and related CAPs with consistent data elements pertaining to the seven critical capabilities, the DOD Comptroller’s office and other DOD officials could be limited in their ability to adequately monitor these issues. Further, without detailed, consistent, and timely updates (e.g., updates for the bimonthly FIAR Governance Board meetings) from the military services on the status of CAPs related to the critical capabilities, DOD officials will not have the consistent, readily available information needed to effectively monitor and report the status of these CAPs. Having consistent, readily available information on these types of CAPs will take on additional importance as DOD moves forward with more audits beyond its Budgetary Schedules, potentially leading to more findings being identified. DOD is also responsible for reporting the status of audit readiness and remediation efforts to its internal stakeholders and its external stakeholders, such as the DOD OIG, OMB, GAO, and Congress. For example, the 2010 NDAA required that DOD provide recurring, semiannual reports by May 15 and November 15 on the status of the department’s implementation of the FIAR Plan to congressional defense committees. Federal internal control standards also require that an entity’s management communicate with and obtain quality information from external parties through established reporting lines. In these communications, management should include information relating to the entity’s activities that affect the internal control system. DOD’s FIAR Plan Status Reports are the department’s primary mechanism for reporting the status of its audit readiness, including audit remediation efforts, to Congress and other external parties. 
The reports include narratives on each military service’s audit readiness efforts, focusing on material weaknesses and the seven critical capabilities necessary to achieve audit readiness, and estimated completion dates for each of these areas. Over the years, as the military services have set target dates for asserting audit readiness, the target dates have been delayed, which we have reported increases the risk that DOD may not complete its audit readiness efforts within planned time frames, thereby affecting DOD’s ability to meet the statutory requirements. For example, we recently reported that the military services’ target dates for asserting audit readiness for real property were delayed by at least 2 years. In addition, we have reported that the military services’ Statement of Budgetary Resources (SBR) milestone dates had been delayed. Further, although the FIAR Plan Status Reports appropriately provide summary-level information, they do not provide the detailed information that might be needed by some stakeholders. For example, even though the reports contain summary-level information regarding milestones for the seven critical capabilities, they do not include the detailed actions and interim milestones for the CAPs related to critical capabilities, which could provide a more complete picture of the status of corrective actions. The Implementation Guide for OMB Circular A-123 recommends that such detailed information about CAPs be provided to external stakeholders upon request. However, if DOD does not routinely obtain consistent and detailed information from the military services on the status of their CAPs, it cannot readily provide this type of information to stakeholders when requested and must instead rely on inefficient methods, such as data calls to collect detailed information for stakeholders. As part of our past audits of the consolidated financial statements of the U.S.
government, we have observed significant progress being made in improving federal financial management government-wide. For example, nearly all of the 24 Chief Financial Officers (CFO) Act agencies received unmodified or “clean” opinions on their fiscal year 2015 financial statements, up from 6 CFO Act agencies that received clean audit opinions in 1996 when the CFO Act agencies were first required to prepare audited financial statements. Notably, the Department of Homeland Security (DHS) was able to overcome its numerous material weaknesses that had prevented its financial statements from being auditable. Since its creation in fiscal year 2003, when 22 separate agencies were brought together to form the new department, DHS was able to move from receiving disclaimers of opinion on its financial statements to first achieving an unqualified or “clean” opinion on all of its financial statements for fiscal year 2013, which has continued through fiscal year 2016. DHS policy on corrective action plans, issued in fiscal year 2012, indicates that its process included maintaining a department-wide accumulation of significant CAPs from all of its components; using standard data elements; and providing monthly updates on the status of CAPs to the CFO, and updates on an as-needed basis to the OIG and the Secretary. As we have previously reported, many of the planned audit readiness actions and milestones reported by DOD and its components in FIAR Plan Status Reports have not been realized.
Given the short amount of time remaining before the statutory deadline of March 31, 2019, for submitting to Congress the results of an audit of its fiscal year 2018 financial statements, having complete, reliable, and detailed information on the department-wide status of CAPs related to critical capabilities is essential for DOD and its stakeholders so that they can measure and communicate DOD’s progress in addressing the financial management deficiencies and determine if additional actions are necessary to expedite the remediation process. DOD is moving forward with its efforts to achieve the goal of being audit ready by September 30, 2017, and as part of this effort, had the Army, Navy, and Air Force undergo their first audits of their Schedules of Budgetary Activity for fiscal year 2015. The IPAs for all three military services issued disclaimers of opinion on the respective services’ fiscal year 2015 Schedules of Budgetary Activity and identified material weaknesses in internal control. In response, the Army, Navy, and Air Force have all begun taking significant steps toward resolving their material weaknesses, such as developing and implementing CAPs to address the IPAs’ recommendations. However, none of the military services have complete policies and procedures to identify and sufficiently track all of their financial management-related findings and recommendations reported by audits, and only the Navy has drafted policies and procedures that effectively prioritize, develop, and monitor the status of its CAPs’ implementation. The DOD Comptroller and DCMO have also begun to take a more active role in monitoring the status of the military services’ CAPs to ensure that adequate progress is being made. However, DOD’s process for monitoring and reporting on its audit remediation efforts lacks some of the information recommended by the Implementation Guide for OMB Circular A-123. 
Specifically, DOD does not obtain comprehensive information from the military services on the status of their CAPs, such as interim milestones, completion dates, and other indicators or targets that facilitate management’s ability to fully determine how the resolution of CAPs is progressing. This type of detailed information is critical for DOD management and its external stakeholders to evaluate the progress that DOD military services are making in correcting the deficiencies that are preventing the department from obtaining an audit opinion on its financial statements. The lack of comprehensive information on the status of CAPs increases DOD’s risk that it will not be able to fully, timely, and efficiently correct its long-standing deficiencies. To improve processes for identifying, tracking, remediating, and monitoring financial management-related audit findings and recommendations, we are making the following eight recommendations. 1. We recommend that the Secretary of the Army direct the Internal Review Directorate under the Assistant Secretary of the Army, Financial Management and Comptroller, to develop written policies and procedures for all financial management-related audit findings and recommendations under its purview that include the following: how the status of the recommendations will be tracked; the process and criteria to be followed for prioritizing the findings and recommendations; the process for developing CAPs to remediate the findings and recommendations, including the detailed CAP elements recommended by the Implementation Guide for OMB Circular A-123; and the process for monitoring the status and progress of the CAPs, including the documentation to be maintained for monitoring CAP status and any actions to be taken if a lack of progress is found. 2.
We recommend that the Secretary of the Army direct the Accountability and Audit Readiness Directorate under the Assistant Secretary of the Army, Financial Management and Comptroller, to enhance the directorate’s policies and procedures for (1) tracking and prioritizing all financial management-related audit findings and recommendations under its purview and (2) developing and monitoring CAPs for all such recommendations so that they include sufficient details, such as the criteria used to prioritize the CAPs, the recommended CAP elements, and the process for monitoring and documenting the progress and status of CAPs. 3. We recommend that the Secretary of the Navy, when finalizing the Navy’s policies and procedures for identifying and tracking its CAPs to remediate financial management-related audit findings and recommendations, enhance this guidance so it includes detailed steps and specific procedures for confirming and validating the completeness and accuracy of the status of these audit findings and recommendations. 4. We recommend that the Secretary of the Air Force design and document a comprehensive process to ensure that the complete universe of all financial management-related findings and recommendations from all audit sources is identified and tracked. 5. We recommend that the Secretary of the Air Force update the Air Force’s written policies and procedures for prioritizing financial management-related audit findings and recommendations from all audit sources and for developing and monitoring CAPs so that they include sufficient details. These procedures should include the following details: The process to be followed for prioritizing the financial management-related findings and recommendations from audit sources. The guidance for developing CAPs for all financial management-related audit findings and recommendations from all audit sources to include complete details, such as the elements recommended by the Implementation Guide for OMB Circular A-123.
The process for monitoring the status of the CAPs for all financial management-related audit findings and recommendations from all audit sources, including the documentation to support any corrective actions taken, as recommended by the Implementation Guide for OMB Circular A-123. 6. To improve DOD management’s process for monitoring the military services’ audit remediation efforts and to provide timely and useful information to stakeholders as needed, we recommend that the Secretary of Defense direct the Secretary of the Army, the Secretary of the Navy, and the Secretary of the Air Force to prepare and submit to the Under Secretary of Defense (Comptroller), on at least a bimonthly basis for availability at the FIAR Governance Board meetings, a summary of key information included in the CAPs that at a minimum contains the data elements recommended by the Implementation Guide for OMB Circular A-123 for each CAP related to critical capabilities for achieving audit readiness. 7. To reasonably assure that DOD management and external stakeholders have a comprehensive picture of the status of corrective actions needed for audit readiness throughout the department, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to prepare a consolidated CAP management summary on a bimonthly basis that includes the data elements referred to above on the status of all CAPs related to critical capabilities for the military services and for the service providers and other defense organizations. 8. 
To facilitate the development of a consolidated CAP management summary and the ability to efficiently respond to stakeholder requests, we recommend that the Under Secretary of Defense (Comptroller) develop and implement a centralized monitoring and reporting process that at a minimum (1) captures department-wide information on the military services’ and other defense organizations’ CAPs related to critical capabilities, including the standard data elements recommended in the Implementation Guide for OMB Circular A-123, and (2) maintains up-to-date information on the status of these CAPs. We provided a draft of this report to DOD and the military services for review and comment. In their written comments, reprinted in appendix IV, the Army, Navy, and Air Force concurred with our respective recommendations to them, while DOD concurred with one recommendation and partially concurred with two other recommendations that we made to it. The Army concurred with our recommendation that its Internal Review Directorate develop written policies and procedures for all financial management-related audit findings and recommendations under its purview. The Army stated that the Internal Review Directorate has completed updating its policies and procedures to include how the status of findings and recommendations will be tracked and prioritized as well as how CAPs will be developed and monitored. The Army also concurred with our recommendation that its Accountability and Audit Readiness Directorate enhance its policies and procedures for (1) tracking and prioritizing all financial management-related audit findings and recommendations under its purview and (2) developing and monitoring CAPs for all such recommendations so that they include sufficient details, such as the criteria used to prioritize the CAPs, the recommended CAP elements, and the process for monitoring and documenting the progress and status of CAPs. 
The Army stated that the Accountability and Audit Readiness Directorate has completed actions to enhance its current standard operating procedures to include (1) updating its CAP database and reporting tool, (2) documenting its reporting procedures, and (3) updating its CAP template to include additional elements recommended by the Implementation Guide for OMB Circular A-123. In addition, the Army stated that its policies and procedures include steps to incorporate external financial management-related audit findings assigned to the Accountability and Audit Readiness Directorate by the Internal Review Directorate and that the existing process the Army uses to prioritize findings and the related CAPs and to monitor the progress and status of CAPs has been documented. The Navy concurred with our recommendation to enhance its guidance to include detailed steps and specific procedures for confirming and validating the completeness and accuracy of the status of financial management-related audit findings and recommendations. The Navy stated that it is (1) recording new findings and recommendations on a weekly basis in its deficiency database, (2) reviewing historical audits to ensure that previous findings and recommendations are recorded, and (3) collaborating with audit agencies to establish a process to reconcile the status of recommendations to ensure that its deficiency database accurately reports open and closed recommendations. The Navy also stated that these processes would be documented and implemented by January 31, 2017. The Air Force concurred with our recommendation that the Air Force design and document a comprehensive process to ensure that the complete universe of all financial management-related findings and recommendations from all audit sources is identified and tracked. 
The Air Force described planned actions that it will take to address the recommendation, including revising the existing process for identifying and tracking all financial management-related findings and recommendations from all audit sources and coordinating with all stakeholders. The Air Force plans to implement this recommendation by January 31, 2018. The Air Force also concurred with our recommendation to update its written policies and procedures for prioritizing financial management-related audit findings and recommendations from all audit sources, and for developing and monitoring CAPs so that they include sufficient details. The Air Force stated that it will revise its existing written policies and procedures to include (1) prioritizing findings and recommendations and (2) providing guidance for developing detailed and actionable CAPs and for monitoring the status and progress toward implementing and closing the CAPs, as recommended by the Implementation Guide for OMB Circular A-123. The Air Force plans to implement this recommendation by January 31, 2019. However, the Air Force’s planned implementation dates indicate that the changes to policies and procedures will not be in place before fiscal year 2018, the period in which the department-wide financial statements will be under audit. DOD concurred with our recommendation that the department direct the military services to prepare and submit, on at least a bimonthly basis, summaries of key information included in their CAPs that include, for each CAP related to critical capabilities, the data elements recommended by the Implementation Guide for OMB Circular A-123. Specifically, DOD stated that it is updating its template for the military services to use for reporting this information so that it will include the recommended standard data elements. In addition, it stated that the FIAR Guidance will be updated to explicitly state that the military services should include these data elements in their CAPs.
DOD partially concurred with our recommendation that the department direct the DOD Comptroller to prepare a bimonthly consolidated CAP management summary that includes the data elements outlined in the Implementation Guide for OMB Circular A-123 for all CAPs related to critical capabilities for the military services as well as for the service providers and other defense organizations. According to DOD, and as we stated in our report, the military services already provide summary-level updates on their critical capability CAPs at FIAR Governance Board meetings. It also stated that the template that is used to present CAPs to the FIAR Governance Board meetings at the summary level has been updated to align CAPs to critical capabilities. However, DOD’s response, while reiterating what is already being reported, does not address how all of the data elements from the Implementation Guide for OMB Circular A-123 will be summarized or otherwise reported for all CAPs pertaining to critical capabilities across the department, as we recommended. In addition, DOD stated that because the DOD Comptroller takes responsibility for maintaining, monitoring, and reporting on the status of CAPs for the service providers and other defense organizations and of DOD-wide issues, the Comptroller will also summarize this information. However, DOD’s response does not provide any further details about what information will be summarized, and as we note in the report, the Comptroller’s tracking does not include CAPs for the military services. Without developing a consolidated department-wide summary of CAPs, DOD will continue to lack a department-wide view of all CAPs pertaining to each critical capability. Therefore, we continue to believe that DOD needs to take actions to fully implement this recommendation.
DOD also partially concurred with our recommendation for the DOD Comptroller to develop and implement a centralized monitoring and reporting process that captures and maintains up-to-date information, including the standard data elements recommended in the Implementation Guide for OMB Circular A-123, for all CAPs department-wide that pertain to critical capabilities. In its response, DOD said that, as outlined in the military services' responses to our recommendations directed to them, the Army, Navy, and Air Force have agreed to take responsibility for developing, maintaining, and monitoring all CAPs at the level recommended by the Implementation Guide for OMB Circular A-123. Further, DOD stated that the information reported at FIAR Governance Board meetings, along with the CAP information maintained by the DOD Comptroller, provides the department the ability to efficiently respond to stakeholder requests for CAPs related to critical capabilities. As noted above, we acknowledge the important steps the military services have planned or taken to address our recommendations and improve their CAP monitoring processes. However, DOD's actions do not address our recommendation to develop a centralized reporting process to capture department-wide information on the military services' and other defense organizations' CAPs related to critical capabilities. As stated in our report, DOD does not routinely obtain consistent and detailed information from the military services on the status of their CAPs, and without such information it cannot readily provide it to stakeholders when requested and must instead rely on inefficient methods, such as data calls, to collect the detailed information. In addition, many of the planned audit readiness actions and milestones reported by DOD and its components in the FIAR Plan Status Reports have not been realized. 
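A consolidated department-wide summary of this kind could be as simple as grouping every component's CAPs by critical capability. The Python sketch below is a hypothetical illustration only: the capability labels, status values, and data layout are invented stand-ins for the standard data elements recommended by the Implementation Guide for OMB Circular A-123, not DOD's actual reporting format.

```python
from collections import defaultdict

# Invented sample data: each record stands in for one component CAP.
caps = [
    {"component": "Army", "capability": "Support journal vouchers", "status": "open"},
    {"component": "Navy", "capability": "Support journal vouchers", "status": "open"},
    {"component": "Air Force", "capability": "Asset existence and completeness", "status": "closed"},
]

def consolidated_summary(caps):
    """Roll up CAPs from all components into one view per critical capability."""
    summary = defaultdict(lambda: {"open": 0, "closed": 0})
    for cap in caps:
        summary[cap["capability"]][cap["status"]] += 1
    return dict(summary)

print(consolidated_summary(caps))
# {'Support journal vouchers': {'open': 2, 'closed': 0},
#  'Asset existence and completeness': {'open': 0, 'closed': 1}}
```

Such a roll-up would give the Comptroller the department-wide view of open CAPs per critical capability that the recommendation calls for, without resorting to ad hoc data calls.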
Given the short amount of time remaining before the statutory date of March 31, 2019, for submitting to Congress the results of an audit of DOD’s fiscal year 2018 financial statements, having complete and reliable, detailed information on the department-wide status of CAPs related to critical capabilities is essential for DOD and its stakeholders so that they can (1) measure and communicate DOD’s progress in addressing the financial management deficiencies and (2) determine if additional actions are necessary to expedite the remediation process. This type of detailed information is critical for DOD management and its external stakeholders to evaluate the military services’ progress in correcting the deficiencies that are preventing the department from obtaining an audit opinion on its financial statements. Moreover, the lack of comprehensive information on the status of CAPs increases DOD’s risk that it will not be able to fully, timely, and efficiently correct its long-standing deficiencies. Therefore, we continue to believe that DOD needs to take action to fully implement this recommendation. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense (Comptroller)/Chief Financial Officer; the Deputy Chief Financial Officer; the Director, Financial Improvement and Audit Readiness; the Secretary of the Army; the Secretary of the Navy; the Secretary of the Air Force; the Director of the Office of Management and Budget; and appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Our objectives were to (1) report the results of the audits of the fiscal year 2015 Schedules of Budgetary Activity (Budgetary Schedule) for the military services, (2) determine the extent to which each military service designed a process to address identified financial management-related findings and recommendations, and (3) determine the extent to which the Department of Defense (DOD) has designed a department-wide strategy to monitor and report on audit readiness remediation efforts. To address our first objective, we monitored the audit work of the military services’ independent public accountants (IPA) by attending status meetings, participating in site visits, and coordinating with DOD Office of Inspector General (OIG) officials to discuss the audits’ progress and challenges resulting from these first-year audits. We also reviewed documentation of the audit work conducted by the IPAs. We reviewed documentation related to planning, internal controls, testing, and reporting on the audits. This included sampling plans, risk assessments, test plans and summaries, audit opinions, and reports on internal control and compliance with laws and regulations. In addition, we reviewed the audit reports on the Army, Navy, and Air Force Budgetary Schedules along with other audit reports addressed to Army, Navy, and Air Force management that detailed audit findings and recommendations and the services’ responses to these audit reports. We reviewed the DOD OIG audit contracts (and contract modifications) for the Army, Navy, and Air Force and the related statements of work. We also reviewed the management representation letters from each military service, which contained management’s assertions about the reliability of its financial reporting in accordance with generally accepted accounting principles, as it related to the Fiscal Year 2015 Budgetary Schedules. 
To address our second objective, we obtained information on the IPAs’ findings and recommendations from the Budgetary Schedule audits as well as other existing open recommendations or findings from other sources related to financial management at the Army, Navy, and Air Force. We met with applicable military service personnel to determine what policies and procedures were designed to (1) identify and track open findings and recommendations from all sources; (2) prioritize open findings and recommendations by risk or other factors, such as audit impediments identified in the Financial Improvement and Audit Readiness guidance; (3) develop corrective action plans (CAP) to remediate findings and recommendations; and (4) monitor the status of the CAPs’ implementation to confirm that deficiencies were remediated. We compared the military services’ policies and procedures with guidance for CAPs in the Implementation Guide for OMB Circular A-123, Management’s Responsibility for Internal Control, appendix A, “Internal Control over Financial Reporting.” We also reviewed relevant documentation pertaining to how the military services were carrying out the aforementioned procedures. For the third objective, we held discussions with officials from the office of the DOD Deputy Chief Management Officer (DCMO), the military services’ respective DCMOs, and officials from the Office of the Under Secretary of Defense (Comptroller) to determine what department-wide strategy has been designed to monitor the military services’ development and implementation of CAPs, and what their roles and responsibilities were with respect to CAP oversight. We also reviewed DOD policies, procedures, and DOD management documentation to gain an understanding of how DOD management monitors the military services’ audit remediation activities. We conducted this audit from March 2015 to February 2017 in accordance with generally accepted government auditing standards. 
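The four-step process the military services are expected to follow (identify and track findings, prioritize them, develop CAPs, and monitor CAP implementation) can be sketched as a simple tracking structure. The Python below is a hypothetical illustration only; the class names, fields, and finding identifiers are invented for this example and do not reflect any actual DOD system.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Finding:
    finding_id: str                      # unique across all audit sources
    source: str                          # e.g., an IPA audit, GAO, or the DOD OIG
    description: str
    priority: int                        # 1 = highest assessed risk
    cap_milestones: list = field(default_factory=list)  # (milestone, target_date, status)
    closed: Optional[date] = None

class FindingsTracker:
    """Step 1: identify and track the universe of open findings from all sources."""
    def __init__(self):
        self.findings = {}

    def record(self, finding):
        self.findings[finding.finding_id] = finding

    def open_findings(self):
        return [f for f in self.findings.values() if f.closed is None]

    def prioritized(self):
        # Step 2: rank open findings by assessed risk for remediation.
        return sorted(self.open_findings(), key=lambda f: f.priority)

tracker = FindingsTracker()
tracker.record(Finding("AF-2015-001", "IPA Budgetary Schedule audit",
                       "Unsupported journal vouchers", priority=1))
tracker.record(Finding("AF-2015-002", "DOD OIG",
                       "Incomplete transaction universe", priority=2))
print([f.finding_id for f in tracker.prioritized()])  # ['AF-2015-001', 'AF-2015-002']
```

Steps 3 and 4 would attach CAP milestones to each finding and mark it closed only after remediation is confirmed, so that the open-findings list always reflects the complete universe of unresolved deficiencies.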
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The fiscal year 2015 Schedule of Budgetary Activity (Budgetary Schedule) is an interim, military service-level special report intended to provide a building block toward an audit-ready Statement of Budgetary Resources (SBR) through audits of consecutive fiscal year schedules of budgetary activity. The Budgetary Schedule, like the SBR, is designed to provide information on budgeted spending authority as outlined in the President’s Budget, including budgetary resources, availability of budgetary resources, and how obligated resources have been used. However, instead of covering the full range of SBR activity on current and expired appropriations that have not been canceled, the first-year Budgetary Schedule covers only activity for current fiscal year appropriations. Subsequent fiscal year Budgetary Schedules would include activity for subsequent years’ appropriations, building toward an SBR. For example, in the second year, the fiscal year 2016 Budgetary Schedule would include fiscal year 2016 budgetary activity related to fiscal year 2015 and 2016 appropriations. In making the shift to focus on audit readiness for a Budgetary Schedule, Department of Defense officials concluded—based on the difficulties encountered in obtaining documentation for prior-year transactions on the Marine Corps’ SBR audit—that the most effective path to an audit of the SBR would be to start with reporting and auditing only current-year activity for fiscal year 2015 appropriations and to expand subsequent audits to include current-year appropriations and prior appropriations going back to fiscal year 2015. 
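The building-block approach described above can be expressed as a simple function that shows how each year's schedule expands its coverage of appropriation years. This is an illustrative sketch, not DOD's methodology; the function name is invented for this example.

```python
# Illustrative sketch of how each year's Budgetary Schedule expands the
# appropriation years it covers, building from current-year-only activity
# toward a full multiyear Statement of Budgetary Resources.
def schedule_coverage(audit_year, start_year=2015):
    """Return the appropriation fiscal years covered by the Budgetary
    Schedule audited in audit_year."""
    return list(range(start_year, audit_year + 1))

print(schedule_coverage(2015))  # first-year schedule: current appropriations only -> [2015]
print(schedule_coverage(2016))  # second year adds fiscal year 2015 appropriations -> [2015, 2016]
```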
Both the SBR and the Budgetary Schedule consist of four separate but related sections that provide information about budgetary resources, the status of budgetary resources, changes in obligated balances, and outlays for major budgetary accounts.

Budgetary resources. This section of a first-year Budgetary Schedule shows total budgetary resources made available to the agency for obligation during the current fiscal year only. It consists of new budget authority, reimbursements, and other income. The first-year Budgetary Schedule does not include amounts from prior periods, commonly referred to as beginning balances. In contrast, the SBR includes amounts available from prior reporting periods; transfers available from prior-year balances; and adjustments, such as recoveries of prior-year obligations. In addition, the SBR includes all other information provided in this section of the Budgetary Schedule.

Status of budgetary resources. This section of the Budgetary Schedule and the SBR shows the status of budgetary resources at the end of the period and consists of obligations incurred and the unobligated balances at the end of the period that are available for future use. For the Budgetary Schedule and the SBR, the total for this section must agree with the total for the budgetary resources section, as this section describes the status of total budgetary resources. In addition to the current-year activity, the SBR includes obligations that are unavailable except to adjust or liquidate obligations chargeable to prior period appropriations.

Change in obligated balance. This section of a first-year Budgetary Schedule consists of obligations incurred in the current year, less current-year outlays. In addition to current-year activity, the SBR would also include unpaid obligations brought forward from prior years and recoveries of prior-year unpaid obligations.

Outlays. This section of the Budgetary Schedule shows the relationship between obligations and outlays (also referred to as disbursements or expenditures) and discloses payments made to liquidate obligations. Obligations are usually liquidated by means of cash payments (outlays), primarily by electronic fund transfers. This section reconciles outlays with obligations incurred and the change in obligated balances during the year. The content of this section is the same for the SBR and the Budgetary Schedule.

Automated information systems are essential for modern accounting and recordkeeping. The Department of Defense (DOD) is developing its Enterprise Resource Planning (ERP) systems as the backbone of its financial management improvement, and they are critical for transforming its business operations. Implementation of ERP systems is critical to ensuring that the department meets its statutory requirement to prepare and submit audited department-wide financial statements for fiscal year 2018. However, ERP implementation has been delayed because of deficiencies in functional capability and the need for remedial corrective actions, which may affect DOD’s ability to achieve audit readiness. According to the May 2016 Financial Improvement and Audit Readiness (FIAR) Plan Status Report, while DOD continues to make progress in addressing information technology system audit readiness challenges, many of these challenges will still exist for fiscal year 2018, which is when DOD is required to undergo a financial statement audit. According to DOD officials, for the ERP systems that will not be fully deployed prior to the financial statement audit readiness milestone, the DOD components will need to identify effective work-around processes or modifications to legacy systems that will enable audit readiness. 
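The four sections described above are arithmetically linked: the status of budgetary resources must total to budgetary resources, and outlays must reconcile with obligations incurred and the change in obligated balance. The sketch below illustrates those checks for a first-year schedule; the parameter names are illustrative stand-ins for the schedule's actual line items, and the dollar figures are invented.

```python
# A minimal consistency check over the four sections of a first-year
# Budgetary Schedule (simplified; not DOD's actual reporting logic).
def check_budgetary_schedule(new_authority, reimbursements, other_income,
                             obligations_incurred, unobligated_balance, outlays):
    # Section 1: total budgetary resources (no beginning balances in year one).
    total_resources = new_authority + reimbursements + other_income
    # Section 2: status of budgetary resources must agree with section 1.
    assert obligations_incurred + unobligated_balance == total_resources
    # Section 3: change in obligated balance is obligations incurred
    # less current-year outlays.
    change_in_obligated = obligations_incurred - outlays
    # Section 4: outlays reconcile with obligations incurred and the
    # change in obligated balance.
    assert outlays == obligations_incurred - change_in_obligated
    return total_resources, change_in_obligated

print(check_budgetary_schedule(100.0, 5.0, 1.0, 90.0, 16.0, 70.0))  # (106.0, 20.0)
```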
Without fully deployed ERPs, the department will be challenged to produce reliable financial data and auditable financial statements without resorting to labor-intensive efforts, such as data calls or manual work-arounds, or to provide reliable financial data on a recurring basis. The department’s ability to improve its accounting, including accounting for budgetary information, has historically been hindered by its reliance on fundamentally flawed financial management systems and processes. Effective information systems help provide reasonable assurance to management and auditors that data reported in the financial statements are accurate and complete. When information system controls are effective, auditors can rely on these controls; when information system controls are not effective, auditors must perform substantially more testing. DOD has identified its information system control deficiencies as an impediment to its components being able to demonstrate audit readiness or successfully completing an audit because under such conditions, neither management nor auditors can rely on automated application controls or system-generated reports. In addition, without adequate financial management processes, systems, and controls, the military services are at risk of having inaccurate and incomplete data for financial reporting and management decision making and potentially exceeding authorized spending limits. The lack of effective internal controls hinders management’s ability to provide reasonable assurance that allocated resources are used effectively, properly, and in compliance with budget and appropriations law. The complexities inherent in DOD reporting entity and service provider relationships and associated audit readiness interdependencies make it essential that DOD establish a common, detailed, written understanding regarding the mutual roles and responsibilities of the reporting entity and the service provider. 
To help ensure successful completion of audit readiness tasks, the reporting entity and service provider must agree on the roles and responsibilities for the authorization, initiation, processing, recording, and reporting of transactions; information technology controls affected by the service provider; or both. The FIAR Guidance points out that a shared understanding and agreement between the service provider and the reporting entity on these roles and responsibilities must be documented in a service-level agreement or memorandum of understanding. Details such as the types of supporting documentation that should be retained for each business process and transaction type, which organization will retain the specific documents, and the retention period for the documentation should be included in the service-level agreement/memorandum of understanding. In addition, the service provider must provide a description of the internal controls that may affect the reporting entity’s financial reporting objectives. Areas in which service providers play a critical role on behalf of DOD that continue to pose significant risks to achieving full audit readiness include the ability to support journal vouchers and the existence, completeness, and valuation of assets reported on the Balance Sheet. To facilitate progress in such critical areas, the department has developed a list of critical capabilities along with interim milestone dates by which those milestones must be completed and the critical capabilities must be resolved department-wide. In addition to the critical capabilities, the department identified DOD-wide issues, including service provider processes and systems that affect customer audit readiness and the timing of service provider audit readiness activity. 
As discussed in DOD’s FIAR Guidance, service providers working with reporting entities are responsible for audit readiness efforts surrounding service provider systems and data, processes and controls, and supporting documentation that have a direct effect on reporting entities’ auditability. The FIAR Guidance requires the service providers to have their control activities and supporting documentation examined by the DOD Office of Inspector General or an independent auditor in accordance with Statement on Standards for Attestation Engagements No. 16 so that components have a basis for relying on the service provider’s internal controls for their financial statement audits. Service providers are subject to separate examination engagements on the service organization’s systems and the suitability of the design and operating effectiveness of the service organization’s controls to achieve stated control objectives for various business processes. Service providers design processes and related controls with the assumption that complementary user entity controls will be placed in operation by user entities. The application of these controls by user entities is necessary to achieve certain control objectives within the service organization reports. In addition to the contact named above, the following individuals made key contributions to this report: Arkelga Braxton (Assistant Director), Beatrice Alff, Tulsi Bhojwani, Gloria Cano, Francine DelVecchio, Doreen Eng, Peter Grinnell, Kristi Karls, Jason Kelly, Richard Larsen, Yvonne Moss, Chanetta Reed, Sabrina Rivera, Althea Sprosta, Roger Stoltz, Randy Voorhees, and Doris Yanger.
DOD remains on GAO's High-Risk List because of its long-standing financial management deficiencies. These deficiencies negatively affect DOD's audit readiness and its ability to make sound mission and operational decisions. The Army, Navy, and Air Force underwent their first Budgetary Schedule audits for fiscal year 2015. This report, developed in connection with fulfilling GAO's mandate to audit the U.S. government's consolidated financial statements, examines (1) the results of the audits of the fiscal year 2015 Budgetary Schedules for the Army, Navy, and Air Force; (2) the extent to which each military service designed a process to address financial management-related audit findings and recommendations; and (3) the extent to which DOD has designed a department-wide process to monitor and report on audit readiness remediation efforts. GAO reviewed IPA reports and documentation from the military services and DOD Comptroller and interviewed cognizant officials. The Schedules of Budgetary Activity (Budgetary Schedules) for the Army, Navy, and Air Force for fiscal year 2015 reflected current year budget activity as an interim step toward producing an auditable Statement of Budgetary Resources that will reflect multiyear budget activity. All three of the independent public accountants (IPA) contracted to audit these fiscal year 2015 Budgetary Schedules issued disclaimers, meaning that the IPAs were unable to express an opinion because of a lack of sufficient evidence to support the amounts presented. The IPAs for all three military services also identified material weaknesses in internal control and collectively issued a total of over 700 findings and recommendations. These weaknesses included, among other things, the military services' inability to reasonably assure that the Budgetary Schedules reflected all of the relevant financial transactions that occurred and that documentation was available to support such transactions. 
Army, Navy, and Air Force management generally concurred with these findings and stated that they would develop and implement corrective actions to address the IPAs' recommendations. Office of Management and Budget (OMB) guidance and the Department of Defense's (DOD) Financial Improvement and Audit Readiness (FIAR) Guidance include the following steps for addressing these and other financial management-related findings and recommendations reported by external auditors: (1) identify and track them, (2) prioritize them, (3) develop corrective action plans (CAP) to remediate them, and (4) monitor the implementation status of the CAPs. GAO found that the remediation processes designed by each military service had deficiencies in one or more of these areas. For example, each military service's policies and procedures lacked sufficient controls to reasonably assure that they identified and tracked the complete universe of open findings and recommendations related to financial management. Without identifying and tracking the complete universe of unresolved deficiencies, the military services cannot provide reasonable assurance that the deficiencies will be addressed in a timely manner, which can ultimately affect the reliability of financial information and the auditability of their financial statements. The DOD Comptroller has established several elements of a department-wide audit readiness remediation process, but it does not have comprehensive information on the status of CAPs throughout the department needed to fully monitor and report on the progress being made to resolve financial management-related deficiencies. Specifically, (1) the DOD Comptroller does not obtain complete, detailed information on all CAPs from the military services related to the department's critical capabilities to be able to fully assess progress and (2) reports to external stakeholders such as the Congress on the status of audit readiness do not provide comprehensive information. 
A lack of comprehensive information on the CAPs limits the ability of DOD and Congress to evaluate DOD's progress toward achieving audit readiness, especially given the short amount of time remaining before the statutory deadline to submit to Congress the results of an audit of the department-wide financial statements for fiscal year 2018. GAO is making a total of eight recommendations to the Army, the Navy, the Air Force, and DOD to improve processes for tracking and monitoring financial management-related audit findings and recommendations. The military services concurred with the five recommendations to them, while DOD concurred with one and partially concurred with two of the recommendations directed to it. GAO continues to believe that the recommendations are valid, as discussed in the report.
Reliable financial information is important to the Congress and the administration. Without reliable financial information, government leaders do not have the full facts necessary to make investments of scarce resources or direct programs. Creating a government that runs more efficiently and effectively has been a public concern for decades. Over the past 10 years, dramatic changes have occurred in federal financial management in response to the most comprehensive management reform legislation of the past 40 years. The combination of reforms ushered in by (1) the Chief Financial Officers (CFO) Act of 1990, (2) the Government Management Reform Act of 1994, (3) the Federal Financial Management Improvement Act (FFMIA) of 1996, (4) the Government Performance and Results Act (GPRA) of 1993, and (5) the Clinger-Cohen Act of 1996 will, if successfully implemented, provide the necessary foundation to run an effective, results-oriented government. Efforts to continue to build the foundation for generating accurate financial information through lasting financial management reform are essential. Only by generating reliable and useful information can the government ensure adequate accountability to taxpayers, manage for results, and help decisionmakers make timely, well-informed judgments. Education’s fiscal year 1999 audit was conducted by Ernst & Young LLP, independent auditors contracted for by the Education Inspector General. We reviewed the independent auditors’ reports and workpapers. We also interviewed Education officials to obtain the status of corrective actions and reviewed available corrective action plans. We shared a draft of this statement with Education officials, who provided technical comments. We have incorporated their comments where appropriate. Our work was conducted in accordance with generally accepted government auditing standards. 
The Office of Management and Budget’s (OMB) implementation guidance for audited financial statements requires the 24 CFO Act agencies to receive three reports from their auditors annually: (1) an opinion or report on the agencies’ financial statements, (2) a report on the agencies’ internal controls, and (3) a report on the agencies’ compliance with laws and regulations. As of August 2000, 15 of the 24 CFO Act agencies had received “clean” or unqualified opinions on their fiscal year 1999 financial statements. The Department of Education did not receive such an opinion because of its financial management weaknesses. While Education’s financial staff and its contractors worked very hard to prepare Education’s fiscal year 1999 financial statements before the March 1, 2000, deadline, and the auditors’ opinion on the financial statements improved over that of fiscal year 1998, serious internal control and financial management systems weaknesses continued to plague the agency. For fiscal year 1999, Education made significant efforts to work around these weaknesses and produce financial statements. These efforts enabled its auditors to issue qualified opinions on four of its five required financial statements and a disclaimer on the fifth statement. Its auditors’ qualified opinion states that except for the effect of the matters to which the qualification relates, the financial statements present fairly, in all material respects, financial position, net costs, changes in net position, and budgetary resources in conformity with generally accepted accounting principles. The auditors stated the following reasons or matters for their qualification: The Department had significant systems weaknesses during fiscal year 1999 affecting its financial management systems. 
The new accounting system, implemented in fiscal year 1998, had several limitations, including an inability to perform a year-end closing process or produce automated consolidated financial statements. Through its efforts and those of its contractors, Education was able to partially compensate for, but did not correct, certain aspects of the material weaknesses in its financial reporting process. In addition, during fiscal year 1999, Education experienced significant turnover of financial management staff, which also contributed to the overall weakness in financial reporting. Education was unable to provide adequate support for about $800 million reported in the September 30, 1999, net position balance in its financial statements, and the auditors were unable to perform other audit procedures to satisfy themselves that this amount was correct. Education processed many transactions from prior fiscal years as fiscal year 1999 transactions and manually adjusted its records in an effort to reflect the transactions in the proper period; however, the auditors could not determine if these adjustments for certain costs and obligations were correct. The auditors were unable to determine whether beginning balances for accounts payable and related accruals were accurate. In addition, the auditors did not issue an opinion (referred to as a disclaimer of an opinion) on the Department’s Statement of Financing. The Statement of Financing provides a reconciliation or “translation” from the budget to the financial statements. The statement is intended to help those who work with the budget to understand the financial statements and the cost information they provide. The auditors stated that the reason for this disclaimer was that the Department did not perform adequate reconciliations and present support for amounts on the Statement of Financing in a timely manner. 
To the extent that Education was able to improve the opinion it received on its financial statements for fiscal year 1999, it was generally the result of (1) time-consuming manual procedures, (2) various automated tools to “work around” the system’s inability to close the books and generate financial statements, and (3) reliance on external consultants to assist in the preparation of additional reconciliations and the financial statements. This approach does not produce the timely and reliable financial and performance information Education needs for decision-making on an ongoing basis, which is the desired result of the CFO Act. The Department also receives annually from its auditors a report on internal controls. This report is significant for highlighting the agency’s internal control weaknesses that increase its risk of mismanagement, which can sometimes result in waste, fraud, and abuse. In this report for fiscal year 1999, the Department’s auditors reported four material internal control weaknesses—three continuing from fiscal year 1998 and one new in fiscal year 1999—and noted that long-standing internal control weaknesses persist. The three continuing weaknesses were (1) an inadequate financial reporting process, (2) inadequate reconciliations of financial accounting records, and (3) inadequate controls over information systems. The independent auditors also identified a new material internal control weakness related to accounting for certain loan transactions. Summaries of the material internal control weaknesses follow: Education did not have adequate internal controls over its financial reporting process. Its general ledger system was not able to perform an automated year-end closing process and directly produce consolidated financial statements, as would normally be expected from such systems. 
Because of these weaknesses, Education had to resort to a costly, labor-intensive, and time-consuming process involving manual and automated procedures to prepare financial statements for fiscal year 1999. In addition, Education had to rely heavily on contractor services to help perform reconciliations among the various data sources used. In one instance, Education reported a balance of approximately $7.5 billion for its cumulative results of operations. However, the majority of this amount, which pertains to the Federal Family Education Loan Program (FFELP), should have been reported as a payable to Treasury rather than as cumulative results of operations. As a result of the independent auditors’ work, an adjustment was made to reclassify the $7.5 billion to the proper account. When such errors occur and are not detected by the Department’s controls, there are increased risks that the Department could inappropriately retain funds that should be returned to Treasury. Education also did not adequately reconcile its financial accounting records. In some instances, Education adjusted its general ledger to reflect the balance per the subsidiary records, without sufficiently researching the cause for differences. Also, as indicated in prior audits, Education has not been able to identify and resolve differences between its accounting records and cash transactions reported by the Treasury. For example, for fiscal year 1999, Education adjusted its Fund Balance with Treasury, due to a difference between its general ledger and the Treasury, by a net amount of about $244 million. Reconciling agencies’ accounting records with relevant Treasury records is required by Treasury policy and is analogous to individuals reconciling their checkbooks to monthly bank statements. In another instance, as we recently reported, Education used its grantback account as a suspense account beginning in 1993 for hundreds of millions of dollars of activity related to grant reconciliation efforts. 
We also reported that Education did not maintain adequate detailed records for certain grantback account activity by the applicable fiscal year and appropriation. In addition, Education used the grantback account to clear unreconciled differences in various grant appropriation fund balance accounts and adjust certain appropriation fund balances to ensure that they did not become negative. In response to the auditors’ findings, Education has purchased a software tool to help enhance its ability to reconcile its account balances with the corresponding Treasury account balances on a monthly basis. Education has also developed web-based policies and procedures for reconciling the Department’s material accounts and programs. Because Education did not transfer $2.7 billion of net collections related to certain loan transactions to Treasury as required by the Federal Credit Reform Act of 1990, it could not be assured that its financial or budgetary reports were accurate. Education returned the $2.7 billion to the Treasury in February 2000. The Department also established policies and procedures to ensure compliance with the Credit Reform Act. Education had information systems control deficiencies in (1) implementing user management controls, such as procedures for requesting, authorizing, and revalidating access to computing resources, (2) monitoring and reviewing access to sensitive computer resources, (3) documenting the approach and methodology for the design and maintenance of its information technology architecture, and (4) developing and testing a comprehensive disaster recovery plan to ensure the continuity of critical system operations in the event of disaster. The Department places significant reliance on its financial management systems to perform basic functions, such as making payments to grantees and maintaining budget controls.
Consequently, continued weaknesses in information systems controls increase the risk of unauthorized access or disruption in services and make Education’s sensitive grant and loan data vulnerable to inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction, which could occur without being detected. According to Education officials, the Department has developed and implemented a formal approach and methodology for designing and maintaining its information technology architecture and an entity-wide security program, and has updated its security policies and procedures for its financial management system to ensure that changing system security needs are reflected, access authorizations are documented, and access rights are revalidated periodically. The Department has developed a disaster recovery plan for Education’s Central Automated Processing System (EDCAPS), the accounting system implemented by the Department in fiscal year 1998. Like a number of other federal agencies, Education lacks financial management systems and internal controls that can routinely produce reliable data. Consequently, these agencies rely on costly, time-consuming ad hoc procedures to determine year-end balances. This approach does not produce the timely and reliable financial and performance information needed for decision-making on an ongoing basis. This approach is also inherently incapable of addressing the underlying financial management and operational issues that adversely affect these agencies’ ability to fulfill their missions. The auditors also reported that Education lacked adequate processes for maximizing the value and assessing and managing the risks of its IT investments. Education officials told us that the Department has since established an Investment Review Board that assesses information technology investments. The Department did not transfer its excess funds related to FFELP, specifically the $2.7 billion of net collections previously mentioned, to Treasury as required by the Federal Credit Reform Act of 1990.
Education officials stated that they believe the noncompliance with the Credit Reform Act issue will be resolved for the fiscal year 2000 audit because they have developed and implemented policies and procedures to respond to this issue. The auditors also reported that Education did not reconcile its grant expenditures to the general ledger on a timely basis. This is significant because the volume of grant transactions is over $30 billion per year. In response to this issue, Education has developed policies and procedures to reconcile grant expenditures to the general ledger. The information systems weaknesses highlight some of the computer security vulnerabilities, such as the lack of an effective process to monitor security violations on all critical systems of the Department. Information systems control weaknesses increase the risk of unauthorized access or disruption in services and make Education’s sensitive grant and loan data vulnerable to inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction, which could occur without being detected. A report issued by the Department’s Inspector General in February emphasizes the need for the Department to focus on addressing its computer security vulnerabilities. In addition, earlier this year, the White House recognized the importance of strengthening the nation’s defenses against threats to public and private sector information systems that are critical to the country’s economic and social welfare when it issued its National Plan for Information Systems Protection. In the aftermath of the recent attack by the “ILOVEYOU” virus, which disrupted operations at large corporations, governments, and media organizations worldwide, we recently testified about the need for federal agencies to promptly implement a comprehensive set of security controls. We also recently reported on the results of information security audits at federal agencies that show that federal computer systems are riddled with weaknesses that continue to put critical operations and assets at risk.
These types of concerns led us, in 1997 and 1999 reports to the Congress, to identify information security as a high-risk issue. In response to the IG’s February report, Education’s Chief Information Officer has developed a corrective action plan to address these weaknesses. This plan proposes developing security plans for the 6 mission-critical systems that did not have them. Education envisions that the plans will meet the requirements of OMB Circular A-130 and the Computer Security Act of 1987. The plan also calls for establishing requirements for security training and a monitoring process to ensure that security personnel receive adequate training. We did not evaluate the effectiveness of these corrective actions. The auditors reported that Education had not taken a complete, comprehensive physical inventory of property and equipment for at least the past 2 years. Comprehensive inventories improve accountability for safeguarding the government’s assets, such as computer software and hardware, and establish accurate property records. Without such an inventory, property or equipment could be stolen or lost without detection, or resources could be wasted by purchasing duplicate equipment already on hand. An alleged equipment theft is currently under investigation by the OIG and the Department of Justice. In addition, vulnerabilities in the Department’s student financial assistance programs have led us since 1990 to designate this a high-risk area for waste, fraud, abuse, and mismanagement. As we reported in our high-risk series update in January 1999, our audits as well as those by the Department’s IG have found instances in which students fraudulently obtained grants and loans.
In response to your request, we are auditing selected Education accounts that are deemed particularly susceptible to improper payments based on a risk assessment that takes into account previous findings by GAO, the IG, and Education’s independent public accountants. This work is in the initial planning phase and is expected to focus primarily on the Department’s disbursement processes and EDP controls. We plan on using various electronic auditing techniques to determine whether the Department has inappropriately disbursed funds. We also plan to evaluate the vulnerability of the Department’s EDP systems to fraud, misuse, and disruption. Education’s managers and the Congress need reliable, timely financial information to make well-informed judgments. While Education has planned and begun implementing many actions to resolve its financial management problems, it is too early to tell whether they will be successful. It is critical that Education rise to the challenges posed by its financial management weaknesses because its success in achieving all aspects of its strategic objectives depends in part upon reliable financial management information and effective internal controls. It is also important to recognize that several of the financial management issues that have been raised in reports emanating from reviews of Education’s financial statements directly or indirectly affect Education’s ability to meet its obligations to its loan and grant recipients and responsibilities under law. Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other Members of the Subcommittee may have. For information about this statement, please contact Gloria Jarmon at (202) 512-4476 or at [email protected]. Individuals making key contributions to this statement included Dan Blair, Anh Dang, Cheryl Driscoll, and Meg Mills.
Pursuant to a congressional request, GAO discussed financial management at the Department of Education, focusing on: (1) Education's current financial management status as evidenced by its fiscal year (FY) 1999 financial audit results and the corrective actions it has taken to resolve weaknesses identified in that audit; and (2) the relationship between the audit findings and the potential for waste, fraud, and abuse. GAO noted that: (1) Education's financial activity is important to the federal government because Education is the primary agency responsible for overseeing the more than $75 billion annual federal investment in support of educational programs for U.S. citizens and eligible noncitizens; (2) Education is also responsible for collecting about $175 billion owed by students; (3) in FY 1999, more than 8.1 million students received over $53 billion in federal student financial aid through programs administered by Education; (4) Education's stewardship over these assets has been under question as the agency has experienced persistent financial management weaknesses; (5) beginning with its first agencywide financial audit effort in FY 1995, Education's auditors have each year reported largely the same serious internal control weaknesses, which have affected Education's ability to provide reliable financial information to decision makers both inside and outside the agency; (6) to the extent that Education was able to improve on its financial statements for FY 1999, it was generally the result of: (a) time-consuming manual procedures; (b) various automated tools to "work around" the system's inability to close the books and generate financial statements; and (c) reliance on external consultants to assist in the preparation of additional reconciliations and the financial statements; (7) this approach does not produce the timely and reliable financial and performance information Education needs for decision making on an ongoing basis, which is the desired result of 
the Chief Financial Officers Act of 1990; (8) Education continues to have serious internal control and system deficiencies that hinder its ability to achieve lasting financial management improvements; (9) the internal control weaknesses need to be addressed to reduce the potential for waste, fraud, and abuse; (10) some of the vulnerabilities identified in the audit report include weaknesses in the financial reporting process, inadequate reconciliations of financial accounting records, information systems weaknesses, and property management weaknesses; (11) in response to the Inspector General's report, Education has developed a corrective action plan to address these weaknesses; and (12) vulnerabilities in Education's student financial assistance programs have led GAO since 1990 to designate this a high-risk area for waste, fraud, abuse, and mismanagement.
American Samoa, the only inhabited U.S. insular area in the southern hemisphere, is located about 2,600 miles southwest of Hawaii (see fig. 1). American Samoa consists of five volcanic islands and two coral atolls, covering a land area of 76 square miles—slightly larger than Washington, D.C. The capital of American Samoa, Pago Pago, is located on the main island of Tutuila, which is mostly rugged terrain with relatively little level land. Most of American Samoa’s economic activity—primarily tuna canning and government operations—takes place on Tutuila in the Pago Pago Bay area. In late September 2009, one of American Samoa’s two tuna canneries closed operations and American Samoa also experienced an earthquake and tsunami, which caused considerable damage. According to the latest data available, American Samoa had a population of about 63,780 in 2005. At that time, the foreign-born population (non-U.S. citizens or nationals), mostly from the independent state of Samoa, comprised approximately one-third of American Samoa’s total population (see fig. 2). The American Samoa Department of Commerce estimated that in 2008, 45 percent to 55 percent of the total population was foreign born. Unlike residents born in Guam, the Commonwealth of the Northern Mariana Islands, or the U.S. Virgin Islands, residents born and raised in American Samoa are U.S. nationals and not U.S. citizens, though they may become naturalized U.S. citizens. Like residents of these other insular areas, though, residents of American Samoa have many of the rights of citizens of the 50 states, but cannot vote in U.S. presidential elections and do not have voting representation in the final approval of legislation by the full Congress. Residents of American Samoa vote for a congressional delegate who has all congressional privileges, including a vote in committee, except a vote in the House of Representatives.
American Samoa does not have an organic act that establishes the relationship between American Samoa and the United States; however, two deeds of cession were initially completed between Samoan chiefs, or matai, and the United States in 1900 and 1904 and ratified by the federal government in 1929. In these deeds, the United States pledged to promote peace and welfare, to establish a good and sound government, and to preserve the rights and property of the people. The U.S. Navy was initially responsible for federal governance of American Samoa. Then, in 1951, federal governance was transferred to the Secretary of the Interior, which continues today. The Secretary of the Interior exercises broad powers with regard to American Samoa, including “all civil, judicial, and military powers” of government in American Samoa. American Samoa has had its own constitution since 1960, and since 1983, the local American Samoa constitution may only be amended by an act of Congress. The American Samoa constitution provides for three separate branches of government— the executive, the legislative, and the judicial. Nearly 40 American Samoa departments, offices, and other entities within the executive branch of the American Samoa government provide public safety, public works, education, health, commerce, and other services. The legislature, or Fono, is comprised of 18 senators and 20 representatives. The American Samoa judiciary consists of a High Court and a District Court under the administration and supervision of the Chief Justice. In general, U.S. customs and immigration laws do not govern the customs and immigration programs in American Samoa. With respect to customs law, federal regulations define the customs territory of the United States to include the 50 states, the District of Columbia, and Puerto Rico. As a result, U.S. customs requirements applicable to the U.S. customs territory do not apply in U.S. insular areas, including American Samoa. 
For example, in general, goods imported into American Samoa are not inspected by federal customs officials and are not subject to federal tariffs. Although American Samoa is not considered part of the customs territory of the United States, it is not treated as a foreign country either. Rather, as a U.S. insular area, it is accorded special status with respect to goods exported from American Samoa into the customs territory of the United States. The Harmonized Tariff Schedule provides exemptions to the general rates of duties for certain goods imported into the rest of the United States from American Samoa. For example, goods grown in American Samoa or produced or manufactured in American Samoa from materials grown in American Samoa may be imported into the rest of the United States duty-free, so long as the goods do not contain foreign materials worth more than 70 percent of the goods’ total value. With respect to federal immigration law, the Immigration and Nationality Act defines the United States to include the continental United States, Alaska, Hawaii, Puerto Rico, the U.S. Virgin Islands, Guam, and the Commonwealth of the Northern Mariana Islands. As a result, U.S. immigration requirements for entering and working in the United States generally do not apply in American Samoa, and the American Samoa government, rather than the U.S. government, governs the admission of aliens to American Samoa. Because U.S. customs and immigration laws generally do not apply in American Samoa, and because of the resulting separate authorities for American Samoa and U.S. customs and immigration programs, American Samoa customs and immigration agencies and officials have little, if any, interaction with the customs or immigration programs and officials in the United States. Multiple U.S. 
agencies have responsibilities over customs and immigration functions in the United States and at the ports of entry— including CBP, USCIS, and the Department of State’s Bureau of Consular Affairs—but none of these entities have a presence or staff in American Samoa. There are, however, several U.S. agencies that interact with the government of American Samoa and have staff based in American Samoa. For example, the FBI has a resident office in American Samoa which, since being established in December 2005, has addressed a growing number of crimes in American Samoa, including public corruption of high- ranking government officials, fraud against the government, civil rights violations, and human trafficking. Additionally, DOI has staff in American Samoa that help issue and monitor federal grants provided to the government of American Samoa. For example, in fiscal year 2008, the American Samoa government expended approximately $114.4 million in grants from several U.S. agencies, over $15.5 million of which was provided by DOI, but based on our review of budget documents, none of those funds were used to support American Samoa’s customs or immigration programs. Individuals born in American Samoa are U.S. nationals but not citizens, unless they become naturalized U.S. citizens. Travel between American Samoa and the rest of the United States is considered travel between two U.S. border locations under Department of State regulations and, as a result, passports are not required for U.S. citizens or noncitizen nationals. U.S. nationals may travel to the rest of the United States with a government-issued photo ID and documentation establishing U.S. nationality. Although travel from American Samoa to the rest of the United States is considered domestic for purposes of passport requirements, because Honolulu, Hawaii, is the travelers’ first encounter with U.S. 
customs and immigration officials, all passengers from American Samoa are screened by CBP officers upon arrival at the Honolulu International Airport to establish their identity and nationality and, if they are not U.S. citizens or nationals, their admissibility to the United States. A U.S. citizen or national may satisfy CBP of his or her identity and nationality by showing a U.S.-issued passport or military ID card, a birth certificate in combination with a photo ID, or an American Samoa-issued CI. While they do not have any staff based in American Samoa, staff from the Department of State’s Bureau of Consular Affairs and USCIS interact with American Samoa residents applying for U.S. passports or naturalization, respectively. U.S. nationals who live in American Samoa may apply for U.S. passports through a Department of State-approved passport acceptance agent at the U.S. Post Office in American Samoa. Noncitizen nationals are subject to the same application requirements as U.S. citizens and must show proof of their status as U.S. nationals. The passport applications and supporting materials are sent to the Department of State’s Honolulu Passport Office for processing and adjudication. If there is suspicion of fraudulent documentation within an application package (e.g., a suspicious birth certificate) the Honolulu Passport Office determines the validity of submitted documents and requires applicants to submit additional documents until staff are satisfied as to the documents’ authenticity. According to the Department of State, in 2008, over 3,300 U.S. passports were issued to individuals who listed American Samoa as their place of birth. USCIS reported that it received between 100 and 300 applications for naturalization from noncitizen nationals from American Samoa—who may reside elsewhere in the United States—each fiscal year from 2002 through 2009. Noncitizen nationals may naturalize if they reside in any U.S. 
state for 3 months, pass an English and civics test, and take an oath of allegiance. They must also pay the $675 fee for naturalization. American Samoa operates its own customs and immigration programs, which have separate organizational structures and functions and are based on American Samoa laws, regulations, policies, and procedures. American Samoa’s Customs Division, within the Department of Treasury, inspects passengers, baggage, and cargo, and collects excise taxes. American Samoa’s immigration program is managed by the Immigration Office and the Immigration Board, both of which report to the Attorney General of American Samoa. The Immigration Office is responsible for alien ID issuance, daily immigration operations, and enforcement; while the Immigration Board holds weekly hearings and makes decisions on issues, such as aliens’ work authorizations and transfers of aliens’ sponsorships. In addition to these functions that pertain to processing applications from aliens who want to live or work in American Samoa, the Office of the Attorney General also has responsibility for issuing CIs for U.S. nationals, which includes U.S. citizens and noncitizen nationals, wishing to travel to the rest of the United States. Under the authority of the American Samoa Department of Treasury, the overall function of the Customs Division is to administer and enforce all excise tax laws, and to intercept illicit imports of narcotics, weapons, ammunition and other contraband at the ports of entry. It is authorized to develop policies and procedures necessary for the proper functioning of the Customs Division. American Samoa customs law provides that all persons entering or leaving American Samoa may be searched by a customs officer and all merchandise or baggage imported or brought into American Samoa is to be inspected by a customs officer. 
Additionally, all passengers and crew members, regardless of citizenship, must make a customs entry and declaration upon arrival in American Samoa and all items acquired abroad must be declared in writing. Any vessel arriving in American Samoa is required to provide certain documents, such as manifests, and is subject to being boarded and examined by American Samoan customs officials. All imports that arrive in American Samoa are to be taken into custody and released by the Customs Division after being inspected. According to port administration officials, approximately 1,000 vessels come through the Pago Pago seaport annually and about 50 cargo containers, on average, arrive at the seaport each day and have to be inspected prior to release by the Customs Division. American Samoa customs law states that cargo containers may be inspected on location at the official point of entry or removed to other locations, such as the respective places of business, for inspection. As such, cargo containers may not be opened by importers until officially inspected and released by the Customs Division in writing. Importers are required to pay certain excise taxes on goods and no imports are to be released until all fees and excise taxes have been paid in full. An excise tax of 5 percent is imposed on items imported for commercial use or resale in American Samoa and certain items, such as alcohol, tobacco, and motor vehicles, are taxed at higher rates prescribed by law. All monies due pursuant to excise tax laws are collected by the Customs Division and are to be deposited with the Treasurer of American Samoa. The Customs Division’s fiscal year 2009 budget was $1.2 million and it has 55 employees who work within its six branches, as shown in figure 3. The Customs Division’s collection of excise taxes generated over $18.5 million in revenue for the government of American Samoa in fiscal year 2009— 22 percent of the government’s total general fund revenues for that year. 
Typically, excise tax revenues are to be deposited into the general fund and available for appropriation by the American Samoa legislature, or Fono. The Customs Division maintains standard operating procedures that provide written policies and procedures that define duties, responsibilities, and privileges for Customs Division staff and also detail consequences for violating the laws of American Samoa and the written policies and procedures. Additionally, the Customs Division developed a Code of Conduct that articulates the standards of behavior and conduct required of employees in an effort to ensure that the integrity of the Customs Division is maintained. According to the Chief Customs Officer, all Customs officers rotate functions every 3 to 6 months to help prevent complacency and corruption. He added that Customs supervisors are also required to perform random inspections to ensure Customs officers are sufficiently inspecting the imported cargo containers. According to the Chief Customs Officer, because the Customs Division does not have an automated computer system for tracking cargo container arrivals and inspections, all Customs Division functions are tracked manually. He stated that it would be extremely beneficial to be able to automate the system that tracks the offloading and inspections of cargo containers and the collection of excise taxes. He added that he is in the process of determining the most suitable automated system for the American Samoa Customs Division and once he finds a system that suits their needs, he intends to submit an application for a technical assistance grant from DOI. American Samoa’s immigration program is responsible for, among other things, managing aliens arriving in American Samoa. 
Aliens may enter American Samoa for up to 30 days on visitor or business entry permits and certain aliens may apply to reside in American Samoa for more than 30 days based on family relationships with American Samoans (or permanent residents in American Samoa) or for employment reasons. Immigration law in American Samoa provides for 12 separate classifications of aliens to remain in American Samoa for more than 30 days; however, some of the categories are subject to numerical limits. Any aliens entering and remaining in American Samoa must have sponsors. In general, American Samoa’s immigration program has many of the same elements as the U.S. immigration program, such as numerical limits on the number of aliens that may enter, or preferences for specific categories of aliens; however, American Samoa’s immigration program also has some unique aspects, such as special allowances for aliens from the Independent State of Samoa. Additional information on American Samoa’s immigration program, including alien categories and numerical limitations, is provided in appendix I. American Samoa’s immigration program is administered by the Immigration Office and the Immigration Board, both of which are housed within the Department of Legal Affairs and are under the authority and guidance of the Attorney General of American Samoa, as shown in figure 4. The Immigration Office, and the sections that report to it, are responsible for the daily operations of immigration functions, including issuing immigration ID cards, as well as tracking and enforcing quotas on aliens. The Immigration Board has responsibility, among other things, for approving applications for alien work permits and for authorizing aliens to remain in American Samoa and register as lawfully present. The Office of the Attorney General, also within the Department of Legal Affairs, is responsible for issuing CIs for U.S. nationals who wish to travel to the rest of the United States, among other things. 
The American Samoa Immigration Office, in concert with the Immigration Board, administers the processes by which aliens may enter American Samoa. The Immigration Office is headed by the Chief Immigration Officer with a staff of 40 employees. In fiscal year 2009, the Immigration Office had a budget of $805,000 and generated $1.78 million in total revenue from the various application and entry fees it collected. According to Immigration Office officials, their functions include inspecting documents for all persons entering American Samoa, receiving applications for entry permits and petitions for aliens to remain in American Samoa, and enforcing immigration laws. Further details on the various functions of the Immigration Office are contained in the sections that follow. All persons entering or leaving American Samoa may be searched by one or more immigration officers and asked to provide documentation, such as a valid passport or travel document. Immigration officers have authority, under certain circumstances, to interrogate, search, and arrest certain arriving passengers. According to the American Samoa Department of Commerce, in calendar year 2008, 72,999 individuals traveled to American Samoa, of which 6,995 arrived for employment reasons. According to Immigration Office officials, every alien with a classification that allows the alien to remain in American Samoa for longer than 30 days is registered and has an alien ID card that is valid for 1 or 3 years, depending on the alien’s classification. Immigration officials explained that safeguards are in place regarding alien ID cards. For example, each alien ID card has a hologram image that is designed to make the card more difficult to counterfeit. Since 2003, the Immigration Office has used a computer system to maintain the records of lawfully present aliens and permanent residents. This computerized system is able to track the registration of aliens and the issuance of most entry permits. 
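A registration record of the kind this system tracks—an alien ID valid for 1 or 3 years depending on the alien's classification—could be modeled minimally as below. The classification names and the expiry rule in the sketch are illustrative assumptions, not the actual system's design or American Samoa's real alien categories.

```python
# Minimal model of alien ID registration with 1- or 3-year validity,
# as described in the text. The classification names and which classes
# get 3-year cards are hypothetical, for illustration only.
from datetime import date, timedelta

THREE_YEAR_CLASSES = {"permanent_resident"}   # hypothetical mapping

def id_expiry(issued: date, classification: str) -> date:
    years = 3 if classification in THREE_YEAR_CLASSES else 1
    return issued + timedelta(days=365 * years)

def is_valid(issued: date, classification: str, on: date) -> bool:
    return on <= id_expiry(issued, classification)

print(id_expiry(date(2009, 1, 1), "worker"))                   # 2010-01-01
print(is_valid(date(2009, 1, 1), "worker", date(2010, 6, 1)))  # False
```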
American Samoan government officials added that the Immigration Office has a separate computer system, funded by a 2003 grant from DOI’s Office of Insular Affairs, which maintains data on passengers who arrive and depart American Samoa via the airport or seaport through scanning or entering data from the passengers’ travel documents into the system. The officials noted, though, that the two separate computerized systems—one for tracking alien registrations and one for tracking arrivals and departures—do not have any links between them, so that the data from the immigration system that tracks the registration of aliens and issuance of entry permits are not automatically updated or matched with data on arriving and departing passengers. The officials indicated they could benefit from additional upgrades to the systems to allow them to be linked and added that they are in the early process of developing a proposal for obtaining funding from DOI for upgrading the computer systems.

Investigating Violations of Immigration Law

The Immigration Office staffs an Investigation Section that is responsible for investigating and charging aliens who are in violation of American Samoa immigration laws. Court officials we met with in American Samoa stated that in an effort to assist with enforcement, the American Samoa District Court requires an immigration officer to be present during criminal proceedings to verify the immigration status of defendants. According to court officials we met with, approximately 60 percent of the defendants who have appeared in District Court in recent months for a variety of criminal offenses are aliens. The American Samoa Immigration Board hears cases and determines, for example, if aliens are authorized to remain in American Samoa and register as lawfully present aliens. The board, consisting of five members who are appointed by the Governor with consent and approval of the legislature, is overseen by the Attorney General.
A board member can be appointed no more than twice, and the appointment term is 5 years. The Immigration Board holds hearings once a week and hears cases, such as requests for work authorizations and transfers of sponsorship. The Immigration Board refers its decisions to the Attorney General for review if the Attorney General directs the board to do so or if the chair or a majority of the board believes the case should be referred. In addition to reviews of board decisions by the Attorney General, the High Court of American Samoa has appellate jurisdiction over decisions of the Immigration Board. The Office of the Attorney General is responsible, among other things, for issuing CIs for U.S. nationals who wish to travel to the rest of the United States. Officials from the Office of the Attorney General told us that, historically, the CI was intended to be used in the event of an emergency, such as needed medical treatment that could be obtained only off-island or the unexpected death of a family member in the United States, when the traveler could not obtain a passport in time to travel. However, because the CI is easier and faster to obtain, people in American Samoa have found it more convenient to travel with a CI than to obtain a U.S. passport. An authorizing official within the Office of the Attorney General stated that they encourage people to apply for a U.S. passport because, while it costs more up front, it is valid for 10 years, compared with a CI, which is valid for 6 months. The official noted, though, that CIs have been a source of revenue to the government that would be lost if they were entirely replaced by passports. For example, the Office of the Attorney General provided us data showing that for fiscal year 2009, CIs generated over $350,000 for the government of American Samoa from fees associated with issuing over 7,100 CIs.
According to officials from the Office of the Attorney General, in order to apply for a CI, a U.S. national must fill out the application and provide documentation of U.S. nationality, such as his or her birth certificate as issued by the American Samoa Vital Statistics Office; a form of government-issued photo ID, such as a voter ID, driver's license, military card, or expired passport; and submit a passport-sized photo and the $50 processing fee. Additionally, if an individual is under the age of 18, the parents or legal guardians are required to provide their own ID, and an ID for the child, such as a school ID with a photo. If individuals change their names due to a marriage or divorce, official documentation is required. As shown in figure 5, once all application materials are complete, officials within the Office of the Attorney General instruct each applicant to pay the $50 fee to the cashier in the Immigration Office. The applicant is provided a receipt showing that the $50 fee was paid; the receipt includes the CI application number so the payment can be tied to the correct CI application. The applicant then brings the receipt, the completed CI application, and supporting materials to the CI Office outside the Office of the Attorney General to be processed by the staff. According to the Deputy Attorney General, applications for CIs are generally processed and issued in 1 to 2 business days. In comparison, it takes about 4 to 6 weeks for the Department of State to process and issue a U.S. passport. According to staff of the Office of the Attorney General, there is no computerized system to track CI applications or issuance and all records maintained are paper-based and manually filed. The letter-sized CI document features an attached photo of the individual, two circular “Attorney General, Govt.
of American Samoa” stamps, as well as a unique CI identification number featured in the upper left-hand corner, as depicted in figure 6. There are three individuals within the Office of the Attorney General with authorization to stamp CIs. Both the stamped seals and the ID numbers on the CI are in red ink. The CIs are signed—generally by the Attorney General or Deputy Attorney General—for approval prior to being provided to the applicants. U.S. and American Samoa agencies report that American Samoa's operation of its customs and immigration programs may pose risks to American Samoa and the rest of the United States, but no U.S. agency has performed a risk assessment. According to U.S. and American Samoan government officials we met with, potential risks from the customs program's operations primarily affect American Samoa, whereas potential risks related to the immigration program's operations affect both American Samoa and the rest of the United States. According to these officials, potential risks to the government of American Samoa from its customs operations include lost revenues and the possible aiding of criminal activities based on allegations of inadequate enforcement. Regarding American Samoa's immigration program, U.S. and American Samoan government officials stated their principal concerns are that current enforcement practices may lead to (1) exploitation of aliens by sponsors, (2) incidents of human trafficking, (3) overstays by aliens, and (4) exceeding numerical limits of aliens. In contrast, the potential risks identified by U.S. officials for the rest of the United States are more limited. In particular, U.S. officials identified little to no potential risk to the rest of the United States based on American Samoa's customs operations. According to U.S.
officials we met with, the potential risk to the rest of the United States from American Samoa's immigration operations is illegal immigration into the rest of the United States as a result of travelers fraudulently obtaining documentation, such as a CI, in American Samoa. However, U.S. officials we met with, including CBP, acknowledged that they do not know the magnitude of fraudulently issued CIs or the potential threat and consequences to the United States as a result of fraudulently issued CIs because no assessment has been performed of the risks posed by the continued use of the CIs as identity and nationality documents for U.S. nationals. A threat assessment issued by the government of American Samoa in December 2005 reported that inadequate enforcement of customs laws has led to incidents related to insufficient container inspections; allegations of Customs officers accepting bribes; and the smuggling of drugs, firearms, and other illegal contraband. American Samoan law enforcement officials we met with told us these same concerns still exist. The Customs Division's Code of Ethics and Conduct states that, “it is the duty of the customs officer to enforce the laws of American Samoa and that a failure to properly conduct inspections of merchandise is a violation of the Standard Operating Procedures and Policies and will result in disciplinary action.” According to the Chief Customs Officer, since June 2002, a total of six officers have been removed from duty for violations associated with corruption, misconduct, or drug or alcohol use. Another effect of inadequate enforcement of customs laws is the potential for lost revenues for the government if proper excise taxes are not collected on the goods in each imported cargo container.
While the Customs Division has written policies and procedures, and also has certain internal controls in place—such as rotating staff responsibilities so that the same customs officers do not always inspect containers at the same businesses—there is no automated computer system within the Customs Division to track cargo manifests, container inspections, verification of deposits, parcel taxes at the post office, or interisland ferry excise taxes. The Chief Customs Officer recognizes this is a weakness and told us that an automated computer system would help the division track cargo container inspections and undervalued or undeclared merchandise and contraband, and that he is working to obtain a computerized system that would be appropriate for American Samoa's volume of container traffic. According to federal officials from DOI and the FBI, as well as American Samoan government officials we met with, current enforcement practices of immigration laws have led to a variety of concerns, including the exploitation of aliens by sponsors, incidents of human trafficking, alien overstays, and exceeding numerical limits on aliens. Additionally, the American Samoa government's 2005 threat assessment reported a lack of management and oversight control by immigration officials with regard to the enforcement of policies and procedures. As of March 31, 2010, the Immigration Office reported 20,282 aliens in American Samoa for employment reasons, which is about 58 percent of the estimated 34,874 aliens in American Samoa. In general, aliens entering and remaining in American Samoa, including those who enter for employment reasons, must have a sponsor. According to FBI and American Samoa government officials, there are instances in which employment sponsors exploit aliens under the threat of revoking their sponsorship and having them deported.
As reported in the 2005 threat assessment, a number of immigrants from Taiwan, China, and the Philippines who were pursuing employment opportunities were allowed to enter American Samoa under the sponsorship of owners of a variety of businesses and were forced into servitude or prostitution once they arrived (i.e., human trafficking). Among the most notable cases of human trafficking in American Samoa is U.S. v. Lee, which was one of the largest human trafficking cases ever prosecuted by the U.S. Department of Justice. This 2001 case involved about 200 Chinese and Vietnamese victims who were recruited to work in an American Samoa garment factory. In 2003, Lee was convicted in the U.S. District Court of Hawaii of involuntary servitude, conspiring to violate civil rights, extortion, and money laundering. The 2005 threat assessment reported that the full dimension of the problem of human trafficking in American Samoa is difficult to measure, but noted that intelligence showed there continue to be victims of human trafficking in American Samoa and that human trafficking is a major source of profit for organized crime syndicates. According to the threat assessment, human trafficking is a difficult issue for local law enforcement because there is no American Samoa human trafficking law or avenue for prosecution locally. The legislature in American Samoa is considering legislation to criminalize human trafficking and categorize it as a felony under American Samoa law, but, to date, no such legislation has been enacted. According to FBI and American Samoan government officials, there are also instances in which numerical limits on aliens are not adequately enforced and aliens overstay their visits. Immigration law in American Samoa allows for 12 separate classifications of aliens to reside in American Samoa for more than 30 days; however, some of the categories are subject to numerical limits.
The extent of this issue is unknown, as the Immigration Office does not have documented policies and procedures that define how the office is to enforce the numerical limits. If an alien enters American Samoa on a 30-day entry permit and stays longer than the 30-day period, he or she is tracked in the Immigration Office database as an overstayer, according to Immigration Office officials. According to data from the Immigration Office, there were over 2,600 alien overstayers for fiscal year 2009. Immigration Office officials explained that overstayers often expect to have their residency authorized within the 30-day time frame and do not understand that the process takes longer. Additionally, as of March 2010, 7,572 aliens held alien ID cards that had expired and had not been renewed. Immigration officials told us they track these individuals, make contact with their sponsors, and try to determine why the cards have not been renewed. The American Samoa legislature, in addition to its actions on human trafficking legislation, is also considering draft legislation that would make changes to existing immigration law. The proposed changes include creating a Department of Immigration as a stand-alone department, outside the Department of Legal Affairs, and making it subject to annual audits by the territorial auditor. The draft legislation also includes reductions to the numerical limits on aliens. Further, the standards for sponsoring an alien for employment reasons would change to require additional proof of the need for the alien workers and a written contract defining the agreement between sponsors and aliens. While these legislative efforts would appear to address some of the concerns identified by American Samoan and U.S. law enforcement officials, the legislation is not final, so it is too soon to tell what impact the legislation, if passed, will have on addressing the identified concerns. U.S.
government officials we met with representing DHS, the Department of State, and the FBI stated that a potential risk to the United States associated with American Samoa administering its own customs and immigration programs is illegal immigration into the rest of the United States as a result of travelers obtaining false documentation in American Samoa. The potential for illegal entry by aliens into the rest of the United States from American Samoa raises questions as to whether current practices in American Samoa can be used by criminals and terrorists to jeopardize the security of the United States. As stated in CBP’s Fiscal Year 2009-2014 Strategic Plan, illegal immigration compromises national security as aliens unlawfully gaining entry to the United States create a pathway for illegal entry and a demand for false documentation and identities. This is a threat to national security as terrorists might exploit the same vulnerabilities that such aliens currently use. Furthermore, a January 2010 presidential memorandum stated that DHS should aggressively pursue enhanced screening technology, protocols, and procedures, especially in regard to aviation and other transportation sectors, and strengthen international partnerships and coordination on aviation security issues. According to data provided by the American Samoa Department of Legal Affairs, as shown in table 1, while a majority of U.S. nationals and citizens traveling to Hawaii from American Samoa during the past 3 years have traveled with a U.S. passport (an average of 72.2 percent), the second most commonly used document for travel by U.S. nationals and citizens was the CI (an average of 19.5 percent). Honolulu, Hawaii, is the only location within the United States to which flights from American Samoa arrive. Upon arrival in Honolulu, passengers from American Samoa are inspected by CBP to establish their identity and nationality, and if the passenger is an alien, admissibility. For U.S. 
citizens and noncitizen nationals, identity and nationality may be established, to the satisfaction of a CBP officer, through the use of one or more documents, such as a U.S.-issued passport or military ID, a birth certificate with photo ID, or an American Samoa-issued CI. In addition, the CBP officers use other techniques to verify a person's identity and nationality, including asking questions and observing behavioral cues. According to CBP, once a U.S. citizen or noncitizen national has sufficiently demonstrated his or her identity and nationality to the CBP officer, he or she is no longer subject to inspection for admissibility. As with other airline passengers who arrive from outside the United States and have not been previously screened by CBP, if there is any suspicion of fraudulent documents or intent, CBP officers may investigate the travelers from American Samoa further and refer the travelers for additional, more in-depth questioning by CBP, or additional investigation by U.S. Immigration and Customs Enforcement or Department of State officers. Honolulu-based CBP officers we spoke with who screen passengers arriving from American Samoa could not identify any incidents in recent years in which a passenger from American Samoa was determined to be traveling with a fraudulent CI, or of a passenger being referred for additional inspection because of a suspected fraudulent CI. The CBP officers also stated that they have not identified any counterfeit U.S. passports used by passengers arriving from American Samoa. However, while a CI or U.S. passport may appear legitimate, it could have been improperly obtained through the use of fraudulent identity documents, such as a false birth certificate or driver's license, as reported to us by American Samoan and U.S. law enforcement officials, and such instances would be difficult for CBP to detect.
In November 2009, charges were filed in the High Court of American Samoa against the Manager of the Office of Motor Vehicles for conspiracy to commit forgery based on evidence that driver’s licenses had been issued fraudulently. Additionally, the American Samoa Department of Homeland Security and Office of Independent Prosecutor, with assistance from the resident FBI agent, initiated a full-scale investigation centered on alleged improprieties in the Office of the Attorney General and the Immigration Office. In January 2010, the Office of the Attorney General and the Immigration Office were served search warrants to provide investigators with records related to certain aliens in American Samoa, including all ledgers related to CIs. While CBP has no direct knowledge of problems regarding the use of CIs or passports that may have been obtained using fraudulent documents, the Department of State has recently changed its passport adjudication procedures as a result of these allegations. Department of State officials we spoke with told us that the potential for fraudulently obtaining CIs appeared to be a vulnerability within the passport application process and, as a result, they no longer accept CIs as the only form of identification to support a U.S. passport application. Applicants may submit their locally-issued American Samoa identification, but they will also need to provide additional documentation. Additionally, the Department of State no longer allows the Office of the Attorney General to serve as a location for accepting passport applications. Rather, applications for U.S. passports are only accepted through passport acceptance agents at the U.S. Post Office in Pago Pago. Moreover, as a result of evidence from the Independent Prosecutor’s investigation, the Department of State has undertaken an investigation to determine whether passports were issued to individuals living in American Samoa in recent years who are neither U.S. citizens nor noncitizen nationals. 
According to State Department officials, while this investigation will serve to enhance the security of the process for obtaining U.S. passports, it will not address the reported vulnerabilities in the process for issuing CIs. CBP officials we met with in Honolulu, Hawaii, who screen passengers arriving from American Samoa, stated that instead of using the CI as a document to establish identity and nationality for U.S. nationals arriving from American Samoa, it would be easier to screen passengers and prevent fraud if there was a more secure document establishing identity and nationality for those travelers. However, U.S. agency officials we met with, including CBP, acknowledged that they do not know the magnitude of fraudulently issued CIs or the potential threat and consequences to the United States as a result of fraudulently issued CIs because no assessment has been performed of the risks posed by the continued use of CIs as an identity document to facilitate travel by U.S. nationals. The CBP officials stated that no such risk assessment has been performed because CBP does not generally initiate risk assessments of issues or programs related to areas that are considered a part of the United States, such as American Samoa, although such an assessment could help to better define and understand the potential risks. The federal government’s Internal Control Standards call for the establishment of internal controls to provide for an assessment of the risks an agency faces from both external and internal sources. Additionally, risk management plays an important role in homeland security. Federal law has charged DHS with coordinating homeland security programs through the application of a risk management framework. DHS, within its National Infrastructure Protection Plan (NIPP), established criteria for risk assessments. 
Risk assessments help decision makers identify and evaluate potential risks so that countermeasures can be designed and implemented to prevent or mitigate the potential effects of the risks. The NIPP characterizes risk assessment as a function of three elements: (1) threat—the likelihood that a particular asset, system, or network will suffer an attack or an incident; (2) vulnerability—the likelihood that a characteristic of, or flaw in, an asset, system, or network's design, location, security posture, process, or operation renders it susceptible to destruction, incapacitation, or exploitation by terrorist or other intentional acts, mechanical failures, and natural hazards; and (3) consequence—the negative effects on public health and safety, the economy, public confidence in institutions, and the functioning of government, both direct and indirect, which can be expected if an asset, system, or network is damaged, destroyed, or disrupted by a terrorist attack, natural disaster, or other incident. Given the concerns raised regarding allegations of fraudulently obtained documents and potential illegal immigration into the United States, performing a risk assessment could better position U.S. agencies to understand the extent of the threat, vulnerabilities, and consequences associated with travelers fraudulently obtaining CIs and using them as identity documents when coming to the rest of the United States from American Samoa. While the majority of travelers to Honolulu, Hawaii, from American Samoa in recent years have traveled with passports, and CBP has no data on known use of fraudulent CIs to travel to the rest of the United States from American Samoa, federal officials have stated that illegal immigration into the United States by aliens using CIs fraudulently obtained in American Samoa is a concern. Moreover, the Department of State is aware of additional allegations concerning aliens who may have fraudulently obtained U.S.
passports and has recently begun a new investigation. According to CBP, illegal immigration compromises national security, as aliens unlawfully gaining entry create a pathway for illegal entry and a demand for false documentation and identities. However, no U.S. agency has performed a risk assessment of the documents used to establish identity and nationality by travelers coming from American Samoa and the impact, in particular, that continued use of the CI as an identification and nationality document may have on American Samoa and the rest of the United States. Such a risk assessment could better position relevant U.S. agencies to understand the extent of threats, vulnerabilities, and consequences associated with the use of CIs, and better inform decisions on which documents should continue to be used for those wishing to travel to the rest of the United States from American Samoa. To better understand the extent and significance of the possible risks associated with aliens in American Samoa fraudulently obtaining documents to travel to the rest of the United States and potentially pursue U.S. citizenship, we recommend that the Secretary of DHS, in consultation with the Secretaries of State and the Interior, perform a risk assessment to (1) determine the extent of the threats, vulnerabilities, and consequences associated with aliens fraudulently obtaining CIs and using them to travel to the rest of the United States from American Samoa; and (2) make a determination as to whether CIs should continue to be an acceptable identification document that establishes nationality for U.S. nationals wishing to travel to the rest of the United States from American Samoa. We requested comments on a draft of this report from DHS, DOI, the Department of State, and the Department of Justice, as well as the American Samoa government—to include the Office of the Governor, leaders of the legislature, the Department of Treasury, and the Department of Legal Affairs.
DOI and the leaders of the American Samoa legislature summarized their comments in letters, which are reprinted in appendixes II and III, respectively. DOJ notified us through e-mail that it had no comments and DHS notified us through e-mail that it concurred with the recommendation. In addition to these responses, U.S. Customs and Border Protection, the Department of State, and the American Samoa Department of Treasury’s Customs Division each provided technical comments, which have been incorporated into the report, as appropriate. The American Samoa government’s Office of the Governor and its Department of Legal Affairs did not provide comments. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Stephen L. Caldwell at (202) 512-8777, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This appendix includes additional details on American Samoa’s immigration program, such as information on visitors and aliens seeking to reside in American Samoa for more than 30 days. In general, American Samoa’s immigration program has many of the same elements as the U.S. immigration program, such as numerical limits on the number of aliens who may enter, preferences for specific categories of aliens based on the grounds of their classification (e.g., family relationship or specific skills), and the ability of aliens to apply for classification as permanent residents. 
However, American Samoa's immigration program also has some unique aspects, such as special allowances for aliens from the Independent State of Samoa (including higher numerical limitations and a guest worker program specifically for Samoans), the broad sponsorship requirements for aliens, and the inability of aliens to naturalize or become U.S. nationals. Visitors may enter American Samoa for tourism or business purposes for up to 30 days on a visitor or business entry permit. Such tourists or business persons must have a valid passport or other travel document and a round-trip ticket to their point of origin or onward passage to a destination beyond American Samoa. Upon approval of the Attorney General or his or her designee, aliens with 30-day permits may stay for an additional 30 days. Aliens may enter and reside in American Samoa based on their relationship with American Samoans (or permanent residents in American Samoa) or for employment reasons. Table 2 below summarizes the classifications and the major characteristics of each classification. Certain aliens may enter and reside in American Samoa based on their relationship with American Samoans or permanent residents in American Samoa. For example, individuals born outside of American Samoa, one of whose parents was born in American Samoa of Samoan ancestry, are considered American Samoan and may obtain an AA classification if those individuals register with the Immigration Board within 3 years of their 18th birthday. Such individuals must have a sponsor. Sponsors must be either an American Samoan or U.S. national who resides in American Samoa or a business licensed in American Samoa, and sponsors are responsible for the alien's medical bills, taxes, and public debts, among other things. Aliens who are immediate relatives of American Samoans may obtain a BA classification in American Samoa. Such relatives include children and spouses of American Samoans and parents of American Samoans at least 21 years of age.
Other aliens who have certain family relationships with American Samoans may request an alien classification in American Samoa. These aliens must have a sponsor, are subject to numerical limitations, and approval for a classification is granted in order of preference. See table 3 for the classifications, in order of preference, with the numerical limitation of aliens from each category per fiscal year. Aliens may also apply to come to American Samoa for employment reasons. For example, aliens who are members of the professions or persons of exceptional ability in the sciences or the arts may apply for a P4 classification, and aliens who are capable of performing skilled or unskilled labor for which there is a shortage of employable and willing people in American Samoa may apply for a P5 classification. Aliens applying for these classifications are subject to numerical limitations and category preferences, as shown in table 3. For P4 aliens, their employer serves as their sponsor, and they may apply to the Immigration Board for permission to transfer sponsorship to another employer. Aliens with the P5 classification are primarily domestic or agricultural workers, and they must reside with and engage in domestic or agricultural work for their sponsors. According to Immigration Board policy, after 1 year, they may apply to the board for authorization for outside employment. American Samoans may sponsor only one P5 alien unless they can demonstrate to the Immigration Board that more than one person is required for domestic work due to the age or infirmity of the sponsor or a member of the sponsor’s family or, with respect to agricultural workers, that the sponsor needs more than one agricultural worker and the sponsor can afford the care of the workers in all ways while they are in American Samoa. There are two employment-based classifications that are not subject to the numerical limitations. 
First, there is a special provision category for aliens who are employed by the American Samoa government, the United States government or who are members of skilled, professional, or specialized labor that by Immigration Board order are waived from the numerical limitations upon a showing of extenuating circumstances. Special provision aliens are sponsored by their employers and they may transfer sponsorship to another employer only with the approval of the Immigration Board. There is also a guest worker program specifically for aliens from Samoa who work for the tuna cannery in American Samoa. Guest workers do not need work authorization and do not have a separate sponsorship requirement, as their classification is tied to their employment, and the cannery is the only eligible employer. Some aliens may also apply for classification as permanent residents. In order to become a permanent resident, a person must either (1) be physically and legally present in American Samoa for a continuous period of at least 20 years and of good moral character; (2) at the time of being legally adopted by an American Samoan be 21 years of age or younger or be legally adopted by an American Samoan prior to December 31, 1980; (3) have been legally married to an American Samoan or a United States citizen and have resided in American Samoa for at least 10 years; or (4) be a brother or sister of an American Samoan or a married son or married daughter of an American Samoan and have resided in American Samoa for at least 10 years. There is a numerical limitation of 50 aliens who may be approved for classification as permanent residents based on the residency requirement alone. There is no numerical limitation for those applying for permanent resident classification based on the other three categories. 
In addition to the contact named above, Christopher Conrad, Assistant Director, and Amy Sheller Martin, Analyst-in-Charge, managed this review, and Michele Lockhart made significant contributions to the work. Jenny Chanley assisted with design and methodology, Emil Friberg and Richard Hung provided additional technical and issue area expertise, Barbara Hills helped to develop the report graphics, Lara Kaskie provided assistance in report preparation, and Tracey King provided legal support and analysis.
American Samoa is a U.S. insular area that operates its customs and immigration programs according to its own laws and independent of the United States. As such, U.S. agencies, such as U.S. Customs and Border Protection, have no role in operating the customs or immigration programs in American Samoa. U.S. officials have raised questions about how American Samoa operates its customs and immigration programs, and whether this introduces any risks to the security of American Samoa or the rest of the United States. GAO was asked to review American Samoa's customs and immigration programs, and this report discusses (1) the operations of American Samoa's customs and immigration programs, and (2) the extent to which U.S. and American Samoa agencies have identified potential risks in American Samoa's customs and immigration programs. GAO reviewed available statutes, regulations, policies, and procedures governing American Samoa and U.S. customs and immigration programs. GAO also visited American Samoa and interviewed U.S. and American Samoan officials to obtain insights. American Samoa operates its own customs and immigration programs, which have separate organizational structures and functions and are based on local laws, regulations, policies, and procedures. Its Customs Division, within the American Samoa Department of Treasury, inspects passengers, baggage, and cargo, and collects excise taxes. The immigration program is administered by the Immigration Office and the Immigration Board, which both report to the American Samoa Attorney General. The Immigration Office is responsible for document issuance, operations, and enforcement, while the Immigration Board holds hearings to decide on issues such as alien work authorization. The Office of the Attorney General is responsible for, among other things, issuing Certificates of Identity (CI), which American Samoans may use to demonstrate their nationality when traveling to the rest of the United States. American Samoa and U.S. 
government agencies report that American Samoa's operations of its customs and immigration programs may pose risks to American Samoa and the rest of the United States, but U.S. agencies have not conducted a risk assessment. Regarding customs, potential risks to American Samoa are lost revenues and the possible aiding of criminal activities. While the Customs Division has written policies and procedures to govern duties and responsibilities, American Samoa and U.S. law enforcement officials are concerned that American Samoa Customs officials have accepted bribes for improperly inspecting containers, which could result in lost tax revenues. American Samoan and U.S. officials have identified no concerns to the rest of the United States from American Samoa's operations of its customs program. Regarding immigration, the principal concern to American Samoa is that current enforcement practices of immigration laws have led to the potential for alien exploitation and human trafficking. The American Samoa legislature is proposing changes that may address these issues, but it is too soon to tell what impact these changes, if passed, will have. U.S. officials state that the potential risk to the rest of the United States from American Samoa's current immigration operations is illegal immigration into the rest of the United States as a result of travelers obtaining false documentation, such as a CI, in American Samoa. While Department of State officials are aware of allegations of illegal immigration from aliens fraudulently obtaining CIs, and are working with law enforcement officials in American Samoa on an ongoing investigation into such allegations, this investigation will address the security of the process for obtaining U.S. passports and will not address the reported vulnerabilities in the process for issuing CIs. U.S. 
agencies have not performed a risk assessment to determine the threat, vulnerabilities, and consequences associated with aliens using false documents to travel to the rest of the United States from American Samoa. Performing a risk assessment could better position U.S. agencies to understand the extent of threats, vulnerabilities, and consequences associated with the use of CIs, and better inform decisions on which documents would be considered acceptable for those wishing to travel to the rest of the United States from American Samoa.
The term microcap security is not defined in the federal securities laws. Microcap securities include penny stocks and generally describe the low-priced securities of companies with market capitalizations of less than $300 million. Prices of microcap securities may be quoted on the NASD over-the-counter (OTC) Bulletin Board, in the National Quotation Bureau’s Pink Sheets, or on the Nasdaq Small Cap Market. Public information on microcap securities is limited; often a small number of broker-dealers dominate trading, making the securities more susceptible to fraud. Microcap fraud is typically associated with “pump and dump” schemes involving high pressure sales tactics designed to induce investors to purchase relatively worthless stocks in which the firm or other insiders hold a large inventory. When successful, these high pressure sales tactics result in an increase in the price of the targeted stock (pump). Insiders then sell (dump) their shares, sometimes realizing large profits at the expense of public investors. A variety of other fraudulent practices are also used as part of these schemes, including “bait and switch” tactics, unauthorized trading, failure to execute sell orders, and excessive markups or price increases. Firms investigated for microcap fraud have typically been owned or controlled by individuals with ties to other firms with a history of stock fraud. The securities markets, of which microcap securities are a part, are regulated by SEC, industry SROs, and state securities regulators. The SROs monitor members, including individuals and firms, for compliance with federal and SRO requirements. Among its responsibilities, SEC inspects SRO compliance programs for adequacy and conducts examinations of broker-dealers, including microcap firms. The states license firms and individuals to operate in their jurisdictions. Many states also conduct on-site examinations of broker-dealers. 
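The dollar thresholds mentioned above, penny stocks generally priced under $5 per share and microcap issuers with market capitalizations under $300 million, can be expressed as a simple classification. The sketch below uses the report's figures; the tier names and function are illustrative, not a regulatory definition:

```python
# Illustrative classifier using the thresholds cited in the report;
# not a legal definition of "penny stock" or "microcap security".

PENNY_PRICE_LIMIT = 5.00          # penny stock rules exclude shares priced at $5 or more
MICROCAP_CAP_LIMIT = 300_000_000  # "microcap" generally means under $300 million market cap

def classify(share_price: float, market_cap: float) -> str:
    if market_cap >= MICROCAP_CAP_LIMIT:
        return "not microcap"
    # A security priced at or just above $5 escapes the penny stock rules even
    # though the issuer is still microcap -- the circumvention the report notes
    # SEC is considering addressing by raising the price threshold.
    if share_price < PENNY_PRICE_LIMIT:
        return "microcap, penny stock"
    return "microcap, above penny stock threshold"
```

Note that the two thresholds are independent: a thinly traded $5.25 stock of a $50 million issuer is microcap, and susceptible to the same manipulation, yet falls outside the penny stock rules.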
To centralize broker licensing and registration, NASD and the North American Securities Administrators Association (NASAA) established CRD in 1981. The database was designed to provide a more efficient licensing and registration process by eliminating redundant state reporting requirements. Operated by NASD Regulation Inc. (NASDR), CRD’s centralized computer system has allowed individual brokers and firms to satisfy both state and NASD reporting requirements. Over the years, however, CRD’s role has expanded to serve several other regulatory functions, such as gathering information for federal, state, and SRO enforcement and examination purposes, including identifying problem brokers or firms. CRD has also become the primary source of information for NASD’s public disclosure program. Among other things, this program provides investors with information on the professional background, business practices, and conduct of NASD member firms and their brokers. The information is available via NASDR’s toll-free telephone information service (hotline) or its Internet web site. To determine the status of SEC and SRO actions on recommendations in our and SEC reports that address issues related to microcap stock fraud, we reviewed SEC, NASD, and New York Stock Exchange (NYSE) documents that report on their respective actions. We also interviewed officials of the SEC Divisions of Enforcement and Market Regulation, SEC Office of Compliance Inspections and Examinations, the Department of Justice, NASDR, NYSE, NASAA, and Securities Industry Association. In addition, we analyzed SEC and SRO data on examinations completed, customer complaints, and disciplinary actions taken from 1992 through 1997. We did our fieldwork between February and July 1998 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from SEC. These comments are discussed at the end of this letter and are reprinted in appendix II. 
SEC and the SROs have taken actions that respond to many of the recommendations in our reports that address issues related to microcap stock fraud. The reports with recommendations that were acted on focused on penny stock fraud; unscrupulous brokers (brokers who have committed a significant breach of sales practice rules or have a history of repeated sales practice violations); and NASD’s toll-free telephone hotline. Our 1993 report on penny stock fraud recommended that SEC require NASD to (1) provide callers using its toll-free telephone hotline with information on final arbitration awards and (2) identify and examine high-risk branch offices of penny stock broker-dealers. NASD’s public disclosure program, which includes its toll-free telephone hotline, now provides information on, among other things, all consumer-initiated arbitrations that are pending or have been settled (for $10,000 or more) and final arbitration decisions that resulted in an award to the customer. NASD district offices also target branch offices for review based on complaints of customers, termination of registered representatives for cause, and transactions in microcap stocks. In addition, NASD is developing an automated risk-based approach to examination scheduling to identify broker-dealers and branch offices for examination. Our 1994 report on unscrupulous brokers recommended that SEC impose a permanent industry bar, with no opportunity for reentry, on certain problem brokers and ensure that CRD includes SRO formal disciplinary actions as well as information on customer complaints and their dispositions. SEC clarified in September 1994 that, absent extraordinary circumstances, persons subject to bars with no provision for readmission to the securities industry would be unable to establish that the public interest was served by allowing their reentry. Also, SEC officials said that they have begun an inspection of NASDR that will review actions taken on reentry applications. 
In addition, the NASD public disclosure program now discloses SRO formal regulatory actions as well as customer complaints and their disposition. Our 1996 report on the NASD hotline recommended that SEC encourage NASDR to (1) publicize its hotline number to more investors, such as by including the number on account-opening documents; (2) provide hotline callers with all relevant CRD disciplinary-related information or, at a minimum, inform them that the information is available from most state regulators; (3) make disciplinary-related information directly available to investors through the Internet; and (4) ensure that the CRD information provided to callers is disclosable and complete. Addressing these recommendations, SEC approved an NASD rule on September 10, 1997, that requires members to provide customers, at least annually, with written information on the NASDR hotline telephone number, its Internet address, and the availability of a brochure describing NASD’s public disclosure program. In addition to informing investors of the availability of information from state regulators, as of March 16, 1998, NASD reports information related to pending and final disciplinary actions, civil judgments, arbitration decisions, pending customer complaints, criminal convictions, settlements of $10,000 or more, and bankruptcies. On the same date, CRD information also became available to the public over the Internet, with disciplinary information available via electronic mail (discussed below). To better ensure the quality of the CRD data disclosed, NASD now requires an additional review prior to data input, has instituted a statistical quality-control process to measure the accuracy of disclosures, and has added a requirement for periodic examination of data by data quality professionals. SEC and the SROs have reported taking actions that respond to many recommendations in two SEC reports that address issues related to microcap stock fraud. 
The first report addressed the sales practice oversight of nine large broker-dealers (the large firm project report). The second report followed up on the first and focused on firms with problem brokers (the sales practice sweep report). The 1994 large firm project report recommended that SEC and the SROs devote additional resources to sales practice examinations and to identifying and prosecuting problem brokers. The large firm report also recommended that (1) the SROs disclose all pending disciplinary actions, (2) NASD require its members to report customer complaints quarterly, (3) SROs enhance their tracking of regulatory filings related to disciplinary actions and terminations and sanction firms for failing to promptly and accurately file required forms, (4) SEC take action to implement uniform policies governing liability for information provided in regulatory filings, and (5) the securities industry adopt a mandatory continuing education requirement. In response to the large firm report recommendations, federal, state, and self-regulators undertook the sales practice sweep. As part of the sweep, 101 small- to medium-size broker-dealers were examined, focusing on the sales practices of selected problem brokers and the hiring and supervisory practices of the firms employing them (discussed further below). As a result of the sweep, 28 firms and 23 brokers were referred for enforcement action. In response to both the large firm project and sweep report recommendations, SEC and the SROs reported increasing their examination focus on sales practices and hiring and supervisory practices. In addition, SEC drafted several new examination modules to further address microcap and penny stock fraud, sales practices, and hiring and supervising problem brokers. SEC consolidated its SRO inspection and broker-dealer examination programs and created a separate office of broker-dealer examination oversight to expand and give greater focus to the broker-dealer examination program. 
As discussed above, NASD is developing an automated risk-based examination system. Furthermore, NYSE added questions and procedures to its examinations and implemented a surveillance system to track brokers with disciplinary histories. Also in response to the large firm report recommendations, SROs are now required to report all formal investigations as well as pending and final disciplinary actions to CRD for public disclosure. In addition, NASD members are required to report certain customer complaints within 10 business days and summary data on written customer complaints each quarter. In August 1994, SEC requested that the SROs closely monitor the timeliness of required filings and increase sanctions when noncompliance is discovered. In April 1998, NASD proposed a rule that would provide members with qualified immunity for statements made in good faith in filings related to disciplinary actions and terminations. In 1995, SEC approved a uniform industry continuing education program that requires periodic training in regulatory matters and ongoing programs by firms to keep brokers up to date. A permanent continuing education council was created to recommend the specific content of the curriculum and to monitor the program. The 1996 SEC sales practice sweep report recommended various voluntary best practices for hiring and supervising problem brokers and follow-up examinations for problem firms. As previously discussed, SEC and the SROs have reported increasing their examination focus on firms with problem brokers, improving the selection of firms for examination, and drafting new examination modules. The new modules should provide SEC a means of gauging adherence to the best practices. Also, in April 1997, NASD and NYSE issued a joint notice to members that encouraged the adoption of the best practices. 
Although adoption of these practices was voluntary, the notice provided guidance on heightened hiring and supervisory procedures for brokers with prior disciplinary histories, customer complaints, or arbitrations and detailed overall member supervisory responsibility under existing rules. The Securities Industry Association adopted a similar set of best practices. Although the actions taken have responded to many of our and SEC report recommendations, for some other recommendations, actions have not been completed. Actions have not been completed on our recommendations related to the migration of unscrupulous brokers from the securities industry; modernization of CRD to allow regulators to more easily monitor brokers with disciplinary histories; and ability of SEC to identify, across firms, trends in violations found during its broker-dealer examinations. Also, SEC’s recommendation that would require disclosures to customers on the availability of broker disciplinary information prior to account activity has not been implemented. Our report on unscrupulous brokers discussed the potential for brokers barred from the securities industry to migrate to other financial services industries, such as banking and insurance. As a result, we recommended that the Secretary of the Treasury work with SEC and other financial regulators to (1) increase disclosure of CRD information so that regulators can consider a broker’s disciplinary history in allocating examination resources and employers can use the information in making hiring decisions and (2) determine whether legislation or additional reciprocal agreements between SEC and other financial regulators are necessary to prevent the migration of unscrupulous brokers to other financial services industries. 
In 1996, the Office of the Comptroller of the Currency, Board of Governors of the Federal Reserve System, and Federal Deposit Insurance Corporation proposed a rule that would have required banks to report the hiring of brokers to CRD and for brokers hired by banks to take NASD qualification examinations. Among other things, the filings required by the proposed rule would have allowed bank regulators to consider a broker’s disciplinary history in allocating examination resources, thereby helping address the migration of unscrupulous brokers to the banking industry. To date, the proposed rule has not been finalized. In June 1998, NASD expressed concern to the banking regulators over bank employees taking the NASD examination. NASD communicated that its examinations would not test for knowledge of bank rules and regulations. NASD recommended that new bank-specific examinations be developed and offered to assist in developing them. No further agreements on this point have been reached. In the absence of a rule change, no reporting has occurred and thus no record exists of the movement of unscrupulous securities brokers to banks or unregistered affiliates of banks. Completion of CRD’s modernization (discussed further below) will not ensure effective surveillance of the migration of these brokers to banks because banks are not required to report to CRD. Recommendations in our unscrupulous broker and hotline reports addressed the need to modernize CRD to allow regulators to more easily monitor brokers with disciplinary histories and to improve public access to broker disciplinary information, including Internet access. Although many actions have been taken on these recommendations, including limited Internet access, the CRD modernization is not yet complete. As previously discussed, CRD’s original role as the securities industry’s centralized licensing and registration system has expanded and now includes numerous other regulatory and disclosure functions. 
CRD’s original technology, however, was not able to accommodate this expansion. As a result, NASD began a redesign of CRD in 1992. This redesign has taken longer than expected, partly because of a switch in 1997 to technology that would allow Internet access. When completed, regulatory components of the new system will provide (1) automatic reports to regulators when certain predefined events occur, such as multiple customer complaints against a broker; (2) greater detection of late, deficient, or missed report filings; and (3) customized analytical capabilities to help regulators identify industry compliance trends, including those associated with specific problem brokers and firms. Also when completed, investors will be able to view broker and firm disciplinary information while on line. Currently, this information can be requested on line, but the response is provided via electronic mail. Full implementation of all system improvements, including enhanced regulatory functions and full Internet access, is scheduled for late 1999. Our 1991 report on SEC oversight of industry sales practices recommended that the agency explore ways to record and maintain information on the number of each type of violation found during on-site examinations of broker-dealers and, as one option, include this information in its examination tracking system, called the Examination Activity Tracking System (EATS). The intent of our recommendation was to address SEC’s inability to identify, across firms, trends in violations found during examinations that could warrant greater regulatory attention. Having such a capability would enhance the agency’s ability to more efficiently and effectively target its resources. According to SEC, its planned replacement of EATS with the Super Tracking and Reporting System (STARS) will allow headquarters and regional office staff to identify and analyze trends in violations. 
For example, SEC staff said they will be able to query the system to determine the number of firms within a state or across the United States that have been cited for specific violations, such as those related to books and records violations or specific types of fraudulent conduct. Also, according to SEC, in order to gather more information about the significance and extent of violations found in examinations, the full text of all reports will be stored on a computerized system, called Zyindex, that will enable staff to search all reports using key words and to compile an analysis of the information. SEC staff told us that implementation of STARS and Zyindex is scheduled to begin in the fall of 1998. If implemented as described, taken together, these enhanced capabilities would be consistent with our recommendation that SEC be able to analyze, across firms, trends in violations found during its examinations of broker-dealers. SEC’s 1994 large firm report recommended that information on the availability of a broker’s disciplinary history via NASD’s toll-free hotline be disclosed to investors before any activity occurs in their accounts. Our hotline report suggested this information could be included on account-opening documents or account statements. Investors could use such information to protect themselves against unscrupulous brokers. In 1992, SEC’s penny stock rules had been amended to require that information on the availability of a broker’s disciplinary history via NASD’s toll-free hotline be provided to a customer before effecting any penny stock transaction with the customer. However, this rule did not cover nonpenny stock transactions (i.e., securities priced at $5 or more). 
On September 10, 1997, SEC approved NASD Rule 2280, which required NASD members to provide information on the availability of broker disciplinary information to customers in writing at least annually, along with the Internet web site address of the NASD public disclosure program and a statement regarding the availability of an investor brochure describing the program. However, the NASD rule does not require that the information be provided before activity occurs in an account or at account opening. Since the issuance of the SEC report, numerous additional efforts have been made to educate the public on the availability of information on a broker’s disciplinary history through NASD’s toll-free hotline and web site as well as through other information on how to invest safely. These efforts have included information made available through federal, state, and SRO Internet web sites, free publications on investing, and SEC town meetings for investors. NASD also stated that it includes the toll-free hotline number and web-site address on every disciplinary action press release and has publicized them in a multilingual radio and television public service announcement campaign, in investor fairs and seminars, and in conjunction with investor and other associations. As a result, access to this information is now more readily available and widely disseminated. Nonetheless, we believe SEC’s initial rationale for recommending that information on a broker’s disciplinary history be available to investors before any activity occurs in their accounts remains valid. SEC and the SROs have taken actions that respond to many of our and SEC report recommendations. These actions have improved the availability of registration and disciplinary information on brokers and firms, branch office audit selection, and availability and analysis of customer complaints, which should enhance regulatory oversight and investor protection. 
We continue to support the need to implement prior recommendations related to the migration of unscrupulous brokers, completion of CRD modernization, ability of SEC to identify trends in violations across firms, and disclosure of the availability of broker disciplinary information before account activity. Full implementation of these recommendations should further enhance regulatory oversight and investor protection. Written comments from SEC on a draft of this report are contained in appendix II. SEC and NASD also provided technical comments on the draft report, which were incorporated as appropriate. SEC said that most of SEC and our recommendations have been implemented and focused its comments on the four recommendations where actions have not been completed. SEC commented that it expects to continue working with NASD to complete the CRD upgrade. It also commented that planned enhancements to its examination tracking capabilities will enable it to identify trends in violations. If implemented as described, these enhanced capabilities would be consistent with the intent of our recommendation that SEC be able to analyze, across firms, trends in violations found during its broker-dealer examinations that could warrant greater regulatory attention. In addition, SEC explained NASD’s concerns about the banking regulators’ proposed rule on the migration of unscrupulous brokers. Finally, regarding SEC’s recommendation to disclose the availability of broker disciplinary information prior to any account activity, the agency commented that the availability of this information is now widely publicized to investors for their use before opening an account and committing to buy or sell securities. 
Although we agree that this publicity is valuable, we also believe that SEC’s original recommendation to require the disclosure of the availability of this information directly to individual investors when they are about to open an account would provide the information to the investor when it is of immediate use. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 5 days after the date of issuance. At that time, we will send copies of this report to the Chairman, SEC and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-8678 or Cecile O. Trop, Assistant Director, at (312) 220-7600 if you or your staff have any questions. Major contributors to this report are listed in appendix III. On February 17, 1998, the Securities and Exchange Commission (SEC) adopted amendments to Regulation S that are designed to prevent abuses related to offshore offerings of equity securities of domestic issuers. Regulation S provides a safe harbor from the registration requirements of the Securities Act of 1933 for offers and sales of securities by both foreign and domestic issuers that are made outside the United States. In abusing this safe harbor, issuers have illegally distributed securities in the United States and, in doing so, have denied investors the protections provided by registration under the Securities Act. Under the amendments, equity securities of U.S. issuers that are sold offshore under Regulation S would be classified as “restricted securities” within the meaning of Rule 144 under the Securities Act, and the period during which these securities cannot be distributed in the United States would be lengthened from 40 days to 1 year. SEC also adopted amendments that would affect applicable reporting requirements along with other amendments intended to prevent further abuses of Regulation S. 
On May 21, 1998, SEC proposed amendments to Rule 504 of Regulation D that would require all securities issued under the rule to be “restricted securities” and would allow their resale only after certain criteria were met. Rule 504 allows companies to raise up to $1 million per year in “seed capital” without complying with Securities Act registration requirements. SEC is concerned that the freely tradable nature of securities issued in Rule 504 offerings may have facilitated a number of fraudulent market manipulations through the over-the-counter (OTC) Bulletin Board and National Quotation Bureau’s Pink Sheets. The comment period on these amendments expired on July 27, 1998. On February 17, 1998, SEC proposed amendments to Rule 15c2-11 that would require all broker-dealers to review issuer information before publishing quotations on non-Nasdaq OTC securities and require broker-dealers quoting a price to annually review updated issuer information. Rule 15c2-11 currently operates so that just the first market maker in these stocks is required to review basic issuer information before publishing quotations for that issuer’s securities. Other market makers may “piggyback” on the first market maker’s quotes and publish quotes after 30 days without reviewing issuer information. Retail brokers “hyping” a microcap security may refer to a market maker’s quotation when marketing a security to a potential customer. SEC is concerned that most market makers for unlisted securities publish quotations without reviewing current financial and other information on the issuer. The comment period for these amendments expired on April 27, 1998. On February 17, 1998, SEC proposed amendments to Form S-8 and related rules that would restrict the use of the form for the sale of securities to consultants and advisors. Form S-8 is the short-form registration statement for offers and sales of a company’s securities to its employees, including consultants and advisors. 
The amendments are designed to deter misuse of this form, whether to avoid the Securities Act requirements that apply to securities sales to nonemployees or to issue securities as compensation to stock promoters. Other proposed amendments would permit Form S-8 to be used by employees’ family members for the exercise of stock options that employees give as gifts to their families. The comment period for these amendments expired on April 27, 1998. Currently, the definition of “penny stock” excludes securities that, among other things, are priced at $5 or more per share. However, SEC believes that some broker-dealers have circumvented the rules by pricing securities above the $5 threshold. The SEC Chairman has testified that SEC is considering whether to recommend changing the penny stock rules to raise the price threshold to cover the types of securities that might be involved in microcap stock fraud. Under a plan being discussed by the securities industry and SEC, the National Securities Clearing Corporation (NSCC) would consolidate a variety of data received from clearing firms, SROs, and other sources. NSCC would use these data to identify suspicious activity by broker-dealers (introducing brokers (IB) and others), and this information would be made available to regulators. On September 10, 1997, SEC approved NASD Rule 2280, effective January 1, 1998, which requires NASD members that carry customer accounts to provide customers in writing, at least once each calendar year, the NASD public disclosure program hotline number and web site address as well as a statement regarding the availability of an investor brochure describing the public disclosure program. On December 2, 1996, SEC approved NASD Rule 2211, effective December 2, 1996, which imposes time restrictions and disclosure requirements on telephone calls to customers by NASD members and their associated persons. 
On September 8, 1995, SEC approved NASD Rule 3070 for reporting customer complaint information and other specified events to NASDR. The rule, which became effective on October 15, 1995, requires that NASD members report to NASDR if any of 10 specified events occur and that they provide quarterly summary statistical information on written customer complaints. On June 9, 1995, SEC approved NASD Rule 3110(g), the “cold calling” rule, effective that same day. Consistent with Federal Communications Commission rules promulgated under the Telephone Consumer Protection Act, the rule requires telemarketers to establish and maintain a list of persons who have requested that they not be contacted by the telemarketer (a do-not-call list). On February 8, 1995, SEC approved NASD’s membership and registration Rule 1120 to implement the Securities Industry Continuing Education Program, which became effective July 1, 1995. The rule required all registered persons to take computer-based training within 120 days of their second, fifth, and tenth registration anniversaries. Effective July 1, 1998, this training is required every 3 years. On April 17, 1998, SEC approved an NASD amendment to Rule 3010 that requires an NASD member firm to tape-record conversations between customers and registered representatives if it has hired a significant percentage of individuals (dependent on firm size) from disciplined firms. On January 20, 1998, SEC approved an amendment to Interpretive Memo 8310-2 (Release of Disciplinary Information), which allowed for the release of additional disciplinary information that is required to be disclosed pursuant to amended forms U-4, U-5, and BD, including, but not limited to, (1) customer-initiated arbitrations that are pending or settled (for $10,000 or more), (2) civil proceedings and written customer complaints (within certain dollar limits), (3) current investigations involving criminal or regulatory matters, and (4) bankruptcies less than 10 years old. 
The rule also was amended to allow NASDR to respond to electronic requests for information. Amendments in 1993 allowed for the release of pending formal SRO disciplinary actions, criminal indictments, civil judgments, and final judgments in arbitration decisions. In July 1998, NASD filed proposed Rules 2315 and 2360, which would, respectively, require NASD members to (1) review current issuer financial statements prior to recommending a transaction to a customer in an unlisted equity security and (2) provide a disclosure statement to a customer on each customer’s confirmation following any trade of an unlisted equity security. SEC has not yet published the proposals for public comment. In July 1998, NASD filed an amendment to Rule 6530 that would limit quotations on the OTC Bulletin Board to the securities of issuers that are current in the reports they must file with SEC or other regulatory authorities, along with a companion Rule 6540 that would prohibit an NASD member from quoting a security on the OTC Bulletin Board unless the issuer’s filings are current. SEC has not yet published the proposals for public comment. In July 1998, NASD filed a proposed interpretation to Rule 3010 and an amendment to Rule 1060 that would limit the kinds of cold calls that may be made by unregistered persons and impose obligations on member firms to supervise these employees. However, in its description of the proposal, NASD states that “the proposed rule change would permit members to use third-party telemarketing firms” to make cold calls on behalf of the member firm. SEC has not yet published the proposal for public comment. On April 21, 1998, NASD filed proposed Rule 1150 with SEC to provide NASD members with qualified immunity in arbitration proceedings for statements made in good faith in certain required disclosures filed with NASD on forms U-4 and U-5, the uniform registration and termination notices for registered persons, respectively. The comment period expired on June 19, 1998. 
On November 21, 1997, SEC published for comment a proposed change to NASD Rule 3230 that would require clearing firms to (1) forward customer complaints about an IB to the IB and the IB’s designated examining authority, (2) notify complaining customers that they have the right to transfer their accounts to another broker-dealer, (3) provide IBs with a list of exception reports to help them supervise their activities, and (4) assume liability for any mistakes or fraud made by an IB that issues checks drawn on the clearing firm’s account. The comment period expired on December 22, 1997. NYSE proposed an amendment to Rule 382 on September 16, 1997, that is similar to proposed NASD Rule 3230. The comment period has expired. The following is GAO’s comment on the August 28, 1998, letter from the Securities and Exchange Commission. Based on discussions with SEC staff to further clarify the intent of our recommendation and the capabilities of SEC’s planned system enhancements, we revised the text of the report. The report now recognizes that, if implemented as described, SEC’s planned enhancements to its systems capabilities would be consistent with the intent of our recommendation. The intent of our recommendation was that SEC be capable of identifying, across firms, trends in violations found during its broker-dealer examinations that could warrant greater regulatory attention. Rosemary Healy, Senior Attorney
Pursuant to a congressional request, GAO provided information on the actions taken by the Securities and Exchange Commission (SEC) and the self-regulatory organizations (SRO) in response to GAO's and SEC's recommendations to reduce microcap fraud. GAO noted that: (1) SEC and the SROs have taken, or reported taking, actions that respond to many of the recommendations in GAO's and SEC reports that address issues related to microcap stock fraud; (2) in responding to these recommendations, actions have been taken to: (a) expand the disclosure of and public access to broker disciplinary information; (b) improve National Association of Securities Dealers branch office examination selection; (c) provide more focused sales practices examinations; (d) improve compliance with industry reporting requirements; and (e) implement a continuing professional education requirement for broker-dealers; (3) these actions should enhance regulatory oversight of microcap stock firms and help provide investors with additional protections against abusive practices by such firms; (4) actions have not been completed that would respond to other recommendations related to the: (a) migration of unscrupulous brokers from the securities industry to other financial services industries; (b) modernization of the central registration database to improve oversight of problem brokers and public access to broker disciplinary histories; (c) ability of SEC to identify, across firms, trends in violations found during its broker-dealer examinations; and (d) provision of information on the availability of broker disciplinary histories before activity occurs in an account; and (5) completing actions on these recommendations would further enhance regulatory oversight and investor protection.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. Since federal agencies began implementing GPRA, they have provided new and valuable information on their plans, goals, and strategies. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies’ major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the President’s budget, provide a direct linkage between an agency’s longer-term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. The mission of the Department of Defense is to support and defend the Constitution of the United States; provide for the common defense of the nation, its citizens, and its allies; and protect and advance U.S. interests around the world. Defense operations involve over $1 trillion in assets, budget authority of about $310 billion annually, and about 3 million military and civilian employees. Directing these operations represents one of the largest management challenges within the federal government. 
This section discusses our analysis of DOD’s progress in achieving outcomes and the strategies that DOD has in place, particularly human capital and information technology, for accomplishing these outcomes. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which DOD provided assurance that the performance information it is reporting is credible. In general, the extent to which DOD has made progress in achieving the six outcomes is unclear. In our opinion, one of the reasons for the lack of clarity is that most of the selected program outcomes DOD is striving to achieve are complex and interrelated and may require a number of years to accomplish. This condition is similar to what we reported last year on our analysis of DOD’s fiscal year 1999 performance report and fiscal year 2001 performance plan. Further, with the new administration, DOD is undergoing a major review of its military strategy and business operations, which may result in changes to the way DOD reports performance information. The extent to which the Department has made progress toward the outcome of maintaining U.S. technological superiority in key war-fighting capabilities is difficult to assess. DOD’s performance goal for this outcome is to transform U.S. military forces for the future. As we reported last year, some of the performance goal’s underlying measures—such as procurement spending and defense technology objectives—do not provide a direct link toward meeting the goal, thus making it difficult to assess progress. DOD’s performance report does not reflect concerns raised within the Department about the adequacy of its strategy and institutional processes for transforming forces. We noted in a prior report that a transformation strategy is presented in the former Secretary of Defense’s 2001 Annual Report to the President and the Congress. 
However, the strategy does not clearly identify priorities or include an implementation plan and outcome-related metrics that can be used to effectively guide the transformation of U.S. forces and assess progress. This topic is currently being reviewed by the new administration. As we reported, a 1999 Defense Science Board study had recognized the need and called for such an explicit strategy, or master plan; a roadmap; and outcome-related metrics to assess progress. Also, a joint military service working group identified a need for a comprehensive strategy as an issue that the 2001 Quadrennial Defense Review must address. Further, the Defense Science Board, Joint Staff and unified command officials, joint military service working group, and others raised concerns about the ability of DOD’s current institutional processes to turn the results of transformation initiatives into fielded capabilities in a timely manner. These processes—which include DOD’s planning, programming, and budgeting system and weapons acquisition system—focus on near- or mid-term requirements and do not foster the timely introduction of new technologies to operational forces. For each of the supporting performance measures, DOD’s report describes data collection and verification measures. However, our work in this area has not addressed the reliability of DOD’s data. Thus, we are unable to comment on the extent to which the reported performance information is accurate. DOD’s performance measures do not adequately indicate its progress toward achieving the outcome of ensuring that U.S. military forces are adequate in number, well qualified, and highly motivated. Therefore, we cannot judge the level of progress DOD has made in this area. DOD’s performance goal for this outcome is to recruit, retain, and develop personnel to maintain a highly skilled and motivated force capable of meeting tomorrow’s challenges. 
DOD’s performance measures still do not fully measure how well DOD has progressed in developing military personnel or the extent to which U.S. military forces are highly motivated. Although DOD’s report identifies specific goals for recruiting and retention, the Department does not include human capital goals and measures aimed specifically at tracking the motivation or development of its personnel. The level of progress toward meeting specific targets in the areas of enlisted recruiting and retention is mixed. The Air Force failed to meet its targets for first- or second-term retention, and the Navy did not meet its target for first-term retention. While most reserve components met or came in under their targets for enlisted attrition, the Army Reserve did not stay within its attrition target. On the positive side, the services met or exceeded their targets for enlisted recruiting and recruit quality. However, DOD’s report showed that the target for active enlisted recruiting was revised downward, enabling DOD to meet a goal it might otherwise have been unable to achieve. If such adjustments become commonplace, the same kind of force shaping problems that resulted from the intentional restriction of new accessions during the 1990s drawdown could result. Still other targets, such as for enlisted retention, are set at such aggregate levels that they could mask variations in retention by occupational area and skill levels, which would limit achieving the outcome of ensuring that U.S. military forces are adequate in number, well qualified, and highly motivated. As such, the enlisted retention goal provides only a partial measure of the military’s ability to retain adequate numbers of personnel. DOD’s performance report realistically identified the likelihood of continued challenges in recruiting for the military services and in retention for the Navy and the Air Force. 
But it did not devote significant attention to identifying specific reasons why DOD missed certain targets. Likewise, with the exception of the enlisted recruiting area, the report did not identify specific planned actions that DOD or the services will take to assist them in meeting future performance targets. For enlisted recruiting, however, the services identified several actions to help them cope with this challenge. For example, the Army and the Navy have increased funding for recruiting and plan to offer enlistment bonuses of up to $20,000. They also plan to continue allowing recruits to choose a combination of college fund and enlistment bonuses. The Army plans to experiment with innovative ways to expand the market for new recruits through programs like College First and GED Plus. And, the Air Force has instituted a college loan repayment program, increased enlistment bonuses to $12,000, and added more recruiters. With regard to retention, the Department’s performance report discusses generally the difficulties of the current retention environment and the fiscal year 2000 enlisted retention challenges. However, the report contains little clear articulation of specific actions or strategies being taken to improve future retention. For example, the report noted that the Navy has established a Center for Career Development that is chartered to focus on retention, providing the fleet the necessary tools to retain Navy personnel. However, the performance report does not elaborate on what those tools are or how they are being enhanced. Similarly, the Air Force indicated that it held two retention summits in fiscal year 2000 and that initiatives resulting from those summits will facilitate achievement of fiscal year 2001 retention targets. However, the report does not cite specific initiatives that would be taken or when they would be put into place. 
DOD expects that fiscal year 2001 will continue to present retention challenges for the services’ reserve components. The report, however, did not identify any specific actions or initiatives that would be taken to help address the challenge. Finally, for each of its performance measures, DOD’s report describes the data flow used to produce DOD’s assessment. The procedures used to collect, verify, and validate the data cited in the report provide reasonable assurance that the information is accurate and reliable. The level of progress that DOD has made toward the outcome of maintaining combat readiness at desired levels is unclear. DOD’s performance goals for this outcome are to maintain trained and ready forces and have strategic mobility. Although DOD has met some performance measure targets for both goals, other targets are incomplete, have been lowered, or have not been met, thus making an accurate assessment of progress difficult. For example, DOD reported meeting its force-level targets for the performance goal of maintaining trained and ready forces. However, the targets do not provide a complete picture of the forces needed to respond to a full spectrum of crises, to include fighting and winning two major theater wars nearly simultaneously. DOD’s metric includes only combat forces for each service, and not the necessary support forces. In the Army’s case, this means that DOD’s metric captures only 239,000 of the 725,000 forces the Army projects it would deploy to two wars. The targets also do not capture other important attributes beyond the size of the force, such as the extent to which DOD has made the best possible use of its available resources. For example, DOD’s plan does not set results-oriented goals for integrating the capabilities of the active, National Guard, and Reserve forces—even though each of these components is essential for mission effectiveness. 
As another example, DOD still has not been able to achieve its tank-mile training target of 800 miles of training per tank, conducted at various home stations, in Kuwait, and in Bosnia. Although DOD came closer to meeting the target in fiscal year 2000 than it did in fiscal year 1999—101 (17 percent) more tank miles—it still fell short by nearly 100 training miles per tank. DOD reported that it failed to meet the targets because units were not available for training, units used training simulators instead of actual training, and resources were diverted from field exercises to other high priority needs such as upgrades and maintenance of key training ranges. While our recent work shows this to be true, we reported that the movement of training funds for other purposes had not resulted in the delay or cancellation of planned training events in recent years. Further, data are not as reliable as they could be. DOD and the Army define the 800 tank-mile measure differently. DOD’s definition includes tank-training miles conducted in Kuwait and Bosnia, while the Army’s home station training measure excludes those miles. Using the Army’s home station training measure, it conducted 655 miles of training in fiscal year 2000, which is 145 miles or 18 percent short of its budgeted home station training goal. Figure 1 compares budgeted and actual Army home station tank training miles from fiscal year 1997 to fiscal year 2000. For strategic mobility, DOD reported that it met targets for two of three underlying measures: airlift capacity, and land- and sea-based prepositioning. However, in the area of airlift capacity, DOD revised the performance targets downward from those that had been set in prior performance plans and last year’s performance report. DOD reported that it revised the new targets to reflect updates to the planning factors for C-5 aircraft wartime performance. 
While it is appropriate for DOD to revise targets, as necessary, we reported that the new targets are significantly less than goals established in a 1995 Mobility Requirements Study Bottom-Up Review Update and even lower than a newly established total airlift capacity requirement of 54.5 million-ton miles per day established in DOD’s Mobility Requirements Study 2005, issued in January 2001. DOD’s performance report contains targets of a total airlift capacity of 45.4 million-ton miles per day for military aircraft and the Civil Reserve Air Fleet, with 24.9 million-ton miles per day coming from military aircraft. By comparison, DOD’s airlift capacity requirements are about 50 million-ton miles per day for total airlift capacity, with nearly 30 million-ton miles per day coming from the military. DOD’s performance report does not explain how these new targets were set or how they differed from prior years’ targets. It is also unclear whether or how DOD intends to meet the higher requirement of 54.5 million-ton miles per day. Because DOD reported that it had met its force-level targets, it plans no significant changes or strategies in force structure for fiscal year 2001. However, we believe that force-level targets could be more complete and meaningful if they included associated support forces with existing combat unit force levels. For example, the lack of any target setting for Army support forces masks the Army’s historic problem in fully resourcing its support force requirements, as well as more recent steps the Army has taken to reduce its shortfall level. With respect to tank training strategies, in response to our recent recommendation, DOD agreed to develop consistent tank training performance targets and reports to provide the Congress with a clearer understanding of tank training. 
Also, DOD has initiated a strategy to more clearly portray the number of tank training miles driven, and the Department is moving toward becoming more consistent with the Army’s 800 tank-mile measure. However, as stated above, DOD continues to include tank-training miles conducted in Kuwait in its definition of the measure, while the Army excludes those miles. DOD reports that the problems encountered in meeting fiscal year 2000 tank training objectives are not, for the most part, expected to recur in fiscal year 2001. However, the problems DOD describes are not unique to fiscal year 2000. Army units are now in their sixth year of deployments to the Balkans, which, as DOD stated, affects their training availability. Further, in at least 6 of the past 8 fiscal years (1993 through 2000), DOD has moved funds from division training for other purposes. For the most recent of those years—the 4-year period from fiscal years 1997 through 2000—DOD moved a total of almost $1 billion of the funds the Congress had provided for training. DOD reports that an Army management initiative implemented in fiscal year 2001 will limit the reallocation of funds. However, at the time of our work, it was too early in the fiscal year to assess the initiative’s success. Further, DOD has identified strategies for strategic airlift improvement, such as a C-17 aircraft procurement program to provide additional airlift capacity and upgrades of C-5 aircraft components. We recently reported that the C-5 upgrades, however, were fiscal year 2000 proposals that are waiting to be funded in the 2001-2012 timeframe. Thus, in the near term, this strategy would not likely result in significant increases in capacity. For each of its performance measures, DOD’s fiscal year 2000 performance report discussed the source and review process for the performance information. With one exception involving DOD’s En Route System of 13 overseas airfields, DOD’s data appear to be reasonably accurate. 
The En Route System is a critical part of DOD’s ability to quickly move the large amounts of personnel and equipment needed to win two nearly simultaneous major theater wars, as required by the National Military Strategy. However, DOD’s performance report excludes data on En Route System limitations from the measures it uses to assess performance in strategic mobility, resulting in an incomplete picture of its capabilities. Rapid mobilization of U.S. forces for major theater wars requires a global system of integrated airlift and sealift resources, as well as equipment already stored overseas. The airlift resources include contracted civilian and military cargo aircraft and the 13 En Route System airfields in Europe and the Pacific where these aircraft can land and be serviced on their way to or while in the expected war zones in the Middle East and Korea. We learned during a recent review of the En Route System that DOD includes measures of its performance in meeting goals for aircraft, sealift, and prepositioned equipment capacities in its measures of strategic mobility capability. However, it does not include data on shortfalls in En Route System capacity, which are a major limiting factor on airlift capacity and overall performance in strategic mobility. Officials from the Office of the Secretary of Defense told us that they do not include data on En Route System shortfalls because airfield capacity has not been considered a primary criterion for measuring performance in strategic mobility. However, DOD has reported that the chief limiting factor on deployment operations is not usually the number of available aircraft but the capability of en-route or destination infrastructure to handle the ground operations needed by the aircraft. In a recently issued report, we recommended that DOD begin to include information on En Route System limitations and their effects on strategic mobility in its performance reports. 
DOD’s progress toward achieving the outcome of ensuring that infrastructure and operating procedures are more efficient and cost-effective remains unclear. The performance goals for this outcome are to streamline infrastructure through business practice reform and improve the acquisition process. DOD reported that it met many of its performance targets, such as disposing of property, reducing logistics response time, and streamlining the acquisition workforce. However, as we reported last year, the targets did not always hold up to scrutiny, and some targets that DOD reported as met had been lowered or were not met. For example, while DOD has reported meeting its targets for public-private competitions, we have found that delays in initiating and completing planned studies could reduce the savings expected to be realized in the near term. Additionally, changes have been made in overall study goals, creating some uncertainties about future program direction. For example, the Department recently reduced its plan to study 203,000 positions under Office of Management and Budget (OMB) Circular A-76 to about 160,000 positions while supplementing it with a plan to study 120,000 positions under a broader approach known as strategic sourcing. Similarly, DOD reported that it had met its 99-month target cycle time for average major defense acquisition programs. However, compared to fiscal year 1999 results, the average cycle time actually increased by 2 months. We have reported numerous examples of questionable defense program schedules, such as with Army Comanche helicopter program delays. In this regard, our work has shown that DOD could benefit from the application of commercial best practices to ensure that (1) key technologies are mature before they are included in weapon system development programs, (2) limits are set for program development cycle times, and (3) decisions are made using a knowledge-based approach. 
As another example, DOD reported that it did not meet its cost growth measure. On average, reported costs rose in major defense acquisition programs by 2.9 percent during fiscal year 2000 compared to the goal of 1.0 percent. DOD explains the causes for the excessive cost growth but not the strategies to solve the problem. We have reported pervasive problems regarding, among other things, unrealistic cost, schedule, and performance estimates; unreliable data on actual costs; and questionable program affordability. Also, we have recommended that DOD leadership improve the acquisition of weapon systems by using more realistic assumptions in developing system cost, schedule, and performance requirements and approving only those programs that can be fully executed within reasonable expectations of future funding. DOD’s fiscal year 2000 performance report sufficiently explains why a number of performance measures were not met but does not provide clear plans, actions, and time frames for achieving them. For example, DOD reported that no systemic problems would hinder it from meeting working capital fund and defense transportation documentation targets in the future. However, DOD believes it may have difficulty meeting supply inventory goals due to continuing concerns about the impact of inventory reductions on readiness. In the report, DOD acknowledges that it may have problems meeting some targets because it must balance its infrastructure reduction initiatives with efforts to enhance quality of life, improve recruiting and retention, and transform the military to meet the challenges of the 21st century. For each of its performance measures, DOD’s report discusses the source and review process for the performance information. The data appear to be credible, with some exceptions. 
For example, we previously reported that unreliable cost and budget information related to DOD’s measure for the percentage of the budget spent on infrastructure negatively affects the Department’s ability to effectively measure performance and reduce costs. We also reported that significant problems exist with the timeliness and accuracy of the underlying data for the measure related to inventory visibility and accessibility. We could not assess DOD’s progress in achieving the outcome of reducing the availability and use of illegal drugs because DOD’s fiscal year 2000 performance report did not include performance goals or measures for this outcome. DOD does, however, assist U.S. and foreign law enforcement agencies in their efforts to reduce the availability and use of illegal drugs. It has lead responsibility for aerial and maritime detection and monitoring of illegal drug shipments to the United States. It also provides assistance and training to foreign governments to combat drug-trafficking activities. DOD’s fiscal year 2000 performance report recognized counternarcotics as a crosscutting function and outlined DOD’s responsibilities in this area. In a December 1999 report on DOD’s drug control program, we recommended that DOD develop performance measures to determine the effectiveness of its counterdrug activities and make better use of limited resources. In response to our recommendation, DOD developed a set of “performance results” that are compiled on a quarterly basis. These performance results are intended to (1) provide a useful picture of the performance results of individual projects, (2) facilitate the identification of projects that are not demonstrating adequate results, (3) allow an overall assessment of DOD’s counterdrug program’s results, and (4) describe those DOD accomplishments that directly support the performance goals delineated in the National Drug Control Strategy’s Performance Measures of Effectiveness Plan.
DOD is currently refining the performance results in an effort to improve its ability to measure the success or failure of counterdrug activities. We had no basis to assess DOD’s progress in achieving the outcome of making fewer erroneous payments to contractors because DOD had no performance goals directly related to the outcome. However, this issue represents a significant problem for DOD. Under its broader goal of improving the efficiency of its acquisition processes, DOD has developed performance measures that address related contracting issues. Specifically, the fiscal year 2000 performance report contains goals and measures for increasing the use of paperless transactions. While these measures quantify the level of usage of paperless contracting processes, they do not directly address whether the number of erroneous payments has been reduced. On a related issue, we have reported over the last several years that DOD annually overpaid its contractors by hundreds of millions of dollars. In February of this year, we reported that DOD contractors repaid $901 million in overpayments in fiscal year 2000 to a major DOD contract payment center. This represents a substantial amount of cash in the hands of contractors beyond what is intended to finance and pay for the goods and services DOD bought. For comparison, contractors returned $351 million in overpayments in fiscal year 1999 to this DOD payment center. Contractor data indicate that 77 percent of that amount resulted from contract administration actions (see fig. 2). However, DOD does not review available data on why this major category of overpayments occurs. Such a review is necessary if excess payments are to be reduced.
Therefore, in our February 2001 report, we recommended that DOD routinely analyze data on the reasons for excess payments, investigate problem areas, and implement necessary actions to reduce excess payments. In responding to our recommendation, DOD stated that it would conduct an initial review of excess payment data and determine whether routine receipt and analysis of this data would be meaningful. In comparing DOD’s fiscal year 2000 performance report with its prior year report, we noted that DOD has made several improvements. For example, it added more discussion on the importance of human resources in achieving its performance objectives; summarized how its performance metrics responded to each of eight major management challenges it faces; and included a more in-depth explanation of each cross-cutting activity it is involved with, rather than just a listing of the responsible agencies. The eight major management challenges facing the Department are:
Developing strategic plans that lead to desired mission outcomes.
Hiring, supporting, and retaining military and civilian personnel with the skills to meet mission needs.
Establishing financial management operations that provide reliable information and foster accountability.
Effectively managing information technology investments.
Reforming acquisition processes while meeting military needs.
Improving processes and controls to reduce contract risk.
Creating an efficient and responsive support infrastructure.
Providing logistics support that is economical and responsive.
In terms of data verification, presentation, and content, DOD’s fiscal year 2000 performance report has an effective format that is understandable to a nondefense reader. DOD also clarified some of its terminology. For example, it changed the term “performance goal” to “performance target” to remove confusion about what the annual performance goals are.
The fiscal year 2000 report, however, did not address several weaknesses that we identified in the fiscal year 1999 report. For example, DOD reported nine measures and indicators for making infrastructure and operating procedures more efficient and cost-effective. We believe that these measures are insufficient to assess whether DOD is actually making progress toward streamlining its infrastructure. Some measures, such as the number of positions subject to OMB Circular A-76 or strategic sourcing reviews, generally reflect status information rather than the impact that programs are having on the efficiency and cost-effectiveness of operations. Since DOD has not changed or supplemented these measures, we continue to believe that DOD will have problems determining how effective its infrastructure reduction efforts have been. Also, we have testified that DOD has undergone a significant downsizing of its civilian workforce. In part due to the staffing reductions already made, imbalances appear to be developing in the age distribution of DOD civilian staff. The average age of this staff has been increasing, while the proportion of younger staff, who are the pipeline of future agency talent and leadership, has been dropping. As another example, DOD’s performance report has no outcome-oriented measures for working capital fund activities. The idea behind working capital funds is for activities to break even over time. Thus, if an activity has a positive net operating result one year, it will budget for a negative net operating result the next year. The measure DOD currently uses to assess its working capital fund operations is net operating results. This particular measure, however, is of little value for determining the outputs achieved for goods and services provided through the working capital fund activities.
We believe that additional measures are needed to help determine operational effectiveness, particularly because these activities report about $75 billion in annual revenues associated with their operations. For example, a good measure to determine the effectiveness of the supply management activity group could be the percentage of aircraft that are not mission capable due to supply problems. GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. Regarding strategic human capital management, we found that DOD’s performance report did not explain DOD’s progress in resolving human capital challenges. However, the report included a description of the importance of human resources, such as the importance of total force integration and quality of life and personnel. With respect to information security, we found that DOD’s performance report did not explain its progress in resolving its information security challenges. However, it states that specific goals, objectives, and strategies for improving DOD’s management of information can be found in the Information Management Strategic Plan (http://www.c3i.osd.mil) discussed in appendix J of DOD’s 2001 Annual Report to the President and the Congress. In addition, GAO has identified eight major management challenges facing DOD. Some of these challenges are crosscutting issues. For example, improving DOD’s financial management operations so that it can produce useful, reliable, and timely cost information is essential if DOD is to effectively measure its progress toward achieving outcomes and goals across virtually the entire spectrum of DOD’s business operations.
Although DOD’s performance report discussed the agency’s progress in resolving many of its challenges, it did not discuss the agency’s progress in resolving the following challenge: “Developing strategic plans that lead to desired mission outcomes.” As we reported in March 2001, sound strategic planning is needed to guide improvements to the Department’s operations. Without it, decisionmakers and stakeholders may not have the information they need to ensure that DOD has strategies that are well thought-out to resolve ongoing problems, achieve its goals and objectives, and become more results oriented. While DOD has improved its strategic planning process, its current strategic plan is not tied to desired mission outcomes. As noted in several of the other key challenges, sound plans linked to DOD’s overall strategic goals are critical to achieving needed reforms. Inefficiencies in the planning process have led to difficulties in assessing performance in areas such as combat readiness, support infrastructure reduction, force structure needs, and matching resources to program spending plans. Appendix I provides detailed information on how well DOD addressed these challenges and high-risk areas as identified by both GAO and the DOD Inspector General. Shortfalls in DOD’s current strategies and measures for several outcomes have led to difficulties in assessing performance in areas such as combat readiness, support infrastructure reduction, force structure needs, and the matching of resources to program spending plans. DOD’s fiscal year 2002 performance plan, which has yet to be issued, provides DOD with the opportunity to address these shortfalls. DOD is also in the process of updating its strategic plan through its Quadrennial Defense Review, which sets forth its mission, vision, and strategic goals.
The review provides DOD with another opportunity to include qualitative and quantitative information that could provide a clearer picture of DOD’s performance. On the basis of last year’s analysis of DOD’s fiscal year 1999 performance report and fiscal year 2001 performance plan, we recommended that the Department include more qualitative and quantitative goals and measures in its annual performance plan and report to gauge progress toward achieving mission outcomes. DOD has not yet fully implemented this recommendation. We continue to believe that the Secretary of Defense should adopt this recommendation as the Department updates its strategic plan through the Quadrennial Defense Review and prepares its next annual performance plan. By doing so, DOD can ensure that it has strategies that are tied to desired mission outcomes and are well thought-out for resolving ongoing problems, achieving its goals and objectives, and becoming more cost and results oriented. As agreed, our evaluation was generally based on the requirements of GPRA; the Reports Consolidation Act of 2000; guidance to agencies from OMB for developing performance plans and reports (OMB Circular A-11, Part 2); previous reports and evaluations by us and others; our knowledge of DOD’s operations and programs; our identification of best practices concerning performance planning and reporting; and our observations on DOD’s other GPRA-related efforts. We also discussed our review with agency officials in DOD’s Office of Program Analysis and Evaluation and with the DOD Office of Inspector General. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member, Senate Governmental Affairs Committee, as important mission areas for the agency and do not reflect the outcomes for all of DOD’s programs or activities.
Both GAO, in our January 2001 performance and accountability series and high-risk update, and DOD’s Inspector General, in December 2000, identified the major management challenges confronting DOD, including the governmentwide high-risk areas of strategic human capital management and information security. We did not independently verify the information contained in the performance report, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of DOD’s performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. In a letter dated June 14, 2001, the DOD Director for Program Analysis and Evaluation provided written comments on a draft of this report. DOD indicated that its annual GPRA report provides the Congress and the public an executive-level summary of key performance results over the past budget year. DOD stated that, together, the metrics presented in its report demonstrate how DOD’s existing management practices enable it to recruit, train, equip, and field the most effective military force in the world. DOD said that we overlooked this fact in our draft report. DOD also pointed out that future GPRA submissions will refine its performance metrics to reflect priorities of the new defense strategy, but that it sees little value in adding the large number of new measures that auditors and others have proposed over the past 18 months. DOD reiterated that GPRA is not the sole venue for reporting performance results—it submits more than 900 reports annually to the Congress alone, many of which address issues highlighted in our draft report. DOD stressed that a key goal of the GPRA legislation is to increase public confidence in government and, although it does not want to mask deficiencies in how DOD manages performance, it does not want to emphasize shortfalls at the expense of true achievements.
DOD stated that it would be helpful if we could provide a clearer definition of what standards of sufficiency will be applied in evaluating future submissions. Notwithstanding DOD’s statement that the metrics DOD presented in its performance report can enable it to have an effective military force, we continue to believe, for the reasons cited in our report, that DOD’s progress in achieving the selected outcomes is still unclear. As we have recently recognized in our report on major performance and accountability challenges facing DOD, our nation begins the new millennium as the world’s sole superpower with military forces second to none, as evidenced by experiences in the Persian Gulf, Bosnia, and Kosovo. We also stated that the same level of excellence is not evident in many of the business processes that are critical to achieving DOD’s mission in a reasonably economical, efficient, and effective manner. A major part of DOD’s performance report focuses on outcomes related to these processes, the results of which are critical to DOD’s ability to maintain its military capability. As we reported in last year’s assessment, we agree that the answer is not to simply measure more things in more detail. However, in many instances, for the outcomes identified by the Committee, DOD’s report does not discuss strategies for achieving unmet goals and does not fully assess its performance. We believe that the best test of reasonableness or sufficiency to evaluate DOD’s future progress resides in the requirements of GPRA itself, which requires, among other things, agencies to explain and describe, in cases where a performance goal has not been met, why the goal was not met. The requirement to submit a fiscal year 2002 performance plan, which DOD has yet to issue, also provides DOD with the opportunity to address these shortfalls. In that regard, we have issued guidance that outlines approaches agencies should use in developing performance plans. 
These actions would place DOD in a position of continuously striving for improvement. Appendix II contains DOD’s comments. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the Secretary of Defense; and the Director, Office of Management and Budget. Copies will also be made available to others on request. If you or your staff have any questions, please call me at (202) 512-4300. Key contributors to this report were Charles I. Patton, Jr.; Kenneth R. Knouse, Jr.; Elizabeth G. Mead; Cary B. Russell; and Brian G. Hackett. The following table identifies the major management challenges confronting the Department of Defense (DOD), which include the governmentwide high-risk areas of strategic human capital management and information security. The first column of the table lists the management challenges that we and/or DOD’s Inspector General (IG) have identified. The second column discusses what progress, as discussed in its fiscal year 2000 performance report, DOD made in resolving its challenges along with our assessment. We found that DOD’s performance report discussed the agency’s progress in resolving many of its challenges but that it did not discuss the agency’s progress in resolving the following challenges: Strategic Planning, Other Security Concerns, and Health Care.
The Bioterrorism cooperative agreement program spans five budget periods and is scheduled to end August 30, 2005. Under this program, CDC has made funds available through cooperative agreements with all 50 states, the District of Columbia, and three of the country’s largest municipalities—New York City, Chicago, and Los Angeles County. CDC has distributed funds to these jurisdictions using a formula under which each jurisdiction receives a base amount of $5 million, plus additional funds based on the jurisdiction’s population. The program’s budget periods typically run from August 31 of one year to August 30 of the next, although the third budget period was extended to run from August 31, 2001, to August 30, 2003. (See table 1 for more information on the budget periods discussed in this report.) Under its cooperative agreement, a jurisdiction is required to obligate funds before the end of the specified budget period and expend funds before the end of the 12 months following that period. However, CDC may give a jurisdiction permission to obligate or expend funds beyond those time frames. CDC’s Procurement and Grants Office (PGO) is responsible for awarding and administering CDC’s grants and cooperative agreements. In this capacity, PGO is responsible for notifying the jurisdictions, through an NCA, of the funds awarded for each budget period. In addition to notifying the jurisdictions, PGO also provides this information to CDC’s Financial Management Office (FMO), which processes CDC’s grant awards and cooperative agreements. FMO works with DPM to place the cooperative agreement funds into the appropriate accounts and to ensure that jurisdictions have access to their Bioterrorism funds through PMS’s accounts. 
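The base-plus-population distribution formula described above can be sketched in a few lines of Python. This is a minimal illustration only: the report states each jurisdiction receives a $5 million base plus population-based additional funds, but the exact population adjustment is not specified, so the proportional split of the remainder, the jurisdiction names, and the dollar figures below are all assumptions.

```python
def allocate(total_award, populations, base=5_000_000):
    """Split a cooperative agreement award: each jurisdiction receives a
    $5 million base, and the remainder is divided in proportion to
    population (the proportional split is an illustrative assumption)."""
    remainder = total_award - base * len(populations)
    total_pop = sum(populations.values())
    return {name: base + remainder * pop / total_pop
            for name, pop in populations.items()}

# Hypothetical example: two jurisdictions sharing a $13 million award
shares = allocate(13_000_000, {"State A": 2_000_000, "State B": 1_000_000})
```

Under these assumed figures, the $3 million remaining after the two base awards is split two-to-one by population, so State A receives $7 million and State B receives $6 million.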
CDC’s Office of Terrorism Preparedness and Emergency Response, which coordinates emergency response and preparedness across CDC, is responsible for the programmatic components of the program and also works with PGO and FMO to provide direct assistance to jurisdictions on request. To monitor the use of the Bioterrorism funds, CDC requires that jurisdictions submit regular progress reports that track their progress toward completing a set of activities. Jurisdictions are also required to submit annual FSRs that provide information on the expenditure and obligation of Bioterrorism funds. In addition, DPM monitors the funds drawn down by jurisdictions from PMS, and jurisdictions must submit quarterly federal cash transaction reports to DPM. Jurisdictions had expended a substantial amount of fiscal year 2002 and 2003 program funds as of August 30, 2004. They had expended over four-fifths of the fiscal year 2002 funds awarded through the HHS P accounts for the program’s third budget period and over half of the fourth budget period funds awarded through the HHS P accounts. As allowed by CDC’s cooperative agreements, jurisdictions continued to expend fiscal year 2002 funds after the end of the third budget period in August 2003 and have continued to expend funds awarded during the fourth budget period since the end of that period. As of August 30, 2004, jurisdictions had expended 85 percent of the fiscal year 2002 funds awarded through the PMS P accounts for the Bioterrorism program’s third budget period. There was considerable variation among jurisdictions’ expenditure rates, with individual jurisdictions’ rates ranging from a high of 100 percent to a low of 27 percent. Ten jurisdictions had expended all fiscal year 2002 funds in the P accounts, and 22 had expended over 90 percent. Three jurisdictions, Delaware, the District of Columbia, and Massachusetts, had expended less than half of their funds. (See fig.
1 for information on the third budget period’s fiscal year 2002 funds expended as of August 30, 2004. App. II provides additional data.) Jurisdictions continued, as authorized, to expend the third budget period’s fiscal year 2002 P account funds over the course of the following budget period to pay for obligations incurred during the third budget period, such as contracts that extended beyond August 2003. Although jurisdictions had expended only 56 percent of the third budget period’s fiscal year 2002 P account funds by the end of that budget period, they had expended 85 percent of the funds as of August 30, 2004, the end of the fourth budget period. No jurisdiction had expended all its fiscal year 2002 P account funds by the end of the third budget period—individual expenditure rates ranged from 4 percent to 87 percent. (See fig. 1 for information on the third budget period’s fiscal year 2002 funds expended from the P accounts as of August 30, 2003. App. II provides additional data.) As of August 30, 2004—the end of the fourth budget period—jurisdictions had expended 53 percent of the fiscal year 2003 bioterrorism funds awarded through the P accounts for that period. As with fiscal year 2002 funds awarded during the third budget period, there is variation in individual jurisdictions’ rates of expenditure, which ranged from 93 percent to zero. While expenditure rates varied, 15 jurisdictions had expended at least two-thirds of the 2003 funds awarded through the P accounts for the fourth budget period. (See fig. 2 for information on the fourth budget period’s funds expended from the P accounts. App. III provides additional data.) While slightly over half of the fourth budget period’s funds in the P accounts had been expended as of August 30, 2004, jurisdictions have continued to expend these funds during the current budget period—August 31, 2004, to August 30, 2005. 
The pattern of expenditure for budget period four funds was similar to that of budget period three; in both cases, jurisdictions expended just over half their funds during the budget period and continued to expend the funds during the next budget period. At the end of the Bioterrorism program’s third budget period, jurisdictions reported that less than one-sixth of fiscal year 2001 and 2002 funds awarded for that period remained unobligated. Similarly, as of August 1, 2004, jurisdictions estimated that approximately one-fifth of fiscal year 2003 funds awarded for the program’s fourth budget period would remain unobligated as of August 30, 2004, the end of that period. According to the jurisdictions’ annual FSRs and NCAs, as of the end of the third budget period (August 31, 2001, to August 30, 2003), 14 percent of all bioterrorism funds awarded for that period remained unobligated. As with expenditure rates, individual jurisdictions’ rates of unobligated funds varied, ranging from none to over three-fifths of the awarded funds. Seven jurisdictions reported that all their funds from that period had been obligated, and 44 jurisdictions reported that less than one-quarter of their third budget period funds remained unobligated. Two jurisdictions, the District of Columbia and Massachusetts, reported the highest levels of unobligated third budget period funds—62 percent and 51 percent, respectively. (See fig. 3 for more information on jurisdiction-reported unobligated Bioterrorism funds. App. IV provides additional data.) According to jurisdiction estimates as of August 1, 2004, approximately 20 percent of all Bioterrorism funds awarded for the program’s fourth budget period (August 31, 2003, to August 30, 2004) would remain unobligated as of August 30, 2004. Jurisdictions’ individual estimated unobligated balances varied greatly, ranging from none to almost three-quarters of the awarded funds.
Five jurisdictions estimated that all their fourth budget period’s funds would be obligated by the end of the period, and 31 jurisdictions estimated that less than one-quarter of their fourth budget period’s funds would remain unobligated. Three jurisdictions, Chicago, New Mexico, and Delaware, estimated that over half of the Bioterrorism funds awarded to them for the fourth budget period would remain unobligated as of August 30, 2004. (See fig. 3 for more information on jurisdiction-reported unobligated Bioterrorism funds. App. IV provides additional data.) Many jurisdictions faced challenges, partly related to state and local administrative processes, that slowed the pace of their obligation and expenditure of bioterrorism funds. Reported challenges included workforce issues, contracting and procurement processes designed to ensure the prudent use of public funds, and problems stemming from lengthy information technology upgrades. Some jurisdictions have developed ways to streamline these administrative processes, facilitating the obligation and expenditure of funds. State and municipal officials told us that the obligation and expenditure of funds were delayed during the Bioterrorism program’s third and fourth budget periods for a variety of reasons, including issues related to the workforce, contracting and procurement, and information technology upgrades. Officials in 16 of 19 jurisdictions we contacted cited workforce issues, related to recruitment and retention as well as complex staffing processes, as challenges to the timely obligation and expenditure of bioterrorism funds. According to the Association of State and Territorial Health Officials, 75 to 80 percent of bioterrorism funds have been used for personnel expenditures. Officials in seven of the jurisdictions we contacted reported difficulties in recruiting staff, and some officials reported staff retention problems.
As we previously reported, such recruitment barriers included noncompetitive salaries and a general shortage of people with the necessary skills. Officials told us they had difficulty finding qualified workers, particularly epidemiologists and laboratory technicians, and two officials indicated that problems related to recruiting have delayed the expenditure of funds. In one of those jurisdictions, the public health laboratory had so many vacancies that there were not enough staff to fully implement a new bioterrorism and emergency preparedness initiative. Officials indicated that, within their jurisdictions, skilled workers could find better-paying positions with other organizations. In one case, a municipality had to persuade a job candidate to take a significant pay cut to work on the program. In another instance, the salaries offered by a federal agency within a state were about 25 percent higher than those offered by the state. The same state reported that competition from the private sector and other agencies has resulted not only in a shortage of qualified applicants for positions, but also in the loss of highly qualified personnel who had gained extensive experience and expertise working for the state. Hiring freezes and complex staffing processes were also cited as delaying the obligation and expenditure of funds. According to jurisdiction officials with whom we spoke, as well as officials from the National Association of County and City Health Officials, program officials in some jurisdictions were not permitted to hire staff during an across-the-board freeze, regardless of the federal funding available. Moreover, jurisdiction officials reported that in some cases the lifting of a hiring freeze inundated the hiring process, lengthening it in one state to as long as 10 months. Jurisdiction officials stated that other staffing constraints also hindered their hiring process.
One state mandated mass layoffs in December 2002, which resulted in the loss of approximately 60 health agency employees, including the entire unit that was handling bioterrorism contracts. This was followed in early 2003 by an early retirement plan that resulted in the loss of support staff for the cooperative agreement. The layoffs and early retirement program delayed bioterrorism contract payments. Moreover, employees who had been laid off had contractual rights to placement in new positions, which resulted in some employees with little or no background in public health being placed into bioterrorism program positions. Some fiscal support positions remained unfilled for several months as a result of the layoffs and early retirement program, which in turn affected the state’s ability to process bioterrorism program payments. Because expenditures related to contracting for services and procuring equipment can occur after the end of a given budget period, program officials stressed the importance of being able to expend obligated funds up to 12 months beyond the budget period, as CDC allows for in this program. To illustrate the importance of such an allowance, one official gave the example of a contract for $100,000 that began in June 2003, during the third budget period. Under the terms of the contract, the contractor would bill the program quarterly. The state in question would draw down funds for the contract from PMS on a quarterly basis. If the program received the first bill of $25,000 in September 2003, the first drawdown related to this contract would occur in December 2003, and subsequent drawdowns would occur in March 2004, June 2004, and September 2004, all within the next budget period. Jurisdiction officials provided a number of examples of the complexity of their jurisdictions’ contracting processes and the resulting effect on obligations and expenditures.
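The official’s drawdown example can be worked through with simple date arithmetic. The sketch below is illustrative only: the three-month lag between a quarterly bill and the corresponding drawdown, and the even $25,000 installments, are taken from the example in the text, while the helper functions themselves are hypothetical.

```python
from datetime import date

def add_months(d, n):
    """Advance a date by n months (day fixed at the 1st for illustration)."""
    m = d.month - 1 + n
    return date(d.year + m // 12, m % 12 + 1, 1)

def drawdowns(first_bill, amount, installments=4, lag_months=3, interval=3):
    """Hypothetical schedule: funds are drawn about 3 months after each
    quarterly bill, in equal installments."""
    per = amount // installments
    return [(add_months(first_bill, lag_months + i * interval), per)
            for i in range(installments)]

# $100,000 contract billed quarterly, first bill in September 2003:
# drawdowns fall in Dec 2003 and Mar/Jun/Sep 2004, $25,000 each
sched = drawdowns(date(2003, 9, 1), 100_000)
```

As the schedule shows, the final $25,000 drawdown lands in September 2004, a full budget period after the contract was obligated, which is why officials stressed the 12-month expenditure allowance.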
An official in one state reported that the state had to negotiate and develop contracts with over 100 local health agencies after it received an influx of funding during the Bioterrorism program’s third budget period. After the local health contracts were developed, they needed approval by the municipalities or health district boards, a process that in some cases took several months. Another state indicated that its contracting process takes a minimum of 2 months. Yet another stated that the process could take from 3 to 6 months, depending on the complexities involved. In addition, officials reported that the request for proposals (RFP) process and bidding requirements delayed their ability to create contracts and orders for services and equipment. In one state, the RFP process takes 4 to 7 months, while in another the process can take as long as 9 months. The need to develop large infrastructure projects under the bioterrorism preparedness cooperative agreement has also affected obligations and expenditures in a number of jurisdictions. These projects, such as setting up a syndromic surveillance system, require the assistance and expertise of a limited number of national contractors. A state official informed us that since many of the jurisdictions began these projects at the same time—after the influx of fiscal year 2002 funds during the third budget period—some jurisdictions have had to wait for these contractors to become available. Therefore, some jurisdictions have had to wait to receive services and equipment, in effect delaying both obligations and expenditures. In addition, jurisdictions indicated that effective planning and the development of RFPs for these large projects required extended periods of time. One official told us the state health department went through a careful planning process to ensure the proper use of funds, which delayed the obligation and expenditure of funds.
Several jurisdictions reported that their efforts to upgrade their information technology—a focus area of the Bioterrorism program—delayed program expenditures. Officials in four jurisdictions noted that it took time to plan and implement improvements in information technology systems and equipment. For example, in one state, the installation of each piece of equipment, including new computer systems and videoconferencing equipment, required a site survey by the state’s Department of General Services to assess the feasibility of the proposed location to house the equipment. These site surveys could take anywhere from 2 to 12 months to complete. In another state, an official reported that funds were designated to support the state’s Internet connectivity to provide local public health agencies and their public health partners with continuous, high-speed Internet access. Because significant areas of the state did not have access to high-speed Internet services, the state conducted engineering studies, which delayed distribution of funds to local public health agencies. While officials described challenges to quickly obligating and expending bioterrorism funds, some also described techniques they had developed to address workforce and procurement issues. Officials in three jurisdictions indicated that being exempted from hiring freezes expedited the obligation of funds. In one case the jurisdiction exempted bioterrorism positions from hiring freezes and also gave these positions the highest priority for hiring. According to another state official, many of the program staff were hired as contractual or “at will” employees, to bypass the state’s lengthy hiring process. Another state, which was reluctant to hire permanent full-time program staff because of concern about the sustainability of federal funding, employed temporary staff instead. Some officials also described techniques they had developed to address challenges related to procurement issues.
Prior to receiving fiscal year 2002 funds during the program’s third budget period, one jurisdiction’s program elected to use a nonprofit fiscal and administrative intermediary to reduce the delays caused by the municipality’s regulations. A program official told us that using the intermediary also allowed the program to expedite the routine processes of recruitment, contracting, purchasing, and ensuring fiscal accountability. According to the official, the intermediary has a long history of collaborating with that health agency to quickly and successfully implement new initiatives and is experienced in grant management. The official stated that the intermediary has reduced the time that it takes to implement program procedures because it does not have to follow the municipality’s normal requirements. For instance, unlike the health agency, the intermediary is not subject to certain municipal contracting and procurement requirements. Consequently, it is able to use statewide general services contracts that can have as little as a 2-day turnaround. In addition, the intermediary has reduced the municipality’s RFP process from the usual 6 months to 2 months. One state official indicated that the state health agency had made a concerted effort to streamline its procurement process. Prior to this effort, the procurement process had taken as long as 18 months, including time for the development and distribution of an RFP and for appeals. The official said that one of the major improvements involved compiling a list of preapproved contractors, which enables bioterrorism program officials to purchase directly from those contractors without going through the time-consuming RFP process. Another official told us that the health agency staff can place orders and contracts more rapidly than usual if they designate them as “sole source” and “single source” procurement, meaning that the needed equipment or service is available from only one vendor. 
The official indicated that the state’s bioterrorism program uses this designation whenever it can demonstrate that only one vendor can provide the equipment or service. Additionally, the state has “master price” agreements with some vendors for certain goods and services that are commonly needed by the various agencies in the state. The official said that staff can quickly place orders for goods and services that fall under the master price agreement and receive these items in 2 to 4 days. After the terrorist events of 2001, HHS’s funding to help jurisdictions prepare for and defend against a possible bioterrorism attack greatly increased. In 2004, HHS expressed concern that jurisdictions had not moved quickly enough to use these funds. However, jurisdictions had expended and obligated a substantial amount of program funds as of August 30, 2004. In assessing the pace at which jurisdictions are spending these funds, it is useful to consider that prudent use of public funds—particularly for new programs—requires careful and often time-consuming planning. Once plans have been developed, obligating and expending the funds to implement them takes additional time. It is also important to recognize that because some expenditures, such as those for contracts, take place over a period of time rather than as one lump sum early in the budget period, it may take longer than the program’s budget period to expend these funds. Furthermore, jurisdictions face additional challenges to quickly obligating and expending funds, partly related to various administrative processes, although some jurisdictions have found ways to streamline certain processes. We provided a draft of this report to HHS for comment, and the agency informed us it had no comments on the draft report. However, HHS provided technical comments, which we incorporated into the report as appropriate.
As we arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretary of Health and Human Services, the Director of the Centers for Disease Control and Prevention, appropriate congressional committees, and other interested parties. We will also make copies available to others who are interested upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call Marjorie Kanof at (202) 512-7114. Major contributors to this report are listed in appendix V. For the Department of Health and Human Services (HHS) Centers for Disease Control and Prevention’s (CDC) Public Health Preparedness and Response for Bioterrorism program cooperative agreement, we provide information on the extent to which jurisdictions had expended fiscal year 2002 funds awarded for the third budget period as of August 30, 2003, and August 30, 2004, and had expended fiscal year 2003 funds awarded for the fourth budget period as of August 30, 2004. We also provide information on the extent to which fiscal year 2001, 2002, and 2003 Bioterrorism funds awarded for the program’s third and fourth budget periods were obligated, and challenges jurisdictions have faced when attempting to expend or obligate the Bioterrorism funds. To provide information on the expenditure and obligation of Bioterrorism program funds awarded to jurisdictions, we analyzed documents and interviewed officials from HHS’s Office of the Secretary, CDC, Division of Payment Management (DPM), and Office of the Inspector General (OIG). 
In addition, we reviewed documents and interviewed officials from the Association of State and Territorial Health Officials and the National Association of County and City Health Officials, two national associations representing state and local health officials. We also interviewed jurisdiction audit and Bioterrorism program officials to obtain information on program obligations and to determine challenges faced by jurisdictions in expending and obligating funds, and reviewed documents from the Congressional Research Service, the Association of Public Health Laboratories, and other organizations. To determine expenditures as of August 30, 2003, and August 30, 2004, we analyzed expenditure data from DPM’s Payment Management System (PMS). We obtained and reviewed data from both the public assistance (P) accounts and the general (G) accounts. Funds accounted for in the P accounts are specific to certain grants or agreements, while G accounts merge funds from grants and agreements made to one grantee into one overall account. While over 90 percent of funds awarded to jurisdictions through the Bioterrorism program are tracked in P accounts, some funds are tracked in the G accounts, including all unexpended funds from budget periods prior to the fiscal year 2002 funds awarded during the third budget period and funds related to the Strategic National Stockpile before and after fiscal year 2002. Because expenditures from the G accounts related to a specific grant or agreement cannot be linked to funds from specific budget periods, we are not able to describe the rates of expenditure of Bioterrorism funds tracked in the G accounts. Moreover, because all funds awarded to jurisdictions prior to the fiscal year 2002 funds are tracked in the G accounts, we are not able to account for all expenditures during the program’s third budget period—August 31, 2001, to August 30, 2003. 
Rather, we are able to track expenditures for only the second portion of that budget period, starting with the fiscal year 2002 emergency supplemental appropriation. Expenditure data provided in this report were obtained from PMS’s P accounts and include only funds awarded as financial assistance beginning with the fiscal year 2002 emergency supplemental appropriation. (See table 2 for information on funding included in the data sources reviewed.) The OIG annually contracts for an audit that provides reasonable assurance about the design of controls included in DPM’s PMS, including controls for recording award authorizations, processing awardee requests for funds, and reporting payment and recipient disbursement information to the awarding agency. We did not conduct a review to determine the appropriateness of any jurisdiction expenditure. To determine obligation data, we reviewed the financial status reports (FSR) jurisdictions were required to submit to CDC at the end of the third budget period (August 31, 2001, to August 30, 2003) and the estimated FSRs for the fourth budget period (August 31, 2003, to August 30, 2004) that jurisdictions were requested to submit by August 1, 2004. Unlike PMS’s P account data, the FSRs include information on all Bioterrorism funds awarded as financial assistance, including both fiscal year 2001 and 2002 funds awarded during the third budget period, funds carried over from prior periods, and funds related to the Strategic National Stockpile. Along with FSRs for the entire third budget period, CDC asked jurisdictions to submit FSRs reflecting only the fiscal year 2002 emergency supplemental appropriation funds. However, because few jurisdictions submitted such emergency supplemental FSRs, we were unable to use these FSRs. 
Because of this, we are reporting on obligation data obtained from the FSRs for the entire third budget period, encompassing both fiscal year 2001 and 2002 funds; this is a different period from that used for the expenditure data provided in this report, which describe only expenditures of the third budget period’s fiscal year 2002 funds. Final FSRs for the fourth budget period were not available in sufficient time to be used in our work. For this period, we used the estimated FSRs jurisdictions were asked to submit prior to the end of the budget period. While the unobligated amounts reported on the final FSR may vary from the estimates, CDC determined that these estimates were sufficiently accurate to use for planning purposes. Seven jurisdictions did not submit an estimated FSR, but did provide information in their application for budget period five funds on estimated unobligated balances as of August 30, 2004. Eight jurisdictions did not provide any information on estimated unobligated balances and were excluded from our analysis. In addition to reviewing jurisdictions’ FSRs, we also reviewed the Notices of Cooperative Agreement (NCA), which are provided by CDC to jurisdictions and provide information on total Bioterrorism program funds awarded to them for a budget period. Unlike the PMS P account data, the NCAs include information on all funds awarded for the program’s entire third budget period, including funds from both fiscal years 2001 and 2002, and information on funds carried over from previous periods, funds related to the Strategic National Stockpile, and funds awarded as direct assistance. To determine obligation rates, we compared the information on total funds awarded obtained from the NCAs and FSRs to obligation data that jurisdictions reported in the third budget period FSRs and estimated in the fourth budget period FSRs. 
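The comparison described above reduces to a simple percentage calculation. The following sketch shows the arithmetic; the dollar figures are purely illustrative, not actual jurisdiction data:

```python
def unobligated_rate(awarded: float, obligated: float) -> float:
    """Percentage of awarded funds still unobligated at the period's end."""
    if awarded <= 0:
        raise ValueError("awarded amount must be positive")
    return 100.0 * (awarded - obligated) / awarded

# Illustrative figures only: a jurisdiction awarded $10.0 million
# that reported $8.6 million obligated at the end of the budget period.
rate = unobligated_rate(10_000_000, 8_600_000)
print(f"{rate:.0f}% unobligated")  # 14% unobligated
```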
We interviewed CDC staff to resolve any inconsistencies between the information provided on the FSRs and information provided in the NCAs and modified data as appropriate. In addition, because the data related to obligation are self-reported by jurisdictions to CDC, we interviewed officials at CDC and HHS’s OIG to obtain information on any work done to determine the reliability of these data. We also contacted the jurisdiction audit agencies in all the jurisdictions by e-mail or telephone to determine whether they had performed any work to determine the reliability of the obligation data. Data for the third budget period from 18 jurisdictions and for the fourth budget period from 1 jurisdiction can be considered reliable based on the work of OIG and jurisdiction audit agencies. However, in many cases, insufficient work had been done to assess the reliability of the obligation data reported by jurisdictions. In these cases, the information presented is as reported by the jurisdictions, and we cannot attest to its reliability. In addition, we did not conduct a review to determine the appropriateness of any obligations reported by jurisdictions to CDC. To describe factors that jurisdictions said contributed to delays in obligating and expending funds and actions some jurisdictions took to address those factors, we contacted selected jurisdictions via e-mail in two phases. Initially, we contacted 10 jurisdictions to gather information on why they may have had unobligated Bioterrorism funds. We analyzed the obligation and expenditure data to identify jurisdictions with high and low rates of unobligated and unexpended Bioterrorism funds, for both the third and fourth budget periods. Jurisdictions were categorized as those with (1) reported high unobligated balances, (2) reported low unobligated balances, or (3) reported low unobligated balances and high levels of unexpended funds.
We then selected jurisdictions from each of the groups, taking into account diversity in geographic location, population size, urban and rural status, and their expenditure and obligation patterns. We e-mailed each jurisdiction, and we followed up by telephone to obtain any necessary clarification on responses. For phase 2, we e-mailed 3 jurisdictions from the phase 1 group and 9 additional jurisdictions. These 12 jurisdictions had expended from 50 to 87 percent of their third budget period funds by August 30, 2003, the end of that period, but had expended 100 percent of those funds by August 30, 2004. We followed up by telephone and e-mail to obtain any necessary clarification on responses.
[Notes to appendix tables: Data on funds awarded to California, Illinois, and New York do not include funds awarded to Los Angeles County, Chicago, or New York City. Where data on awarded funds were not consistent between jurisdiction FSRs and the Notices of Cooperative Agreement, we determined the correct award amounts based on information provided by CDC officials; in no case did the difference account for more than 4 percent of the total awarded funds. No fiscal year 2003 data were available for some jurisdictions; others did not submit estimated FSRs for the fourth budget period but provided estimated unobligated balances as of August 30, 2004, in their applications. For certain jurisdictions, data for the third budget period, or for both the third and fourth budget periods, have been determined to be reliable. Massachusetts did not account for $6,682,740 carried over from prior budget periods in its fourth budget period estimated FSR, so data comparable to other jurisdictions were not available.]
In addition to the person named above, key contributors to this report were Helene Toiv, Emily Gamble Gardiner, Lucia P. Fort, Roseanne Price, and Jessica Cobert.
HHS Bioterrorism Preparedness Programs: States Reported Progress but Fell Short of Program Goals for 2002. GAO-04-350R. Washington, D.C.: February 10, 2004.
Bioterrorism: Public Health Response to Anthrax Incidents of 2001. GAO-04-152. Washington, D.C.: October 15, 2003.
Emerging Infectious Diseases: Review of State and Federal Disease Surveillance Efforts. GAO-04-877. Washington, D.C.: September 30, 2004.
Emergency Preparedness: Federal Funds for First Responders. GAO-04-788T. Washington, D.C.: May 13, 2004.
Bioterrorism: Preparedness Varied across State and Local Jurisdictions. GAO-03-373. Washington, D.C.: April 7, 2003.
Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001.
Bioterrorism: Review of Public Health Preparedness Programs. GAO-02-149T. Washington, D.C.: October 10, 2001.
Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001.
Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001.
Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001.
Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
|
In 1999, the Department of Health and Human Services' (HHS) Centers for Disease Control and Prevention (CDC) began funding jurisdictions' efforts to prepare for bioterrorism attacks through the Public Health Preparedness and Response for Bioterrorism program. After the events of September 11, 2001, and the 2001 anthrax incidents, program funds increased almost twentyfold. Citing jurisdictions' unexpended program funds, HHS reallocated some fiscal year 2004 funds to support other local and national bioterrorism initiatives. Jurisdictions and associations representing jurisdictions disputed HHS's assertion that large amounts of funds remained unused, noting that HHS did not acknowledge obligated funds that had not yet been expended. GAO was asked to provide information on (1) the extent to which jurisdictions had expended the fiscal year 2002 funds awarded for the program's third budget period as of August 30, 2003, and August 30, 2004, and the fiscal year 2003 funds awarded for the program's fourth budget period, as of August 30, 2004; (2) the extent to which fiscal year 2001, 2002, and 2003 funds awarded for the third and fourth budget periods remained unobligated as of August 30, 2004; and (3) factors jurisdictions identified as contributing to delays in expending and obligating funds and actions some jurisdictions took to address them. Jurisdictions have expended a substantial amount of Bioterrorism program funds. As of August 30, 2004, jurisdictions had expended over four-fifths of the fiscal year 2002 funds awarded during the third budget period through the HHS P accounts--the public assistance accounts that track over 90 percent of all funds awarded. As of that date, they had expended slightly over half of P account funds awarded for the program's fourth budget period. Jurisdictions continued, as authorized, to expend funds beyond the budget period for which they were awarded.
For example, some expenditures, such as contract payments, extend beyond one budget period. At the end of the program's third budget period, jurisdictions reported that less than one-sixth of all bioterrorism funds awarded for that period--including both fiscal year 2001 and 2002 funds--remained unobligated, and some jurisdictions reported that none of their funds remained unobligated. As of August 1, 2004, jurisdictions estimated that less than one-quarter of all funds awarded for the fourth budget period would remain unobligated as of August 30, 2004, and five jurisdictions estimated that they would have no funds remaining unobligated. Many jurisdictions reported facing challenges, partly related to administrative processes, that delayed their obligation and expenditure of bioterrorism funds. These included workforce issues such as hiring freezes; contracting and procurement processes to ensure responsible use of public funds; and lengthy information technology upgrades. Some jurisdictions have simplified these processes to expedite the obligation and expenditure of funds. We provided a draft of this report to HHS for comment, and the agency informed us it had no comments on the draft report.
|
Increasing computer interconnectivity—most notably growth in the use of the Internet—has revolutionized the way that our government, our nation, and much of the world communicate and conduct business. While the benefits have been enormous, they are accompanied by significant risks to the nation’s computer systems and to the critical operations and infrastructures that those systems support. Different types of cyber threats from numerous sources may adversely affect computers, software, a network, an agency’s operations, an industry, or the Internet itself. Cyber threats can be unintentional or intentional. Unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. Intentional threats include both targeted and untargeted attacks. A targeted attack occurs when a group or individual specifically attacks a cyber asset. An untargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or malware is released on the Internet with no specific target. There is increasing concern among both government officials and industry experts regarding the potential for a cyber attack on the national critical infrastructure, including the infrastructure’s control systems. The Department of Defense (DOD) and the Federal Bureau of Investigation, among others, have identified multiple sources of threats to our nation’s critical infrastructure, including foreign nation states engaged in information warfare, domestic criminals, hackers, virus writers, and disgruntled employees working within an organization. In addition, there is concern about the growing vulnerabilities to our nation as the design, manufacture, and service of information technology have moved overseas. For example, according to media reports, technology has been shipped to the United States from foreign countries with viruses on the storage devices. Further, U.S. 
authorities are concerned about the prospect of combined physical and cyber attacks, which could have devastating consequences. For example, a cyber attack could disable a security system in order to facilitate a physical attack. Table 2 lists sources of threats that have been identified by the U.S. intelligence community and others. The nation’s critical infrastructure operates in an environment of increasing and dynamic threats, and adversaries are becoming more agile and sophisticated. Terrorists, transnational criminals, and intelligence services use various cyber tools that can deny access to, degrade the integrity of, intercept, or destroy data and jeopardize the security of the nation’s critical infrastructure (see table 3). The growing number of known vulnerabilities increases the potential number of attacks. By exploiting software vulnerabilities, hackers and others who spread malicious code can cause significant damage, ranging from defacing Web sites to taking control of entire systems and thereby being able to read, modify, or delete sensitive information; disrupt operations; launch attacks against other organizations’ systems; or destroy systems. Reports of attacks involving critical infrastructure demonstrate that a serious attack could be devastating, as the following examples illustrate. In June 2003, the U.S. government issued a warning concerning a virus that specifically targeted financial institutions. Experts said the BugBear.b virus was programmed to determine whether a victim had used an e-mail address for any of the roughly 1,300 financial institutions listed in the virus’s code. If a match was found, the software attempted to collect and document user input by logging keystrokes and then provide this information to a hacker, who could use it in attempts to break into the banks’ networks.
In August 2006, two Los Angeles city employees hacked into computers controlling the city’s traffic lights and disrupted signal lights at four intersections, causing substantial backups and delays. The attacks were launched prior to an anticipated labor protest by the employees. In October 2006, a foreign hacker penetrated security at a water filtering plant in Harrisburg, Pennsylvania. The intruder planted malicious software that was capable of affecting the plant’s water treatment operations. In May 2007, Estonia was the reported target of a denial-of-service cyber attack with national consequences. The coordinated attack created mass outages of its government and commercial Web sites. In March 2008, the Department of Defense reported that in 2007 computer networks operated by Defense, other federal agencies, and defense-related think tanks and contractors were targets of cyber warfare intrusion techniques. Although responsibility for the attacks was not definitively established, they appeared to have originated in China. As these examples illustrate, attacks resulting in the incapacitation or destruction of the nation’s critical infrastructures could have a debilitating impact on national and economic security and on public health and safety. To protect the nation’s critical computer-dependent infrastructures against cyber threats and attacks, federal law and policy have identified the need to enhance cybersecurity and establish cyber analytical and warning capabilities, which are sometimes referred to as “indications and warnings.” The laws and policies include (1) the Homeland Security Act of 2002, (2) the National Strategy to Secure Cyberspace, (3) Homeland Security Presidential Directive 7, and (4) the National Response Framework. In addition, in January 2008 the President issued Homeland Security Presidential Directive 23, which, according to US-CERT officials, has provisions that affect cyber analysis and warning efforts of the federal government.
The Homeland Security Act of 2002 established the Department of Homeland Security and gave it lead responsibility for preventing terrorist attacks in the United States, reducing the vulnerability of the United States to terrorist attacks, and minimizing the damage and assisting in recovery from attacks that do occur. The act assigned the department, among other things, a number of critical infrastructure protection responsibilities, including gathering threat information, including cyber-related information, from law enforcement, intelligence sources, and other agencies of the federal, state, and local governments and private sector entities to identify, assess, and understand threats; carrying out assessments of the vulnerabilities of key resources to determine the risks posed by attacks; and integrating information, analyses, and vulnerability assessments in order to identify priorities for protection. In addition, the department is responsible for disseminating, as appropriate, information that it analyzes—both within the department and to other federal, state, and local government agencies and private sector entities—to assist in the deterrence, prevention, preemption of, or response to terrorist acts. The National Strategy to Secure Cyberspace proposes that a public/private architecture be provided for analyzing, warning, and managing incidents of national significance. The strategy states that cyber analysis includes both (1) tactical analytical support during a cyber incident and (2) strategic analyses of threats. Tactical support involves providing current information on specific factors associated with incidents under investigation or specific identified vulnerabilities.
Examples of tactical support include analysis of (1) a computer virus delivery mechanism to issue immediate guidance on ways to prevent or mitigate damage related to an imminent threat or (2) a specific computer intrusion or set of intrusions to determine the perpetrator, motive, and method of attack. Strategic analysis is predictive in that it looks beyond one specific incident to consider a broader set of incidents or implications that may indicate a potential future threat of national importance. For example, strategic analyses may identify long-term vulnerability and threat trends that provide advance warnings of increased risk, such as emerging attack methods. Strategic analyses are intended to provide policymakers with information that they can use to anticipate and prepare for attacks, thereby diminishing the damage from such attacks. Homeland Security Presidential Directive 7 (HSPD 7) directs DHS to, among other things, serve as the focal point for securing cyberspace. This includes analysis, warning, information sharing, vulnerability reduction, mitigation, and recovery efforts for critical infrastructure information systems. It also directs DHS to develop a national indications and warnings architecture for infrastructure protection and capabilities, including cyber, that will facilitate an understanding of baseline infrastructure operations, the identification of indicators and precursors to an attack, and a surge capacity for detecting and analyzing patterns of potential attacks. In May 2005, we reported that DHS has many cybersecurity-related roles and responsibilities, including developing and enhancing national cyber analysis and warning capabilities. However, we found that DHS had not fully addressed all its cybersecurity-related responsibilities and that it faced challenges that impeded its ability to fulfill its responsibilities.
These challenges included achieving organizational stability and authority, hiring employees, establishing information sharing and effective partnerships, and developing strategic analysis and warning. We made recommendations to the Secretary of Homeland Security to engage appropriate stakeholders to prioritize key cybersecurity responsibilities, develop a prioritized list of key activities to address underlying challenges, and identify performance measures and milestones for fulfilling its responsibilities and for addressing its challenges. We did not make new recommendations regarding cyber-related analysis and warning because our previous recommendations had not been fully implemented. Specifically, in 2001, we recommended that responsible executive branch officials and agencies establish a capability for strategic analysis of computer-based threats, including developing a methodology, acquiring expertise, and obtaining infrastructure data. The National Response Framework, issued by DHS in January 2008, provides guidance to coordinate cyber incident response among federal entities and, upon request, state and local governments and private sector entities. Specifically, the Cyber Incident Annex describes the framework for federal cyber incident response in the event of a cyber-related incident of national significance affecting critical national processes. Further, the annex formalizes the National Cyber Response Coordination Group (NCRCG). As established under the preceding National Response Plan, the NCRCG continues to be cochaired by DHS’s National Cyber Security Division (NCSD), the Department of Justice’s Computer Crime and Intellectual Property Section, and DOD. It is to bring together officials from all agencies that have responsibility for cybersecurity and the sector-specific agencies identified in HSPD 7.
The group coordinates intergovernmental and public/private preparedness and response to and recovery from national-level cyber incidents and physical attacks that have significant cyber-related consequences. During and in anticipation of such an incident, the NCRCG’s senior-level membership is responsible for providing subject matter expertise, recommendations, and strategic policy support and ensuring that the full range of federal capabilities is deployed in a coordinated and effective fashion. In January 2008, the President issued HSPD 23—also referred to as National Security Presidential Directive 54 and the President’s “Cyber Initiative”—to improve the federal government’s cybersecurity efforts, including protecting against intrusion attempts and better anticipating future threats. While the directive is a classified document, US-CERT officials stated that it includes steps to enhance cyber analysis-related efforts, such as requirements that federal agencies implement a centralized monitoring tool and that the federal government reduce the number of connections to the Internet, referred to as Trusted Internet Connections. To help protect the nation’s information infrastructure, DHS established US-CERT. It is currently positioned within the NCSD of DHS’s Office of Cybersecurity and Communications. Figure 1 shows the position of these offices within DHS’s organizational structure. US-CERT is to serve as a focal point for the government’s interaction with federal and nonfederal entities on a 24-hour-a-day, 7-day-a-week basis regarding cyber-related analysis, warning, information sharing, major incident response, and national-level recovery efforts. It is charged with aggregating and disseminating cybersecurity information to improve warning of and response to incidents, increasing coordination of response information, reducing vulnerabilities, and enhancing prevention and protection.
In addition, the organization is to collect incident reports from all federal agencies and assist agencies in their incident response efforts. It is also to accept incident reports when voluntarily submitted by other public and private entities and assist them in their response efforts, as requested. US-CERT is composed of five branches, as shown in figure 2: Operations, Situational Awareness, Law Enforcement and Intelligence, Future Operations, and Mission Support. Each branch has specific responsibilities. The Operations branch is to receive and respond to incidents, disseminate reasoned and actionable cybersecurity information, and analyze various types of data to improve overall understanding of current or emerging cyber threats affecting the nation’s critical infrastructure. The Situational Awareness branch is to identify, analyze, and comprehend broad network activity and to support incident handling and analysis of cybersecurity trends for federal agencies so that they may increase their own situational awareness and reduce cyber threats and vulnerabilities. As part of its responsibilities, the branch manages the information garnered from the US-CERT Einstein program, which obtains network flow data from federal agencies, and analyzes the traffic patterns and behavior. This information is then combined with other relevant data to (1) detect potential deviations and identify how Internet activities are likely to affect federal agencies and (2) provide insight into the health of the Internet and into suspicious activities. The Law Enforcement and Intelligence branch is to facilitate information sharing and collaboration among law enforcement agencies, the intelligence community, and US-CERT through the presence of liaisons from those organizations at US-CERT.
The Future Operations branch was established in January 2007 to lead or participate in the development of related policies, protocols, procedures, and plans to support US-CERT’s coordination of national response to cyber incidents. The Mission Support branch is to manage US-CERT’s communications mechanisms, including reports, alerts, notices, and its public and classified Web site content. Our research and observations at federal and nonfederal entities show that cyber analysis and warning typically encompasses four key capabilities: Monitoring—detecting cyber threats, attacks, and vulnerabilities and establishing a baseline of system and communication network assets and normal traffic. Analysis—using the information or intelligence gathered from monitoring to hypothesize about what the threat might be, investigate it with technical and contextual expertise, identify the threat and its impact, and determine possible mitigation steps. Analysis may be initiated in reaction to a detected anomaly. This is a tactical approach intended to triage information during a cyber incident and help make decisions. It may also be predictive, proactively reviewing data collected during monitoring to look at cyber events and the network environment to find trends, patterns, or anomaly correlations that indicate more serious attacks or future threats. Warning—developing and issuing informal and formal notifications that alert recipients in advance of potential or imminent, as well as ongoing, cyber threats or attacks. Warnings are intended to alert entities to the presence of a cyber attack, help delineate the relevance and immediacy of cyber attacks, provide information on how to remediate vulnerabilities and mitigate incidents, or make overall statements about the health and welfare of the Internet. Response—taking actions to contain an incident, manage the protection of network operations, and recover from damages when vulnerabilities are revealed or when cyber incidents occur.
In addition, response includes documenting lessons learned and cyber threat data and integrating them back into the other capabilities to improve overall cyber analysis and warning. Through our consultations with experts, we found that the terminology may vary, but the functions of these capabilities are fairly consistent across cyber analysis and warning entities. Figure 3 depicts the basic process of cyber analysis and warning capabilities. Typically, cyber analysis and warning is executed, or managed, from a central focal point known as an operation center or watch center. Such centers can serve a single organization or a number of organizations. Centers generally include physically and electronically connected multidisciplinary teams with access to a variety of communication and software tools. The teams are made up of specialized analysts, sometimes referred to as watch standers, with a combination of expertise in information security, intelligence, and cyber forensics. Teams may also include subject area experts with specialized expertise in certain critical infrastructure sectors, industries, or technologies. The centers operate tools that integrate data and facilitate analysis by the watch standers. The data come from a multitude of sources, including internal or external monitoring, human or signals intelligence, analytical results, warnings from other entities, and information collected from previous threat responses. Centers decide when and how to issue formal and informal warnings that contribute to further analysis or provide information that aids in decisions about how to respond to an incident. Depending on the size and organizational structure of an organization, the analysis and warning team may work with incident response teams during a cyber incident. The incident response team manages the decisions required for handling an incident using information discovered during monitoring, analysis, and warning.
The team may also coordinate with those responsible for information security for the organization in order to assess risks, remediate vulnerabilities, and prepare for and respond to attacks. Our research and past experience at federal and nonfederal entities identified 15 key attributes associated with the cyber analysis and warning capabilities of monitoring, analysis, warning, and response. These attributes are displayed in table 4, which is followed by a detailed description of each attribute, organized by capability. Monitoring provides the data used to understand one’s operating environment and detect changes that indicate the presence of anomalies that may be cyber attacks. It encompasses five key attributes: 1. Establishing a baseline understanding of network assets and normal network traffic volume and flow In order to detect unusual activity in network traffic or changes in an operating environment, organizations require knowledge of ordinary traffic and environmental conditions. This knowledge forms the baseline against which changes or anomalies can be detected, identified, and mitigated. A baseline is established through activities such as creating an accurate inventory of systems, prioritizing resources and assets, maintaining an understanding of the expected volume and nature of network traffic, and instituting operational procedures such as procedures for handling incidents. Without a baseline, it may be difficult to effectively detect threats or respond to a warning with the appropriate resources. 2. Assessing risks to network assets Assessments should be conducted to determine what risks are posed by combinations of threats and vulnerabilities and should inform the monitoring capability so that it is focused on the most critical assets.
According to CERT® Coordination Center (CERT/CC) officials, having a baseline knowledge of networks and systems and their associated risks in advance helps individual organizations understand what threats they may be susceptible to, what resources are at risk, and what the potential damage of an attack might be. Risks should be prioritized and mitigated until an acceptable level of risk is reached. 3. Obtaining internal information on network operations via technical tools and user reports Another key attribute is monitoring traffic on internal networks using (1) network and information security-related technology tools and (2) reports on network activity. As table 5 shows, various technologies can be used for internal network monitoring to help compile and identify patterns in network data. Each type of technology may detect anomalies that the other types cannot. These technologies can be used to examine data logs from networks on a 24-hour-a-day, 7-day-a-week schedule in an effort to identify (1) precursors and indicators of cyber threats or other anomalies and (2) the occurrence of known attacks. The data logged from these technologies are typically prepared using automated tools to help analysts observe or detect a single anomaly or to discover patterns in data over time. According to several federal and nonfederal entities, hands-on monitoring by trained analysts is essential because it can be difficult for automated tools to identify anomalies and incidents. For example, some automated signature-based tools focus on known threats and may not automatically recognize or alert analysts to new attack patterns or new threat delivery techniques. Other intrusion detection systems can produce large numbers of alerts indicating a problem when one does not exist (false positives); therefore, an analyst must look into anomalies more closely to see if detected intrusions are indications of a threat or simply an equipment malfunction. 4.
Obtaining external information on threats, vulnerabilities, and incidents through various relationships, alerts, and other sources External monitoring includes observing and receiving information that is either publicly or not publicly available for the purpose of maintaining environmental or situational awareness, detecting anomalies, and providing data for analysis, warning, and response. External sources of information include
- formal relationships, such as with and between critical infrastructure sector-related information sharing and analysis centers (ISAC), federal agencies (including military, civilian, law enforcement, and intelligence agencies), international computer emergency response team organizations, and the CERT/CC and vendors under contract for services;
- informal relationships established on a personal basis between analysts located at different operations centers;
- alerts issued by federal, state, and local governments;
- alerts issued by commercial external sources such as network security and vulnerability databases, standards, and frameworks, including the National Vulnerability Database, the Common Vulnerabilities and Exposures list, the Common Vulnerability Scoring System, and the Open Vulnerability Assessment Language;
- media outlets, such as television news and newspapers; and
- Web sites, such as law enforcement entities’ sites, known hacker and criminal sites and chat rooms, and cooperative cyber analysis and warning services.
5. Detecting anomalous activities Continuous monitoring occurs in order to detect significant changes from baseline operations or the occurrence of an attack through an already known threat or vulnerability. It is ultimately the detection of an anomaly—observed internally or received from external information—and the recognition of its relevance that triggers analysis of the incident.
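The monitoring attributes above (establishing a baseline of normal traffic, then detecting deviations from it) can be illustrated with a minimal sketch. All data, thresholds, and function names below are hypothetical and do not represent US-CERT’s actual tooling; the code simply builds a statistical baseline of hourly traffic volume and flags observations that deviate significantly from it.

```python
from statistics import mean, stdev

def build_baseline(hourly_byte_counts):
    """Summarize normal traffic volume from historical observations."""
    return mean(hourly_byte_counts), stdev(hourly_byte_counts)

def detect_anomalies(observations, baseline, threshold=3.0):
    """Flag observations deviating more than `threshold` standard
    deviations from the baseline mean (a simple z-score test)."""
    mu, sigma = baseline
    return [x for x in observations if abs(x - mu) > threshold * sigma]

# Hypothetical traffic volumes (bytes per hour), for illustration only.
history = [1000, 1100, 950, 1050, 1000, 980, 1020, 1070]
baseline = build_baseline(history)
print(detect_anomalies([1010, 990, 5000], baseline))  # the 5000 spike is flagged
```

As the surrounding text notes, automated checks like this one produce false positives, so flagged anomalies would still be triaged by trained analysts before being treated as incidents.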
Analysis uses technical methods in combination with contextual expertise to hypothesize about the threat and associated risks concerning an anomaly and, if necessary, determine mitigation solutions. It encompasses four key attributes: 1. Verifying that an anomaly is an incident Once an anomaly is detected, analysts should verify whether it is a genuine cyber incident by determining that the data are from a trusted source and are accurate. For example, if the anomaly was identified by an internal sensor, analysts start by confirming that the sensor was working correctly and not indicating a false positive. If the anomaly was reported by an external source, analysts try to determine the trustworthiness of that source and begin to identify internal and external corroborating sources. Anomalies that are verified may require in-depth investigation and incident handling or more observation through monitoring. 2. Investigating the incident to identify the type of cyber attack, estimate impacts, and collect evidence Once the anomaly is verified as a potential, impending, or occurring incident, analysts should combine information from multiple sources and/or perform investigative testing using available tools. Analysis often occurs through collaboration between analysts, the exchange of notifications and warnings, and the use of analytical research techniques. Analysts use these techniques to investigate the type of attack, its source (where it originates), its target (whom it affects), and the immediate risk to network assets and mission performance. In addition, these techniques are used to compile evidence for law enforcement.
Techniques for investigation include comparing and correlating additional available monitoring data with the anomaly to determine what other internal and external entities are experiencing; comparing data about the anomaly with standardized databases to determine if the threats are known; and performing investigations, such as cyber forensic examinations, reverse engineering, malware analysis, and isolating anomalies in a test environment such as a honeypot or a sandbox. 3. Identifying possible actions to mitigate the impact of the incident Analysis should culminate in identifying essential details about an anomaly, such as what specific vulnerabilities are exploited or what impacts are expected for a specific incident. Steps should then be taken to identify alternative courses of action to mitigate the risks of the incident according to the severity of the exploit, available resources, and mission priorities. Such steps may include isolating the affected system to prevent further compromise, disabling the affected service that is being exploited, or blocking the connections providing the attacker a route into the network environment. These courses of action may lead to more analysis or be used to support the warning capability. 4. Integrating results into predictive analysis of broader implications or potential future attacks Information resulting from analysis of an individual incident should be used to determine any broader implications and to predict and protect against future threats. This type of effort, or predictive analysis, should look beyond one specific incident to consider a broader set of incidents or implications that may indicate a potential threat of importance. For example, it may include detailed trend analysis of threats that have occurred over a certain period of time, issued in public reports that discuss current trends, predict future incident activity, or identify emerging attack methods.
However, according to many experts, this type of predictive analysis is complex, and it remains difficult to predict future threats with current data. Warnings are intended to alert entities to the presence of anomalies, help delineate the relevance and immediacy of cyber attacks, provide information on how to remediate vulnerabilities and mitigate incidents, or make overall statements about the health and welfare of the Internet. Warning includes three key attributes: 1. Developing notifications that are targeted and actionable Warning messages should be targeted to the appropriate audience and provide details that are accurate, specific, and relevant enough to be acted upon. Developing actionable notifications requires providing the right incident information to the right person or group. If a single group is the only target of a threat, a warning directly to it may be more appropriate than a general public announcement. In addition, warnings are tailored to address technical or nontechnical recipients. Some warnings may be more appropriate for chief information officers, while others may include technical details for network administrators. Although notifications and warnings are delivered throughout incident handling, it is important to reach a balance between releasing actionable information and disclosing warnings too often, which can overwhelm the recipients and stretch limited resources. By addressing the specific audience, warnings avoid overwhelming recipients with extraneous or irrelevant information. Also, recipients of notifications and warnings need to be able to use them to protect or defend their networks against cyber attacks. To do so, the messages must include specific and accurate information about the incident as it relates to the recipient’s monitoring, analysis, or response capabilities. For example, many organizations have designated thresholds that determine how and when warnings are issued.
An actionable warning may also include recommendations about how to respond to an incident. Federal and nonfederal entities also noted that sensitivity of information and privacy are key considerations when trying to develop an actionable warning. Warnings are sanitized or stripped of identifying or proprietary information in order to protect the privacy of individuals or entities involved in the incident. In addition, the federal government and its private sector partners must also adhere to procedures to make sure that they share useful information at the appropriate clearance level. 2. Providing notifications in a timely manner Warnings are intended to give information to recipients as early as possible—preferably in advance of a cyber attack—to give them time to take appropriate action. In addition, the National Institute of Standards and Technology (NIST) provides guidance to federal agencies that describes when incidents are considered reportable and how long they may take to report them to US-CERT. Similarly, several ISACs stated that they have procedures that determine when and how warnings are issued and when and how members should report incidents. 3. Distributing notifications using the most appropriate communications methods Once a warning is developed, it is important to determine the best method for getting that message out without overwhelming the public or incident handlers. Warnings can be provided both informally and formally. Informal warnings between colleagues with established trusted relationships can happen quickly and without significant regard to the organizational structure. Formal warnings, which are typically held to a higher standard of accuracy by recipients than informal warnings, come in many forms, such as e-mail bulletins, vulnerability alerts, Web postings, targeted warnings to a specific entity, or broad security notices to the general public. 
In addition to specific formal warnings, operations centers that perform analysis and warning for multiple organizations, such as the ISACs and commercial vendors, use level-based or color-coded alert systems on their Web sites to quickly notify members and the public of the general threat status of the infrastructure or Internet. Changing from one level or color to another indicates that the threat level is increasing or decreasing. These same organizations send alerts about threats and vulnerabilities to members only or may issue specific warnings to a single organization that has been identified through analysis as being targeted by a cyber threat. Response includes actions to contain an incident, manage the protection of network operations, and recover from damages when vulnerabilities are revealed or when cyber incidents occur. It encompasses three key attributes: 1. Containing and mitigating the incident When an incident is identified, immediate steps should be taken to protect network assets. Decisions are made to control further impacts on the network and then eliminate the threat. These actions may include installing a software patch, blocking a port known to be used by a particular threat, or deploying other appropriate network resources. In the case of a serious threat, the decision may be to turn off the network gateway and temporarily isolate the network from the Internet, depending upon what assets are at risk. One industry expert noted that investigation may occur before any mitigation steps are taken in order to consider the necessity of law enforcement involvement. On the other hand, if little is known about a threat and it does not appear to endanger critical assets, a decision might be made to watch the threat emerge in a contained area to allow for further monitoring and analysis. Decisions to act or not are based on acceptable risks, available resources, and ability to remedy the known threat. 
In addition, decisions must be made in the context of the impact that actions will have on other related efforts, such as a law enforcement investigation. 2. Recovering from damage and remediating vulnerabilities Once an incident is contained and mitigated, restoring damaged areas of the network to their baseline state becomes a priority. To understand the damage, a cyber damage or loss assessment may be conducted to identify, among other things, how the incident was discovered, which network(s) were affected, when the incident occurred, who attacked the network and by what methods, what the attacker’s intention was, what occurred during the attack, and what the impact or severity of the incident is. The recovery efforts may involve restoring or reinstalling computers, network devices, applications, or systems that have been compromised. Taking action to remediate vulnerabilities in a network may also result from analysis and incident management. Entities work to discover and reduce the number of vulnerabilities in their computers, network devices, applications, or systems. 3. Evaluating actions and incorporating lessons learned Entities should ensure that threat data, results, and lessons learned are evaluated and appropriately incorporated to improve the overall cyber analysis and warning capability. For example, teams can be used to simulate network threats by purposefully attacking a network in order to see how the network responds. From these simulations, an evaluation can be made of the response, and recommendations on how to improve can be developed. In addition, cyber simulations allow critical infrastructure organizations to prepare for threat scenarios and to test analysis, warning, and response capabilities. NIST guidance also states that holding lessons learned meetings after major incidents is helpful in improving security measures and the incident handling process itself.
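As a minimal illustration of the response attributes above (containment, recovery, and incorporating lessons learned), the sketch below feeds indicators learned from a handled incident back into a block list that a monitoring capability could consume. All names, fields, and data structures are hypothetical, invented for this example.

```python
# Hypothetical sketch of the response feedback loop: indicators learned
# from a handled incident are fed back into the monitoring block list.
class IncidentResponder:
    def __init__(self):
        self.block_list = set()     # indicators monitoring will block
        self.lessons_learned = []   # records reviewed after each incident

    def contain(self, incident):
        """Block the offending source to limit further impact."""
        self.block_list.add(incident["source_ip"])

    def close_out(self, incident, findings):
        """Document lessons learned and integrate incident indicators
        back into monitoring, as the process above describes."""
        self.lessons_learned.append({"id": incident["id"], "findings": findings})
        self.block_list.update(incident.get("related_indicators", []))

responder = IncidentResponder()
incident = {"id": "INC-1", "source_ip": "203.0.113.9",
            "related_indicators": ["198.51.100.4"]}
responder.contain(incident)
responder.close_out(incident, "patch exposed service; tighten egress rules")
print(sorted(responder.block_list))  # ['198.51.100.4', '203.0.113.9']
```

In a real operation, the close-out step would correspond to the lessons-learned meetings NIST guidance describes, with the resulting indicators and findings shared beyond a single organization.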
US-CERT has established cyber analysis and warning capabilities that include aspects of each of the key attributes; however, it does not fully incorporate all of them. US-CERT has established capabilities that include aspects of key attributes of monitoring. For example, it obtains internal network operation information via technical tools and Einstein; obtains external information on threats, vulnerabilities, and incidents; and detects anomalous activities based on the information it receives. However, its capabilities do not fully incorporate all of the key attributes of monitoring. For example, it has not established a baseline of our nation’s critical infrastructure information systems. Table 6 shows our analysis of its monitoring capability. As part of the President’s Cyber Initiative, DHS has a lead role for several provisions that, if implemented appropriately, could address key monitoring deficiencies, such as not having a comprehensive national baseline and sufficient external information on threats, vulnerabilities, and incidents. According to testimony by the Under Secretary for the National Protection and Programs Directorate, the initiative makes the Einstein program mandatory across all federal agencies. In addition, DHS plans to enhance Einstein’s capabilities to be a real-time intrusion detection and situational awareness system. Further, DHS, along with the Office of Management and Budget (OMB), is responsible for working with federal agencies to reduce the number of Trusted Internet Connections used by the federal government. According to DHS and OMB officials, these initiatives will enhance the ability of US-CERT to monitor federal systems for cyber attacks and other threats.
According to US-CERT officials, the reduction in Trusted Internet Connections, along with the positioning of Einstein in front of those connections to the Internet, will help provide a governmentwide baseline and view of the traffic entering and leaving federal networks as well as access to the content of the traffic. In addition, according to the Assistant Secretary for Cybersecurity and Communications, the recently announced National Cybersecurity Center, which reports directly to the Secretary of Homeland Security, will be responsible for ensuring coordination among the cyber-related efforts across the federal government, including improving the sharing of incident and threat information. However, the efforts to use Einstein, reduce Internet connections, and implement the National Cybersecurity Center are in their early stages and have not yet been fully planned or implemented, so whether these efforts will fully address all five of the monitoring attributes is not known at this time. US-CERT has established capabilities that include key attributes of analysis. For example, it verifies anomalies, performs investigations, and identifies possible courses of action. However, its capabilities do not fully incorporate other attributes because of technical and human resource constraints and the gaps in the monitoring capability. Table 7 shows our analysis of the organization’s analysis capability. As part of the Cyber Initiative, the organization has received additional resources to develop the next version of the Einstein situational awareness tool. According to US-CERT officials, this new version, referred to as Einstein 2.0, will provide real-time intrusion detection monitoring, a content analysis capability, and automated analysis functions that are currently manual. In addition, it has received authorization for an additional 30 government and 50 contractor employee full-time equivalents.
According to US-CERT officials, they plan to fill the additional positions by leveraging graduates of the Scholarship for Service program, which provides cybersecurity-related scholarships to students who commit to serve the federal government for a period of time. However, these efforts are in their early stages and have not yet been fully planned or implemented. Consequently, whether these efforts will fully address all four of the analysis attributes is not known at this time. The organization has established capabilities that include key attributes of warning. For example, it develops and distributes a number of attack and other notifications targeted to different audiences with varying frequency. However, according to customers, these warning products are not consistently actionable and timely. Table 8 shows our analysis of the organization’s warning capability. Tables 9 and 10 show the types of warning products and the quantity of products issued during fiscal year 2007. As part of the Cyber Initiative, the enhancements to the Einstein program, as well as the reduction in the number of Trusted Internet Connections, could lead to more complete data. According to US-CERT officials, the improved data will lead to an enhanced warning capability that could provide the ability to issue targeted and actionable alerts in advance of actual cyber attacks. However, these efforts are in their early stages and have not yet been fully planned or implemented; thus, it is not clear whether these efforts will fully address the three warning attributes. US-CERT possesses a limited response capability to assist other entities in the containment, mitigation, and recovery from significant cyber incidents. For example, while it provides on-site assistance to various entities, its ability to provide response at the national level is hindered by limitations in the resources available and its authority over affected entities.
Table 11 shows our analysis of its response capability. To improve the organization’s response capability, US-CERT officials stated that they needed to perform internal exercises that test its national-level response capability more often than every 2 years, as is the case with the Cyber Storm exercise. It plans to develop “tabletop” exercises to test its response capabilities more frequently. In addition, according to NCSD officials, they are working collaboratively with other federal and nonfederal working groups to improve their performance measures so that they can understand the value and use of their products and make continuous improvements. However, until they do so, it is not clear whether these efforts will lead to US-CERT fully addressing the three response attributes. US-CERT faces a number of newly identified and ongoing challenges that impede it from fully implementing the key attributes and, in turn, establishing cyber analysis and warning capabilities essential to coordinating the national effort to prepare for, prevent, and respond to cyber threats. The new challenge is creating warnings that are actionable and timely—it does not consistently issue warnings and other notifications that its customers find useful. In addition, US-CERT continues to face four challenges that we previously identified: (1) employing predictive cyber analysis, (2) developing more trusted relationships to encourage information sharing, (3) having sufficient analytical and technical capabilities, and (4) operating without organizational stability and leadership within DHS. Until DHS addresses these challenges and fully incorporates all key attributes into its capabilities, it will not have the full complement of cyber analysis and warning capabilities essential to effectively performing its national mission. Developing and disseminating cyber threat warnings that enable customers to effectively mitigate a threat in advance of an attack can be challenging for US-CERT.
According to the organization’s Acting Deputy Director, it serves as the nation’s cyber analysis and warning center and must ensure that its warnings are accurate. In addition, owners of classified or law enforcement information must review and agree to the release of related information. Therefore, the organization’s products are subjected to a stringent review and revision process that could adversely affect the timeliness of its products—potentially adding days to the release if classified or law enforcement information must be removed from the product. For example, an official from a cybersecurity-focused organization at a university stated that the alerts from US-CERT generally arrive a day or two after they might have been helpful. An official from another private entity stated that the bureaucratic process US-CERT must follow prevents it from providing useful alerts in a timely manner and that, as a result, it does not have the credibility to drive a reaction when an alert is finally issued. Another private sector official stated that, in some cases, the organization gets information on cyber incidents and attacks faster from media sources than from US-CERT because US-CERT’s analysts need time to verify the reliability of the data they receive. In addition, according to federal officials responsible for determining cyber-related threats, US-CERT, as well as other organizations with cybersecurity-related responsibilities, must also balance the need to develop and release warnings with the activities of other organizations, such as law enforcement and intelligence support, to identify and mitigate cyber threats. For example, the release of a warning to address a threat or attack may also alert the intruders that their methods have been discovered and cause them to change their methods before an investigation of their activities is complete.
Further, when there is sensitive information to share, US-CERT officials stated that on numerous occasions, they were unable to share the details of threats to customers’ networks because no one within the federal agency or nonfederal entity possessed a security clearance high enough to receive the information. In some organizations, the individuals who do possess security clearances are in the upper echelons of the organization and do not possess a cyber or information security background. As a result, they are not always able to accurately comprehend and relay the threat information to those who would actually handle the mitigation efforts. In September 2007, we reported that DHS lacked a rapid, efficient process for disseminating sensitive information to private industry owners and operators of critical infrastructures. We recommended that DHS establish a rapid and secure process for sharing sensitive vulnerability information with critical infrastructure stakeholders, including vendors, owners, and operators; however, DHS has not yet fulfilled this recommendation. To provide actionable information to its customers, the organization attempts to combine incident information with related cyber threat information to determine the seriousness of the attack. However, according to the Acting Director of US-CERT, its efforts are limited by other federal entities’ abilities to determine specific cyber threats to the nation’s critical infrastructure. One reason for the lack of cyber threat data is that the task is complex and difficult and there are no established, generally accepted methodologies for performing such analysis. In addition, such entities are hampered by the limited number of analysts dedicated to cyber threat identification. For example, in January 2008, the Director of HITRAC stated that only 5 percent of HITRAC’s total number of analyst positions was focused on analyzing and identifying cyber threats to our nation’s critical information infrastructure. 
According to the director, HITRAC had received approval to double the number of cyber-related analysts and was in the process of filling those positions. In addition, the director stated that HITRAC’s primary focus is on identifying physical threats. US-CERT faces ongoing challenges that we identified in previous reports as impeding DHS’s ability to fulfill its cyber critical infrastructure protection responsibilities. Employing predictive cyber analysis—US-CERT has been unable to establish the solid foundation needed to perform predictive cyber analysis that would enable it to determine any broader implications from ongoing network activity, predict or protect against future threats, or identify emerging attack methods prior to an attack. Since 2001, we have identified the challenges associated with establishing strategic, predictive analysis and warning and have made recommendations that responsible executive branch officials and agencies establish such capabilities, including developing methodologies. According to the Acting Director of US-CERT, it has not been able to establish such capabilities because there is not a generally accepted methodology for performing predictive cyber analysis and warning. In addition, officials from US-CERT and other federal and nonfederal entities with cyber analysis and warning capabilities stated that while they can determine the motivations of the various threat sources for using cyber attacks, it is a formidable task to foresee, before an attack occurs, how those threat sources would actually conduct attacks and to establish indicators that such attacks are about to occur. Also, the relative newness of the cyber analysis and warning discipline and the immaturity of the related methodologies and tools add to the complexity.
Developing more trusted relationships to encourage information sharing—Implementing cyber analysis and warning capabilities, including all of the key attributes, requires that entities be willing and able to share information, including details about incidents, threats, vulnerabilities, and network operations. However, US-CERT continues to be challenged to develop relationships with external sources that would encourage information sharing. For example, nonfederal entities do not consistently fully disclose incident and other data—they filter sensitive details from the data reported, thus reducing its value to US-CERT. The lack of such relationships negatively affects the organization’s cyber analysis and warning capability. In 2005, we reported that entities within critical infrastructure sectors possess an inherent disincentive to share cybersecurity information with DHS. Much of their concern was that the potential release of sensitive information could increase the threat they face. In addition, when information was shared, it was not clear whether the information would be shared with other entities, such as other federal entities, state and local entities, law enforcement, or various regulators, or how it would be used or protected from disclosure. Alternatively, sector representatives expressed concerns that DHS was not effectively communicating information with them and had not matched private sector efforts to share valuable information with a corresponding level of trusted information sharing. We also identified information sharing in support of homeland security as a high-risk area in 2005, and we noted that establishing an effective two-way exchange of information to help detect, prevent, and mitigate potential terrorist attacks requires an extraordinary level of cooperation and perseverance among federal, state, and local governments and the private sector. 
Federal and nonfederal officials raised similar concerns about the ability to develop trusted relationships and share information with and between cyber analysis and warning entities, including US-CERT. For example, frequent staff turnover at NCSD and US-CERT hindered the ability to build trusted relationships with both public and private entities. Federal and nonfederal officials stated that reliance was placed on personal relationships to support sharing of sensitive information about cybersecurity and cyber incidents. However, according to the NCSD director, six senior staff members within the Office of Cybersecurity and Communications (the national focal point for addressing cybersecurity issues) were leaving for various reasons, affecting the ability to develop such relationships. In addition, private sector officials stated that their organizations continued to be hesitant to share information on vulnerabilities and threats because of the fear that such sharing might negatively affect their financial bottom line. For example, private sector officials stated that it was difficult to share unfiltered information with their respective infrastructure sector ISAC because a competitor operated the ISAC, thus negatively affecting the information received by US-CERT. Having sufficient analytical and technical capabilities—Obtaining and retaining adequately trained cyber analysts and acquiring up-to-date technological tools to implement the analysis capability attributes is an ongoing challenge to US-CERT and other analysis and warning centers, hindering their ability to respond to increasingly fast, nimble, and sophisticated cyber attacks. As we have reported, NCSD has had difficulty hiring personnel to fill vacant positions. We reported that once it found qualified candidates, some candidates decided not to apply or withdrew their applications because it took too long to be hired. 
This is still a concern because current staff has limited organizational backup and, in some cases, performs multiple roles. In addition, a private sector official stated that it is not clear whether the government has enough technical analysts to perform analysis on the large and complex data sets that are generated whether or not an incident is in progress. Keeping cyber analysts trained and up to date on the latest cybersecurity tools and techniques can be difficult. For example, a DOD official representing one of its cyber analysis and warning centers stated that its analysts must develop their expertise on the job because there is no formal training program available that teaches them how to detect and perform analysis of an anomaly or intrusion. A private sector official stated that while analysts are often trained to use existing tools, their understanding of the key attributes of analysis is often limited, resulting in a solution too late to be helpful. Analysts also need the appropriate technological tools to handle the volume, velocity, and variety of malicious data and activity they are faced with, according to federal officials. For example, although the Einstein flow data are collected in real time, the actual analysis is manually intensive and does not occur simultaneously or in real time. Another limiting factor of Einstein data is that US-CERT is unable to analyze the content of the potentially malicious traffic and must rely on the affected agency to perform any analysis of the content of the traffic. Thus, both determining the intent of anomalous activity and taking the necessary actions to address it are significantly slowed. In addition, officials from one private sector entity questioned whether agencies can sufficiently protect their networks using the tools they are mandated to use.
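To illustrate the kind of check that automated analysis could apply to flow metadata alone, without needing access to packet content, here is a minimal sketch. The record format, field names, and threshold are assumptions for illustration only; they are not a description of US-CERT's actual tooling.

```python
from collections import defaultdict

def scan_candidates(flows, max_distinct=100):
    """Flag source addresses that contact an unusually large number of
    distinct destination/port pairs within one observation window -- a
    classic scanning indicator that flow metadata alone can reveal,
    with no content analysis required."""
    seen = defaultdict(set)
    for src, dst, dport in flows:
        seen[src].add((dst, dport))
    return sorted(src for src, targets in seen.items()
                  if len(targets) > max_distinct)
```

A rule like this is cheap enough to run continuously as records arrive, which is the contrast the report draws with manually intensive review; deciding the intent behind a flagged source, however, still requires the content and context the flow records do not carry.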
As part of the efforts to address the President’s Cyber Initiative, US-CERT recently received approval to fill 80 new positions—30 government and 50 contractor—and is attempting to fill these analytical positions by extending offers to candidates in the National Science Foundation’s Scholarship for Service Program. However, these positions have yet to be completely filled with qualified candidates. Operating without organizational stability and authority—We have identified challenges regarding DHS’s organizational stability, leadership, and authority that affect US-CERT’s ability to successfully perform its mission. In the past, we have reported that the lack of stable leadership has diminished NCSD’s ability to maintain trusted relationships with its infrastructure partners and has hindered its ability to adequately plan and execute activities. While DHS has taken steps to fill key positions, organizational instability among cybersecurity officials continues to affect NCSD and thus US-CERT. For example, at least six senior staff members were leaving DHS’s Office of Cybersecurity and Communications, including the NCSD Director. Losing senior staff members in such large numbers has negatively affected the agency’s long-term planning and hampered the ability of NCSD/US-CERT to establish trusted relationships with public and private entities and to build adequate functions to carry out its mission, including expanded cyber analysis and warning capabilities, according to a former official. Furthermore, when new senior leadership has joined DHS, NCSD/US-CERT’s objectives were reassessed and redirected, thus affecting NCSD’s ability to maintain a consistent long-term strategy, according to the former official. For example, senior officials wanted to broaden the role and focus of US-CERT by having it provide centralized network monitoring for the entire federal government on a 24-hour-a-day, 7-day-a-week basis.
However, the Director of NCSD disagreed with this strategy, stating that each federal agency should have its own 24-hour-a-day, 7-day-a-week incident-handling capability (either in-house or contracted out) to respond to incidents affecting its own network. He viewed US-CERT as a fusion center that would provide analysis and warning for national-level incidents, support federal agency incident-handling capabilities during crisis situations, and offer a mechanism for federal agencies to coordinate with law enforcement. The organization’s future position in the government’s efforts to establish a national-level cyber analysis and warning capability is uncertain. Specifically, Homeland Security Presidential Directive 23, which is classified, creates questions about US-CERT’s future role as the focal point for national cyber analysis and warning. In addition, DHS established a new National Cybersecurity Center at a higher organizational level, which may diminish the Assistant Secretary for Cybersecurity and Communications’ authority as the focal point for the federal government’s cybersecurity-related critical infrastructure protection efforts, and thus US-CERT’s role as the central provider of cyber analysis and warning capabilities across federal and nonfederal critical infrastructure entities. As stated above, we did not make new recommendations in 2005 regarding cyber analysis and warning because our previous recommendations had not yet been fully implemented. At the time, we did recommend that the Secretary of Homeland Security require NCSD to develop a prioritized list of key activities for addressing the underlying challenges related to information sharing, hiring staff with appropriate capabilities, and organizational stability and authority. In addition, we recommended that performance measures and milestones for performing activities to address these challenges be identified.
However, since that time, DHS has not provided evidence that it has taken actions on these activities. In seeking to counter the growing cyber threats to the nation’s critical infrastructures, DHS has established a range of cyber analysis and warning capabilities, such as monitoring federal Internet traffic and issuing routine warnings to federal and nonfederal customers. However, while DHS has actions under way aimed at helping US-CERT better fulfill attributes identified as critical to demonstrating a capability, US-CERT still does not exhibit aspects of the attributes essential to having a truly national capability. It lacks a comprehensive baseline understanding of the nation’s critical information infrastructure operations, does not monitor all critical infrastructure information systems, does not consistently provide actionable and timely warnings, and lacks the capacity to assist in mitigation and recovery in the event of multiple, simultaneous incidents of national significance. Planned actions could help to mitigate these deficiencies. For example, as part of the Cyber Initiative, US-CERT plans to enhance its Einstein situational awareness tool so that it has real-time intrusion detection monitoring, a content analysis capability, and automated analysis functions. By placing the tool in front of Trusted Internet Connections, officials expect to obtain a governmentwide baseline view of the traffic and content entering and leaving federal networks. US-CERT also plans to hire 80 additional cyber analysts and to increase the frequency of exercises that test its national-level response capability. However, at this point, it is unclear whether these actions will help US-CERT—or whatever organizational structure is ultimately charged with coordinating national cyber analysis and warning efforts—achieve the objectives set forth in policy.
DHS faces a number of challenges that impede its ability to achieve its objectives, including fostering trusted relationships with critical infrastructure sectors, hiring and retaining skilled cyber analysts, ensuring that US-CERT warning products provide useful information in advance of attacks, enhancing predictive analysis, and ensuring that any changes brought about by HSPD 23 are marked by well-defined and transparent lines of authority and responsibility. We identified most of these challenges in our prior reviews and made broad recommendations to address them. DHS’s actions to address these challenges have not been adequate. Because of this, addressing these challenges is as critical as ever to overcoming the growing and formidable threats against our nation’s critical cyber infrastructure. If these challenges are not addressed, US-CERT will not be able to provide an effective national cyber analysis and warning capability. We recommend that the Secretary of Homeland Security take four actions to fully establish a national cyber analysis and warning capability. Specifically, the Secretary should address deficiencies in each of the attributes identified for monitoring, including establishing a comprehensive baseline understanding of the nation’s critical information infrastructure and engaging appropriate nonfederal stakeholders to support a national-level cyber monitoring capability; analysis, including expanding its capabilities to investigate incidents; warning, including ensuring consistent notifications that are targeted, actionable, and timely; and response, including ensuring that US-CERT provides assistance in the mitigation of and recovery from simultaneous severe incidents, including incidents of national significance.
We also recommend that the Secretary address the challenges that impede DHS from fully implementing the key attributes by taking the following six actions: engaging appropriate stakeholders in federal and nonfederal entities to determine ways to develop closer working and more trusted relationships; expeditiously hiring sufficiently trained cyber analysts and developing strategies for hiring and retaining highly qualified cyber analysts; identifying and acquiring technological tools to strengthen cyber analytical capabilities and handle the steadily increasing workload; developing predictive analysis capabilities by defining terminology, methodologies, and indicators, and engaging appropriate stakeholders in other federal and nonfederal entities; filling key management positions and developing strategies for hiring and retaining those officials; and ensuring that there are distinct and transparent lines of authority and responsibility assigned to DHS organizations with cybersecurity roles and responsibilities, including the Office of Cybersecurity and Communications and the National Cybersecurity Center. In written comments on a draft of this report (see app. II), signed by the Director of DHS’s GAO/OIG Liaison Office, the department concurred with 9 of our 10 recommendations. It also described actions planned and under way to implement the 9 recommendations. In particular, the department said that to fully establish a cyber analysis and warning capability, it plans to continue expansion of the Einstein intrusion detection system and increase US-CERT’s staffing. In addition, to address the challenges that impede DHS from fully implementing key cyber analysis and warning attributes, the department stated that it plans to continue to build new relationships and grow existing ones with stakeholders.
Further, to strengthen its analysis and warning capability and develop its predictive analysis capability, the department cited, among other things, its planned implementation of an upgraded version of Einstein. DHS took exception to our last recommendation, stating that the department had developed a concept-of-operations document that clearly defined roles and responsibilities for the National Cybersecurity Center and NCSD. However, this concept-of-operations document is still in draft, and the department could not provide a date for when the document would be finalized and implemented. DHS also commented on the report’s description of US-CERT as “the center.” Specifically, DHS was concerned that referring to US-CERT as the center might lead to confusion with the department’s newly established National Cybersecurity Center. DHS requested that we remove references to US-CERT as the center. We agree with this comment and have incorporated it in the report where appropriate. In addition to its written response, the department provided technical comments that have been incorporated in the report where appropriate. We also incorporated technical comments provided by other entities involved in this review. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Homeland Security, and other interested parties. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Powner at (202) 512-9286, or [email protected], or Dr. Nabajyoti Barkakati at (202) 512-4499, or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. Our objectives were to (1) identify key attributes of cyber analysis and warning capabilities, (2) compare these attributes with the United States Computer Emergency Readiness Team’s (US-CERT) current analysis and warning capabilities to identify whether there are gaps, and (3) identify US-CERT’s challenges to developing and implementing key attributes and a successful national cyber analysis and warning capability. To identify key attributes of cyber analysis and warning capabilities, we identified entities based on our previous work related to cyber critical infrastructure protection, information security, and information sharing and analyzed relevant laws, strategies, and policies. In addition, we solicited suggestions from a variety of sources familiar with cyber analysis and warning organizations, including GAO’s chief information technology officer and members of our Executive Council on Information Management and Technology, which is a group of executives with extensive experience in information technology management who advise us on major information management issues affecting federal agencies. On the basis of the entities identified, we selected those that were relevant and agreed to participate. We then gathered and analyzed policies, reports, and surveys; made site visits to observe the operation of cyber analysis and warning capabilities; conducted structured interviews; and received written responses to structured interview questions. 
These activities were performed, as appropriate, at the following entities:

Department of Defense: the Commander and Deputy Commander of the Joint Task Force—Global Network Operations and the Director of the Defense Information Systems Agency; the Commanding Officer, Navy Cyber Defense Operations Command; and the Chief Information Officer and Electronic Data Systems officials of the Navy’s Global Network Operations Center. We also toured the Joint Task Force’s Global Network Operations Center; the Navy’s Cyber Defense Operations Command Center; and the Navy Marine Corps Intranet’s Network Operations Center, Computer Incident Response Team Laboratory, Request Management Center, and Enterprise Global Networks Operations Center.

Department of Energy: the Associate Chief Information Officer for Cyber Security for the Department of Energy and other relevant officials, and the Chief Information Officer of the National Nuclear Security Administration and other relevant officials.

Department of Homeland Security: the Director of the National Cyber Security Division, the Acting Director of the National Cyber Security Division, and the Acting Director of US-CERT.

National Institute of Standards and Technology: the Director of the Information Technology Laboratory and officials from the Information Technology Laboratory’s Computer Security Division.

Private sector: Carnegie Mellon University’s CERT® Coordination Center, Internet Storm Center, Lumeta, Microsoft, MITRE, National Association of State Chief Information Officers, SANS Institute, SRI International, and Symantec.

Information sharing and analysis centers representing the following sectors: financial services, information technology, states, surface transportation, and research and education.

Federal agencies in the intelligence community.

On the basis of the evidence gathered and our observations regarding each entity’s capabilities and operations, we determined the key common attributes of cyber analysis and warning capabilities.
To verify the attributes we identified, we solicited comments from each entity regarding the attributes identified and incorporated the comments as appropriate. To determine US-CERT’s current national analysis and warning capabilities and compare them with the attributes we identified to find any gaps, we gathered and analyzed a variety of US-CERT policies, procedures, and program plans to identify the organization’s key activities related to cyber analysis and warning. We also observed US-CERT operations. In addition, we held interviews with key US-CERT officials, including the Director and Acting Director of the National Cyber Security Division, the Acting Director and Deputy Director of US-CERT, and other relevant officials, to further clarify and confirm the key initiatives we identified through our analysis of the aforementioned documents. In addition, we interviewed the Director of Intelligence for the Department of Homeland Security’s Homeland Infrastructure Threat and Risk Analysis Center to determine that organization’s interaction with US-CERT and its role in identifying cyber threats. We also interviewed the Deputy Director of the Department of Homeland Security’s National Cybersecurity Center to obtain information about its concept-of-operations document. We then compared those activities to the key attributes of cyber analysis and warning capabilities in order to determine US-CERT’s ability to provide cyber analysis and warning and identify any related gaps. To identify US-CERT’s challenges to developing and implementing the key attributes and a successful national cyber analysis and warning capability, we gathered and analyzed relevant documents, such as past GAO reports and studies by various cybersecurity-related entities, and interviewed key federal and nonfederal officials regarding the challenges associated with cyber analysis and warning.
On the basis of the information received and our knowledge of the issues, we determined the major challenges to developing and implementing the key attributes and a successful national cyber analysis and warning capability. We performed this performance audit between June 2007 and July 2008 in the Washington, D.C., metropolitan area; Atlanta, Georgia; Bloomington, Indiana; Pittsburgh, Pennsylvania; and Norfolk, Virginia; in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the persons named above, Neil Doherty, Michael Gilmore, Barbarol James, Kenneth A. Johnson, Kush K. Malhotra, Gary Mountjoy, Jennifer Stavros-Turner, and Amos Tevelow made key contributions to this report.
|
Cyber analysis and warning capabilities are critical to thwarting computer-based (cyber) threats and attacks. The Department of Homeland Security (DHS) established the United States Computer Emergency Readiness Team (US-CERT) to, among other things, coordinate the nation's efforts to prepare for, prevent, and respond to cyber threats to systems and communications networks. GAO's objectives were to (1) identify key attributes of cyber analysis and warning capabilities, (2) compare these attributes with US-CERT's current capabilities to identify whether there are gaps, and (3) identify US-CERT's challenges to developing and implementing key attributes and a successful national cyber analysis and warning capability. To address these objectives, GAO identified and analyzed related documents, observed operations at numerous entities, and interviewed responsible officials and experts. Cyber analysis and warning capabilities include (1) monitoring network activity to detect anomalies, (2) analyzing information and investigating anomalies to determine whether they are threats, (3) warning appropriate officials with timely and actionable threat and mitigation information, and (4) responding to the threat. GAO identified 15 key attributes associated with these capabilities. While US-CERT's cyber analysis and warning capabilities include aspects of each of the key attributes, they do not fully incorporate all of them. For example, as part of its monitoring, US-CERT obtains information from numerous external information sources; however, it has not established a baseline of our nation's critical network assets and operations. In addition, while it investigates whether identified anomalies constitute actual cyber threats or attacks as part of its analysis, it does not integrate its work into predictive analyses. Further, it provides warnings by developing and distributing a wide array of notifications; however, these notifications are not consistently actionable or timely. 
US-CERT faces a number of newly identified and ongoing challenges that impede it from fully incorporating the key attributes and thus from being able to coordinate the national effort to prepare for, prevent, and respond to cyber threats. The newly identified challenge is creating warnings that are consistently actionable and timely. Ongoing challenges that GAO previously identified, and made recommendations to address, include employing predictive analysis and operating without organizational stability and leadership within DHS, including possible overlapping roles and responsibilities. Until US-CERT addresses these challenges and fully incorporates all key attributes, it will not have the full complement of cyber analysis and warning capabilities essential to effectively performing its national mission.
In October 2004, Congress included a provision in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 that required the Secretary of Defense to develop a comprehensive policy for DOD on the prevention of and response to sexual assaults involving members of the Armed Forces. The legislation required that the department’s policy be based on the recommendations of the Department of Defense Task Force on Care for Victims of Sexual Assaults and on such other matters as the Secretary considered appropriate. Among other things, the legislation required DOD to establish a standardized departmentwide definition of sexual assault; establish procedures for confidentially reporting sexual assault incidents; and submit an annual report to Congress on reported sexual assault incidents involving members of the Armed Forces. In October 2005, DOD issued DOD Directive 6495.01, which contains its comprehensive policy for the prevention of and response to sexual assault, and in June 2006 it issued DOD Instruction 6495.02, which provides guidance for implementing its policy. DOD’s directive defines sexual assault as “intentional sexual contact, characterized by the use of force, physical threat or abuse of authority or when the victim does not or cannot consent. It includes rape, nonconsensual sodomy (oral or anal sex), indecent assault (unwanted, inappropriate sexual contact or fondling), or attempts to commit these acts. Sexual assault can occur without regard to gender or spousal relationship or age of victim. “Consent” shall not be deemed or construed to mean the failure by the victim to offer physical resistance. Consent is not given when a person uses force, threat of force, coercion, or when a victim is asleep, incapacitated, or unconscious.” The Under Secretary of Defense for Personnel and Readiness has the responsibility for developing the overall policy and guidance for the department’s sexual assault prevention and response program. 
Under the Office of the Under Secretary of Defense for Personnel and Readiness, DOD’s Sexual Assault Prevention and Response Office (within the Office of the Deputy Under Secretary of Defense for Plans) serves as the department’s single point of responsibility for sexual assault policy matters. These include providing the military services with guidance, training standards, and technical support; overseeing the department’s collection and maintenance of data on reported sexual assaults involving servicemembers; establishing mechanisms to measure the effectiveness of the department’s sexual assault prevention and response program; and preparing the department’s annual report to Congress. In DOD, active duty servicemembers have two options for reporting a sexual assault: (1) restricted, and (2) unrestricted. The restricted reporting option permits a victim to confidentially disclose an alleged sexual assault to select individuals and receive care without initiating a criminal investigation. A restricted report may only be made to a Sexual Assault Response Coordinator (SARC), victim advocate, or medical personnel. Because conversations between servicemembers and chaplains are generally privileged, a victim may also confidentially disclose an alleged sexual assault to a chaplain. In contrast, the unrestricted reporting option informs the chain of command of the alleged sexual assault and may initiate an investigation by the military criminal investigative organization of jurisdiction. Prior to December 2007, the Coast Guard only offered an option that would enable servicemembers to confidentially disclose an incident of sexual assault at the Coast Guard Academy. However, since then the Coast Guard has employed Coast Guard-wide a definition of sexual assault similar to DOD’s as well as similar options for reporting a sexual assault in its guidance, Commandant Instruction 1754.10C. 
Under the Coast Guard’s instruction, however, if the chain of command learns of an alleged sexual assault, it is required to notify the Coast Guard’s criminal investigative organization, the Coast Guard Investigative Service, which will initiate an investigation or inquiry. At the installation level, the coordinators of the sexual assault prevention and response programs are known as SARCs in DOD and as Employee Assistance Program Coordinators (EAPC) in the Coast Guard. Other responders include victim advocates, judge advocates, medical and mental health providers, criminal investigative personnel, law enforcement personnel, and chaplains. DOD has taken positive steps to respond to congressional direction by establishing policies and a program to prevent, respond to, and resolve reported sexual assault incidents involving servicemembers, and the Coast Guard, on its own initiative, has taken similar steps; however, DOD’s guidance may not address some important issues. Further, implementation of the programs is hindered by several factors, including (1) inconsistent support for the programs, (2) training that is not consistently effective, and (3) limited access to mental health services. In response to statutory requirements and recommendations from the Department of Defense Care for Victims of Sexual Assaults Task Force, DOD has established a program to prevent, respond to, and resolve sexual assaults involving servicemembers. DOD’s policy and implementing guidance for its program are contained in DOD Directive 6495.01 and DOD Instruction 6495.02, respectively.
Specific steps that DOD has taken include: establishing a standardized departmentwide definition of sexual assault; establishing a confidential option to report sexual assault incidents, known as restricted reporting; establishing a Sexual Assault Prevention and Response Office to serve as the single point of accountability for sexual assault prevention and response; requiring the military services to develop and implement their own policies and programs, based on DOD’s policy, to prevent, respond to, and resolve sexual assault incidents; establishing training requirements for all servicemembers on preventing and responding to sexual assault; and reporting data on sexual assault incidents to Congress annually. Although not explicitly required by statute, the Coast Guard has had a sexual assault prevention and response program in place since 1997. In December 2007, the Coast Guard on its own initiative updated its instruction to mirror DOD’s policy and to include a restricted option for reporting sexual assaults. In DOD, each of the military services has also established a Sexual Assault Prevention and Response Office with responsibility for overseeing and managing sexual assault matters within that military service. The Coast Guard’s Office of Work Life (within the Health, Safety and Work Life Directorate, which is under the Assistant Commandant for Human Resources), is responsible for overseeing and managing sexual assault matters within the Coast Guard. While the establishment of DOD’s program represents a noteworthy step, DOD’s directive and instruction may not adequately address some important issues, such as how to implement the program when operating in a deployed environment or in joint environments. 
Program officials we met with overseas told us that DOD’s guidance does not sufficiently take into account the realities of operating in a deployed environment, in which unique living and social circumstances can heighten the risks for sexual assault and program resources are more widely dispersed than they are in the United States, which can make responding to a sexual assault challenging. One program official we met with overseas told us that his area of responsibility includes six to seven installations spread out over an area the size of New Jersey, constituting a geographic challenge in terms of responding to sexual assaults. At another installation, we found no criminal investigative presence, and program officials told us that it can take 48 hours or longer for the criminal investigative organization with jurisdiction to respond to some sexual assaults. Similarly, program officials told us there is a need for better coordination of resources when a sexual assault occurs in a joint environment. At one overseas installation we visited, Coast Guard members told us that they were confused about which program they fell under—DOD’s or the Coast Guard’s—and thus who they should report an alleged sexual assault to. We also found that installations can have multiple responders responsible for responding to an assault, potentially leading to further confusion. Concerns over implementing the sexual assault prevention and response program in joint environments are also highlighted in the department’s fiscal year 2007 annual report. For example, DOD noted the need to address challenges that arise in environments wherein two or more services are operating together, while the Army noted that challenges with joint environments have often resulted in unnecessary duplication of services and inconsistent application of policy with regard to sexual assault matters. 
Commanders in DOD and the Coast Guard have taken actions to address incidents of sexual assault and are generally supportive of sexual assault prevention and response programs; however, we found evidence that some commanders do not support the programs. In addition, implementation of the programs may be hindered at installations where key program coordinator positions are a collateral duty because servicemembers must balance their duties with mission-related priorities, especially in deployed environments. While commanders—that is, company and field grade officers—in DOD and the Coast Guard have taken actions to address incidents of sexual assault, we found evidence that some commanders do not support the programs. According to DOD’s instruction, commanders and other leaders are responsible for advocating a strong program and effectively implementing sexual assault prevention and response policies. The Coast Guard’s instruction similarly requires that commanders and other leaders ensure compliance with the Coast Guard’s policies and procedures. At the installations we visited, we found that commanders were supportive of addressing incidents of sexual assault. For example, commanders told us that they set a zero tolerance policy for incidents of sexual assault, communicated the respective policies at command briefings, understood their roles and responsibilities in supporting the programs, and understood the need to protect victims. The results of a nongeneralizable survey we conducted support these statements. For example, at the 14 installations where we administered our survey, the percentage of servicemembers who indicated they thought their direct supervisor (military or civilian) would address sexual assault, should it occur at their current location, ranged from 91 to 98 percent. However, we found evidence that some commanders do not support the programs. 
For example, at three of the installations we visited program officials told us of meeting with resistance from commanders when attempting to place, in barracks and work areas, posters or other materials advertising the program or the options for reporting a sexual assault. A victim advocate at one Navy installation we visited told us that her command did not support the program and that her command did not feel that servicemembers in the unit should be able to utilize DOD’s restricted reporting option. According to the individual, the command demonstrated its resistance by routinely taking down any posters advertising the unit’s victim advocates or DOD’s reporting options. In some cases, commanders we spoke with told us that they supported the programs but did not like the restricted reporting option because they felt it hindered their ability to protect members of the unit or discipline alleged offenders. Some program officials told us that some commanders do not support the programs because they do not understand them or do not consider sexual assault matters to be a priority in the military. The following are some examples of what we found: At some of the installations we visited, commanders we spoke with were unfamiliar with the options for reporting a sexual assault, mistakenly believing that servicemembers could use the restricted option and still report a sexual assault to them—that is, without their being obligated to then initiate an investigation. Army unit victim advocates at one location we visited told us that senior enlisted leaders tend to ignore sexual assault matters until they become public knowledge and affect the morale of the unit. Marine Corps unit victim advocates at one location we visited told us that some commanders do not want to hear from them or from junior enlisted Marines about sexual assault matters. 
At some of the installations we visited, program officials told us that some commanders of all-male units do not believe that sexual assault matters are a problem for their units or that the programs are relevant to their units. For example, a SARC at one installation we visited told us that some commanders from all-male units have prevented her from providing required training to the units. Commanders who do not emphasize and prioritize sexual assault prevention and response programs—including those in all-male units—or who do not understand the policies and procedures effectively limit servicemembers’ knowledge about the program and ability to exercise their reporting options. Consequently, sexual assault prevention and response program coordinators’ efforts to raise awareness at these installations may also be limited. Program officials told us they need sufficient resources to appropriately implement sexual assault prevention and response programs. However, there is no direct funding for sexual assault prevention and response programs at military installations; to fund them, installations must draw on funds allocated to other programs. At some of the installations we visited, SARCs and other program officials told us that they lacked sufficient funding to promote the programs, train servicemembers, or otherwise raise servicemembers’ awareness of sexual assault matters. In such instances, program officials told us that they had to find creative ways to implement the programs. One Army SARC told us that because of limited funding she could not bring in any outside speakers during Sexual Assault Awareness Month and had to rely on donations from units to print posters advertising the program. Similarly, SARCs we met with in the Navy and Marine Corps told us that they had only limited resources to train servicemembers.
In the Coast Guard, program officials told us that they were expected to comply with the Coast Guard’s instruction to provide training to victim support personnel and servicemembers. However, they were not provided with funding and did not know how they were going to meet the new requirements. Program coordinators who are not provided sufficient funding by their commands cannot ensure that their program is appropriately implemented. At the installations we visited, we found that commanders have taken action against alleged sexual assault offenders. In both DOD and the Coast Guard, commanders are responsible for discipline of misconduct, including sexual assault, and they have a variety of judicial and administrative options at their disposal. Commanders’ options are specified in the Uniform Code of Military Justice (UCMJ) and the Manual for Courts-Martial and include (1) trial by courts-martial, (2) nonjudicial punishment, and (3) administrative actions. At the installations we visited, commanders told us that they were supportive of the need to dispose of sexual assault cases and were generally familiar with the options available to them. For further information on the disposition of sexual assaults in DOD and the Coast Guard, see appendix IV. To implement the sexual assault prevention and response program at military installations, DOD and the services rely largely on SARCs. DOD’s instruction directs the military services to establish the position of the SARC and criteria for selecting them. However, DOD’s instruction leaves to the military services’ discretion whether these positions are filled by military members, DOD civilian employees, or DOD contractors, and thus whether SARCs perform their roles as full-time or collateral duties. We found that the military services are using a variety of models for staffing the SARC position. 
For example, at the installations we visited in the United States, the Army, Navy, and Air Force were using full-time civilian or contractor employees, while the Marine Corps was using both civilian and military servicemembers for whom the duty was collateral. At the installations we visited overseas, we found that the Army assigned this position to servicemembers as a collateral duty, the Navy assigned it to a full-time civilian employee, and the Air Force assigned it to servicemembers as a full-time duty. We found that the time and resources dedicated to implementing the sexual assault prevention and response program are more constrained where program coordinator positions are staffed by servicemembers for whom these duties are collateral. Program officials with whom we spoke told us that SARCs’ ability to effectively implement DOD’s program depended on whether they served in full-time or collateral-duty positions. For example, Army SARCs overseas told us that in addition to the sexual assault prevention and response program they are also responsible for supporting the Army’s Equal Opportunity program, and that when they handled an equal opportunity complaint or had other mission requirements, those became their full-time job. As a result, they had less time to support the sexual assault prevention and response program. DOD has not systematically evaluated its policy for staffing SARCs; however, without evaluating its policy and the services’ processes for filling the SARC position, DOD is hindered in its ability to ensure that the SARCs can effectively perform their function in managing the sexual assault prevention and response program. The 13 EAPC positions in the Coast Guard are staffed by full-time federal civilian employees who are responsible for simultaneously managing multiple work-life programs, including sexual assault prevention and response, for a designated geographic region. 
Officials in the Coast Guard’s Office of Work Life, as well as an EAPC with whom we met, acknowledged that because of the number of programs they are responsible for managing, the EAPCs do not have the time to effectively launch and implement the Coast Guard’s sexual assault prevention and response program. As a result, these officials believe they will not be able to train servicemembers on the Coast Guard’s program, including the new restricted reporting option, while also providing assistance to victims and managing other work-life programs. Officials at Coast Guard headquarters estimate that they need an additional 13 EAPCs across the Coast Guard to address their workload requirements. Without evaluating its processes for staffing these positions, the Coast Guard is hindered in its ability to ensure that its sexual assault prevention and response program is effectively implemented. Although DOD and the Coast Guard require servicemembers to receive periodic training on their respective sexual assault prevention and response programs, our nongeneralizable survey and interviews and discussions with servicemembers and program officials revealed that not all servicemembers are receiving the required training, and some servicemembers who have received it nevertheless may not understand how to report a sexual assault using the restricted reporting option. To date, neither DOD nor the Coast Guard has evaluated the effectiveness of the training provided. Additionally, the military services are not consistently meeting DOD’s requirements for presenting training in specified formats to enable servicemembers to understand the nature of sexual assaults. Some servicemembers told us that the training they received was not engaging and, therefore, they did not pay attention; others told us that servicemembers do not always take the training seriously.
Both DOD and the Coast Guard require that servicemembers receive sexual assault prevention and response training annually; however, our survey and discussions with servicemembers revealed that not all servicemembers are receiving this training. In response to statutory requirements, DOD has established requirements for servicemembers to receive periodic sexual assault prevention and response training. Specifically, DOD’s instruction requires servicemembers to receive sexual assault prevention and response training both annually and prior to deploying to locations outside of the United States. Although not statutorily required to do so, the Coast Guard has developed its instruction largely to reflect DOD’s policies, and also requires its members to receive training annually. DOD’s and the Coast Guard’s annual training is required to provide all servicemembers with information on their options for reporting a sexual assault and sexual assault issues, such as the meaning of consent, offender accountability, and victim care. Our survey at seven installations in the United States revealed that most, but not all, servicemembers are receiving required annual training on their respective sexual assault prevention and response programs. Specifically, as table 1 shows, the percentage of servicemembers we surveyed at seven installations in the United States who indicated they had received the required training in the preceding 12 months ranged from 61 to 88 percent. Our interviews and discussions with servicemembers and program officials also revealed that not all servicemembers had received the required annual training within the preceding 12 months. Such servicemembers incur the risks of not knowing how to mitigate the possibility of being sexually assaulted or how to seek assistance if needed, or risk reporting the assault in a way that limits their option to seek treatment while maintaining confidentiality.
In some instances, we found that these servicemembers were aware that the training was required annually, but had not attended or received training within the preceding 12 months. For example, a junior officer at a Marine Corps installation told us that he had last received the required training 2 years earlier while stationed overseas, when the Marine Corps’ program was initially rolled out. He said that he had not received any subsequent training because he had likely “slipped through the cracks.” As another example, a junior enlisted servicemember at an Army installation told us that he had not received the required training within the preceding 12 months because he was on temporary duty assignment when his unit conducted the training. In other cases, we found that servicemembers were not familiar with the programs or had never received the required training. Program officials at the installations we visited told us that they face challenges in ensuring that all servicemembers receive the required training. For example, a SARC at an Army installation told us that while she informally tracks information on whether units have received required annual sexual assault prevention and response training, she has no way of knowing how many servicemembers within a unit have received it. She noted that tracking whether servicemembers have received required training is a unit-level responsibility. Her goal, which she noted is arbitrary, is to ensure that at least 80 percent of units have received this training. According to DOD’s instruction, military commanders, supervisors, and managers are responsible for the effective implementation of the services’ respective sexual assault prevention and response programs. However, we found evidence that not all commanders had received the required training or were familiar with the options for reporting a sexual assault. 
A senior officer at an Air Force installation told us that he had never received sexual assault prevention and response training, was not familiar with DOD’s options for reporting a sexual assault, and would encourage his servicemembers to address sexual assault matters by notifying their chain of command. Similarly, a senior officer at an Army installation we visited told us that he did not know of any option for reporting a sexual assault other than notifying the chain of command. With their commanders thus uninformed, servicemembers under their command might not be fully aware of their options for reporting a sexual assault. Also, at more than half of the installations we visited servicemembers and program officials told us that they believe commanders and other senior leaders do not always receive the required training, or if they do, do not understand the programs. For example, victim advocates at a Navy installation we visited told us that they do not believe many senior leaders receive required sexual assault prevention and response training. According to the victim advocates, leaders cannot support the program if they do not understand it. Servicemembers and program officials we spoke with also told us that problems occur when commanders and other senior leaders have not received the required training or are not familiar with the programs. For example, the SARC at one installation we visited told us that it is important that commanders receive the required training so that they understand what they can and cannot do with regard to sexual assault matters. According to the official, commanders who have not received training and are not informed about the program sometimes take incorrect actions, such as initiating their own investigations of allegations of sexual assault made using the restricted reporting option.
In addition to its annual training requirement, DOD, though not the Coast Guard, requires that servicemembers receive sexual assault prevention and response training prior to deploying to locations outside of the United States. However, our survey revealed that not all servicemembers are receiving this training. Specifically, as table 2 shows, at the seven installations where we administered our survey overseas, the percentage of servicemembers who indicated they had received training prior to deploying ranged from 52 to 90 percent, while the percentage of servicemembers indicating they had not received training prior to deploying ranged from 6 to 42 percent. Our interviews with individual servicemembers also revealed that not all servicemembers had received the required sexual assault prevention and response training prior to deploying. The SARC at one installation we visited told us that he believes many servicemembers are deploying overseas without having received the required predeployment training because too many servicemembers with whom he interacts are not familiar with the program. In some instances, we found that servicemembers may not be receiving the required training because DOD’s predeployment training requirements are not always enforced. For example, a general officer we met with in Iraq told us that as units are preparing to deploy commanders may not emphasize all predeployment training requirements, including those pertaining to sexual assault prevention and response. As a result, according to the general officer, servicemembers who have not received this training may not take sexual assault matters seriously during deployment. Such servicemembers may also not understand how to obtain assistance if a sexual assault were to occur. 
Though servicemembers may not receive required sexual assault prevention and response training prior to deploying, we found that some steps are being taken to provide the training to servicemembers once they arrive in a deployed location. For example, at the installations we visited overseas we found that SARCs and victim advocates were actively publicizing the program and providing training to servicemembers and units upon their arrival. The majority of respondents to our survey indicated that they had received required sexual assault prevention and response training and would know how to report a sexual assault using the restricted reporting option. However, as table 3 shows, the percentage of servicemembers we surveyed who indicated that they would not know, or were not sure, how to report a sexual assault using the restricted reporting option, despite having received the training, ranged from 13 to 43 percent for the seven installations where we administered our survey in the United States and from 13 to 28 percent for the seven installations where we administered our survey overseas. Similarly, our interviews with servicemembers also revealed that some servicemembers who had received the required training were confused about, or unfamiliar with, DOD’s options for reporting a sexual assault, as illustrated by the following examples: A junior enlisted servicemember at an Army installation told us that although he had received sexual assault prevention and response training as part of his annual training requirement, he did not believe that the Army allowed a report of sexual assault to be made without a formal investigation. A junior officer at an Air Force installation told us that his predeployment training covered sexual harassment and human trafficking but he was uncertain whether the training covered sexual assault matters or DOD’s reporting options.
A senior enlisted servicemember in Iraq told us that while she received sexual assault prevention and response training prior to deploying, the training focused on how females could protect themselves and did not cover DOD’s reporting options. To help servicemembers understand the nature of sexual assaults, DOD’s instruction requires that sexual assault prevention and response training be scenario-based, using real-life situations to demonstrate the entire cycle of reporting, response, and accountability procedures. DOD’s instruction also requires that training for junior servicemembers include group participation and interaction. However, our survey revealed that the military services are not consistently meeting DOD’s requirements for the format of the training. During the course of our review, we found that the services are utilizing a variety of formats, including instructor-led or computer- or web-based training, to provide servicemembers with required sexual assault prevention and response training. However, as table 4 shows, at 9 of the 14 locations where we administered our survey, more than half of the servicemembers indicated that the training they received did not include a participatory or scenario-based component. The Coast Guard’s instruction does not specify any requirements for the format of its sexual assault prevention and response training. However, according to an official in the Coast Guard’s Office of Work Life, the Coast Guard is in the process of reviewing its training requirements, including those for the format of the training. Further, the Coast Guard is considering establishing a requirement that sexual assault prevention and response training be interactive. At the installations we visited, servicemembers’ perceptions of the required training they received varied. 
For example, one junior enlisted servicemember told us the training she received was very helpful and covered everything she would want to know about sexual assault matters, including the meaning of consent, the difference between sexual assault and sexual harassment, what one should do if sexually assaulted, and the differences between DOD’s restricted and unrestricted reporting options. However, at about half the installations we visited, servicemembers and program officials told us the training relied heavily on PowerPoint briefings and some said that participants were not engaged. Victim advocates, who along with SARCs provide the required training, told us at one installation we visited that the training they provide to units does rely heavily on PowerPoint briefings, the material is not engaging, and many servicemembers do not pay attention during the training sessions. At more than half the installations we visited, servicemembers and program officials we spoke with told us that the training is not taken seriously and some described it as a vehicle for units to “check the box” indicating that they met the training requirement. Servicemembers and program officials also told us that the training provided to junior personnel is not always interactive. Several servicemembers told us that junior servicemembers receive the same training as senior enlisted members and officers. The Deputy Commander at an Army installation we visited overseas described this training as aimed at a very broad spectrum of servicemembers and ranks and not very in-depth. Similarly, a senior enlisted servicemember at a Navy installation told us that the training she has attended includes both junior and senior servicemembers, which can be intimidating for junior servicemembers, who are consequently less likely to speak up or ask questions. 
The SARC at one installation we visited said that the training she provides to units encompasses about 800 personnel at a time, which can make it difficult to allow for interaction or individual questions from any of the participants. DOD and the Coast Guard both require that sexual assault victims be made aware of the available mental health services. However, several factors, including a DOD-reported shortage of mental health care providers, difficulty in accessing mental health services overseas or in geographically remote locations, and servicemembers’ perceptions of stigma associated with seeking mental health care, affect servicemembers’ access to mental health care, and we could find no indication that DOD or the Coast Guard has performed an analysis to aid in addressing barriers to mental health care specifically for victims of sexual assault. To their credit, both DOD and the Coast Guard are taking steps to screen servicemembers for mental health disorders, such as post-traumatic stress disorder, which mental health providers in DOD with whom we spoke identified as one of the most common mental health concerns following a sexual assault. Both DOD and the Coast Guard require that servicemembers who are victims of sexual assault be made aware of mental health services, such as counseling. DOD’s instruction requires SARCs to coordinate medical and counseling services between military installations and deployed units for victims of sexual assault and collaborate with local community crisis counseling centers, as necessary, in order to augment or enhance DOD’s program. Similarly, the Coast Guard’s instruction requires that a health care professional ensure that any victim who reports a sexual assault be informed of his or her psychiatric care or counseling options. 
At the installations we visited, we found that program officials generally took steps to ensure that servicemembers who are sexually assaulted are made aware of their options for seeking mental health care and are able to access it, if desired. However, at some of the installations we visited we found instances in which program officials had not taken steps to ensure that servicemembers were aware of their options for seeking mental health care or otherwise had limited access to mental health care following a sexual assault. For example, at one installation we found that servicemembers had access only to the limited mental health services provided on base, and that the SARC did not know whether any memoranda of understanding were in place with local resources or practitioners in the community to provide servicemembers with additional options for accessing mental health care. With their SARC thus uninformed, any servicemember assaulted at the installation could be limited in his or her options for accessing mental health care if needed. Though both DOD and the Coast Guard require that servicemembers who are victims of sexual assault be made aware of mental health services, neither knows how many servicemembers have sought or received mental health services following a sexual assault because there is no requirement to collect or track such information. According to knowledgeable officials within DOD, doing so could be challenging because servicemembers may seek treatment from civilian providers who are not required to report any information to DOD. Both DOD and the Coast Guard collect limited information on the number of sexual assault victims who are referred for counseling. However, the information DOD collects is limited to the initial referral for each restricted report of a sexual assault; it does not indicate whether the victim actually received the counseling to which he or she was referred. 
Similarly, the information the Coast Guard collects is limited to only whether the victim was offered counseling. Officials at the Department of Veterans Affairs (VA) told us VA collects data on the cumulative number of veterans to whom it has provided mental health care for conditions resulting from military sexual trauma—which includes both sexual harassment and sexual assault—during their military career. However, because DOD and VA collect different data, comparisons cannot be made. Although DOD and the Coast Guard require that servicemembers who are victims of sexual assault be made aware of available mental health services, a DOD-reported shortage of health care providers—specifically mental health care providers—can hinder servicemembers’ access to such care. Such concerns are not new to DOD. For example, in 2007, the congressionally mandated DOD Task Force on Mental Health reported that the military health system lacks the fiscal resources and fully trained personnel needed to fulfill its mission to support psychological health in peacetime or to fulfill the enhanced requirements imposed during times of conflict. During the course of our review we found that concerns over a shortage of mental health providers persist. For example, officials at some of the installations we visited told us that one barrier to ensuring that victims of sexual assault receive mental health care if they desire it is the lack of adequate resources and staff at some installations. Similarly, mental health officials with the Navy’s Bureau of Medicine and Surgery told us that the Navy does not have enough medical or mental health professionals to staff all allotted positions. However, during the course of our review we did find that the military services were taking steps to address this challenge. For example, DOD has established a memorandum of understanding with the Public Health Service to enable its uniformed providers to work in military treatment facilities. 
Servicemembers deployed overseas or based at geographically remote installations in the United States or overseas typically have more difficulty in obtaining mental health services because of inherent challenges associated with such locales. For example, servicemembers deployed to small forward operating bases in Afghanistan or Iraq may have to travel or be transported to other military installations in the region. Program officials told us that this process typically involves the servicemember notifying his or her commander, who is responsible for the use of transportation assets, which may require that the servicemember disclose the reason for seeking mental health care. However, disclosing that reason could jeopardize the member’s ability to use the restricted reporting option and keep the sexual assault incident confidential. Servicemembers stationed in geographically remote locations may also have limited access to mental health care. At one installation we visited in the United States, officials told us that they had faced challenges hiring additional mental health providers given the installation’s geographically remote location. However, they noted that recent hires of psychologists had reduced servicemembers’ waiting time for counseling appointments at the installation from about 7 weeks to 2 days. Navy and Coast Guard members told us that access to mental health care on ships is limited and that servicemembers must wait until they can be transported to another ship with mental health assets, or until their ship arrives in port to access mental health services. Servicemembers also told us that it can be difficult to leave their ships in order to receive such care. Some mental health care officials we spoke with overseas said that the shortage of providers can make it even more difficult to seek mental health care, for any reason, when deployed overseas or in geographically remote locations in the United States or overseas. 
For example, at one installation we visited in Southwest Asia, we found that servicemembers had access to one mental health provider for only about 4 hours each week. In February 2008, the Army reported that such concerns remained largely unaddressed. Specifically, the Army’s Mental Health Advisory Team reported that in 2007, soldiers who were deployed in support of operations in Afghanistan and Iraq were reporting more difficulty in accessing behavioral health care than they had reported in 2006, and that behavioral health personnel were reporting a shortage of assets and an increase in burnout rates. Perceptions of stigma associated with seeking mental health care may also discourage servicemembers from doing so following a sexual assault. In 2007, the congressionally mandated DOD Task Force on Mental Health reported that stigma in the military associated with seeking mental health services remains a pervasive and critical barrier to accessing needed psychological care. Similarly, the Army’s Mental Health Advisory Team reported in 2008 that stigma continues to be a major issue in the willingness of servicemembers to seek care. DOD officials told us that servicemembers often do not seek mental health care for this reason or because they believe doing so could negatively impact their careers, such as by affecting their ability to obtain a security clearance. DOD recently took steps that may encourage servicemembers who require mental health care to seek professional help by successfully advocating a revision to Standard Form 86, Questionnaire for National Security Positions. Under the revision, applicants no longer need to disclose certain noncourt-ordered mental health care that they may have received in the preceding 7 years that was (1) strictly marital, family, or grief related, as long as it was not related to violence committed by the servicemember; or (2) strictly related to adjustments from service in a military combat environment. 
Further, in an April 2008 memorandum from the Secretary of Defense, DOD noted that professional care for mental health issues should not be perceived as jeopardizing an individual’s security clearance. However, officials with DOD’s Sexual Assault Prevention and Response Office told us that it is unclear whether these steps will encourage servicemembers who are victims of sexual assault to seek mental health care, or whether these revisions apply to servicemembers who have been sexually assaulted and seek mental health care. Mental health providers in DOD told us that post-traumatic stress disorder is one of the most common mental health concerns following a sexual assault. However, mental health officials told us that because the onset for post-traumatic stress disorder varies—a victim may develop post-traumatic stress disorder immediately, or it can be delayed—victims of sexual assault who seek care after the assault are treated for symptoms such as depression and anxiety at the time of their visit. Similarly, Coast Guard medical officials told us that the Employee Assistance Program Coordinator (EAPC) will ensure that a victim of sexual assault meets with a health care provider, who may evaluate and treat the patient for anxiety, depression, post-traumatic stress disorder, or other conditions, and refer the patient to the appropriate mental health specialist for acute and chronic care. DOD screens for mental health concerns, including post-traumatic stress disorder, as part of its system to assess the medical condition of servicemembers before and after deploying to locations outside the United States. The elements of DOD’s system include the use of a predeployment health assessment, a postdeployment health assessment, and a postdeployment health reassessment. During these assessments, a servicemember completes a form that includes questions used to screen for mental health concerns, including post-traumatic stress disorder, but not specifically for sexual assault. 
As we previously reported, post-traumatic stress disorder can develop following exposure to combat, natural disasters, terrorist incidents, serious accidents, or violent personal assaults like rape. The screening questions ask: Have you ever had any experience that was so frightening, horrible, or upsetting that, in the past month, you: have had any nightmares about it or thought about it when you did not want to? tried hard not to think about it or went out of your way to avoid situations that remind you of it? were constantly on guard, watchful, or easily startled? felt numb or detached from others, activities, or your surroundings? A health care provider reviews the completed form and may refer the servicemember for further evaluation if necessary. Further, within 30 days of redeployment, servicemembers are required to meet with a trained health care provider to discuss their responses to the postdeployment health assessment and any mental health or psychosocial issues that may be associated with the deployment. According to officials with DOD’s Sexual Assault Prevention and Response Office, a minimum of 4 weeks is needed to diagnose post-traumatic stress disorder, differentiating it from acute stress. In the Coast Guard, officials told us that EAPCs are responsible for informing sexual assault victims of their psychiatric care or counseling options. During such meetings, health care providers screen and treat servicemembers for mental health disorders, including post-traumatic stress disorder, and refer them to mental health specialists for additional acute or chronic care as appropriate. We found, based on responses to our nongeneralizable survey and a 2006 DOD survey, the most recent available, that occurrences of sexual assault may be exceeding the rates being reported, suggesting that DOD and the Coast Guard have only limited visibility over the incidence of these occurrences. 
We recognize that the precise number of sexual assaults involving servicemembers is not possible to determine, and that studies suggest sexual assaults are generally underreported in the United States. Nevertheless, our findings indicate that some servicemembers may choose not to report sexual assault incidents for a variety of reasons including the belief that nothing would be done or that reporting an incident would negatively impact their careers. In fiscal year 2007, DOD received 2,688 reports of alleged sexual assault made with either the restricted or unrestricted reporting option involving servicemembers as either the alleged offenders or victims. The Coast Guard, which did not offer the restricted reporting option during fiscal year 2007, received 72 reports of alleged sexual assault made using the unrestricted reporting option during this same time period. For additional information on reported sexual assault incidents in DOD and the Coast Guard for fiscal year 2007, see appendix V. At the 14 installations where we administered our survey, 103 servicemembers indicated that they had been sexually assaulted within the preceding 12 months, as shown in table 5. Of these, the number of servicemembers who indicated that they had been sexually assaulted within the preceding 12 months ranged from 3 to 11 per installation. Due to the transient nature of servicemembers, the alleged sexual assaults may not have occurred at the locations where we administered our survey. Of the 103 servicemembers who responded to our survey indicating that they had been sexually assaulted within the preceding 12 months, 52 indicated that they did not report the sexual assault incident. The number who indicated they did not report the sexual assault ranged from 1 to 6 servicemembers per installation. Table 6 provides information on the number of respondents to our survey who reported experiencing a sexual assault within the preceding 12 months. 
Servicemembers also told us that they were aware of alleged sexual assault incidents involving other servicemembers that were not reported to program officials. DOD’s fiscal year 2007 annual report and a Coast Guard program official with whom we spoke further support the view that servicemembers are not reporting all sexual assault incidents, as does the Defense Manpower Data Center’s 2006 Gender Relations Survey of Active Duty Members administered between June and September 2006. Issued in March 2008, the Defense Manpower Data Center survey found that of the estimated 6.8 percent of women and 1.8 percent of men in DOD who experienced unwanted sexual contact during the prior 12 months, the majority (an estimated 79 percent of women and 78 percent of men) chose not to report it. The Defense Manpower Data Center did not include data for the Coast Guard in its report. However, at our request, the Defense Manpower Data Center provided information on the Coast Guard that shows that an estimated 3 percent of female and 1 percent of male respondents reported experiencing unwanted sexual contact during the prior 12 months. According to a Defense Manpower Data Center official, additional information about respondents in the Coast Guard who chose not to report experiences about unwanted sexual contact is not available because the number of Coast Guard members who indicated they experienced unwanted sexual contact is very low and unreliable due to high margins of error. Earlier surveys conducted by some of the military services also indicated that servicemembers may not have been reporting all incidents of sexual assault. 
The Navy conducted a survey of its members in 2005 to assess, among other things, the likelihood that servicemembers would report a sexual assault incident to Navy authorities, and while the majority of both enlisted members and officers responding indicated they would report a sexual assault if they were the victim, as many as an estimated 10 percent of enlisted sailors and 10 percent of officers responded that they were unlikely to do so. Similarly, a voluntary nonprobability survey conducted by the Naval Inspector General from 2004 through early 2005 determined that 57 percent of victims who were sexually assaulted at some point in their Navy career did not report the incident. Further, the Army noted as part of DOD’s fiscal year 2007 annual report that recent Army survey data, which are not generalizable, indicate that as many as 70 percent of female soldiers who said they had experienced a sexual assault within the previous 12 months had not reported the incident. While the survey results suggest a disparity between the actual number of sexual assault incidents and the number of those reported, this is largely an expected result of anonymous surveys. Whereas formal reports, whether restricted or unrestricted, involve some level of personal identification and therefore a certain amount of risk on the part of the victim, the risks and incentives for servicemembers making anonymous reports are very different. Hence, anonymous survey results tend to produce higher numbers of reported incidents. Another factor obscuring the visibility that DOD and Coast Guard officials can have over the incidence of sexual assault is the fact that many of the individuals to whom the assaults may be reported, including clergy and civilian victim care organizations, civilian friends, or family, are not required to disclose these incidents. 
As a result, while DOD and the Coast Guard strive to capture an accurate picture of the incidence of sexual assault, their ability is necessarily limited. Our survey data revealed a number of reasons why servicemembers who experienced a sexual assault during the preceding 12 months did not report the incident. Commonly cited reasons by survey respondents at the installations we visited included: (1) the belief that nothing would be done; (2) fear of ostracism, harassment, or ridicule by peers; and (3) the belief that their peers would gossip about the incident. Survey respondents also commented that they would not report a sexual assault because of concern about being disciplined for collateral misconduct, such as drinking when not permitted to do so; not knowing to whom to make a report; concern that a restricted report would not remain confidential; the belief that an incident was not serious enough to report; or concern that reporting an incident would negatively impact their career or unit morale. 
The following are some examples of comments from survey respondents: A senior enlisted female commented that “many individuals do not come forward in the military out of fear of punishment because they have done something (i.e., drinking) that they could also get in trouble for.” A senior enlisted female commented that “most females are afraid to say anything to anyone in their chain of command because that person will go back and tell everyone in this command and it will get around to the whole entire unit as well as Brigade.” A junior enlisted male commented that “some servicemembers might feel like there is no point in telling anyone, especially if that person is higher rank than you because they might believe the higher ranking person would be believed over the lower ranking person.” A senior enlisted male commented that “peer pressure and embarrassment is a huge factor in why sexual assault is not always reported.” A male servicemember commented that he did not believe a sexual assault incident he experienced was “serious or offensive enough” to warrant reporting. A junior enlisted male commented that “just because a member of the service might have all the resources they need to report an assault without fear of reprisal doesn’t mean that all of them . . . . I believe many are afraid . . . public, making the victim seem . . . loose with their sexual actions.” Several servicemembers observed that reporting a sexual assault is perceived as something that can ruin a reputation. One junior enlisted female commented that “there are a lot of females who feel that an issue like sexual assault can ruin their reputation with other male soldiers or their unit.” The 2006 Gender Relations Survey of Active Duty Members identified similar reasons why servicemembers did not report unwanted sexual contact, including concern that reporting an incident could result in denial of promotions and professional and social retaliation. 
However, servicemembers also reported favorable results after reporting unwanted sexual contact to military authorities, including being offered counseling and advocacy services, medical and forensic services, legal services, and action being taken against alleged offenders. Respondents to our survey indicated they were supportive of the restricted reporting option as well. For example: A junior enlisted female observed that in her opinion servicemembers will be more likely to report an incident anonymously, commenting “I’m glad the options are there.” A female senior officer commented that “giving the victim a choice of making a restricted or unrestricted report is a positive change and allows that person the level of privacy they require.” A male senior officer observed that as awareness of SARCs increases, there has been a corresponding increase in reporting, commenting that he believes “word is getting out and reports are beginning to filter in, troops seem to be gaining confidence to report incidents.” While DOD and the Coast Guard have established some mechanisms for overseeing reports of sexual assaults involving servicemembers, they lack an oversight framework, and DOD lacks key information from the services needed to evaluate the effectiveness of the department’s sexual assault prevention and response program. DOD and the Coast Guard lack an oversight framework because they have not established a comprehensive plan that includes such things as clear objectives, milestones, performance measures, and criteria for measuring progress, nor have they established evaluative performance measures with clearly defined data elements with which to analyze sexual assault incident data. 
DOD and the military services provide information on reports of alleged sexual assaults annually to Congress in accordance with statutory requirements, but the effectiveness of these reports for informing Congress about incidents of sexual assault in the military services is limited by DOD’s lack of a consistent methodology for reporting incidents, and the means of presentation for some of the data is misleading. Further, DOD lacks access to data needed to conduct comprehensive cross-service analyses over time. Finally, the congressionally directed Defense Task Force on Sexual Assault in the Military Services has yet to begin its review, although DOD considers its work to be an important oversight element. Without an oversight framework, as well as more complete data, decision makers in DOD, the Coast Guard, and Congress lack information they need to evaluate and oversee the programs. DOD’s instruction establishes oversight mechanisms for the department’s sexual assault prevention and response program and assigns oversight responsibility to DOD’s Sexual Assault Prevention and Response Office (within the Office of the Deputy Under Secretary of Defense for Plans). DOD’s Sexual Assault Prevention and Response Office is responsible for: developing programs, policies, and training standards for the prevention, reporting, response, and program accountability of sexual assaults involving servicemembers; developing strategic program guidance and joint planning objectives; collecting and maintaining sexual assault data; establishing institutional evaluation, quality improvement, and oversight mechanisms to periodically evaluate the effectiveness of the department’s program; assisting with identifying and managing trends; and preparing the department’s annual report to Congress. 
To help oversee implementation of its sexual assault prevention and response program, in 2006 DOD established a Sexual Assault Advisory Council comprised of representatives from DOD’s Sexual Assault Prevention and Response Office, the military services, and the Coast Guard. The Sexual Assault Advisory Council’s responsibilities include advising the Secretary of Defense on the department’s sexual assault prevention and response policies, coordinating and reviewing the department’s policies and program, and monitoring progress. During the course of our review, the Sexual Assault Advisory Council began to develop preliminary baseline performance measures and evaluation criteria for assessing program implementation. However, DOD has not yet established time frames for completing and implementing these measures. DOD is also working with the military services to develop standards to assess program implementation and response during site visits planned for 2008. While the military services have developed mechanisms to collect data, efforts to systematically review and assess implementation of their programs are limited and vary by military service. The following are examples of what we found: The Army, in response to recommendations made by its Inspector General, has developed a plan that includes specific actions to be taken and time frames for completion to improve its program. In addition, the Army has developed and implemented a Sexual Assault Data Management System to track reported incidents and associated demographic information about victims and alleged offenders. The Navy is reviewing sexual assault incident reports received from Navy installations, and program officials told us they proactively contact installations that have not reported any sexual assault incidents during the reporting period. 
Further, each installation’s Fleet and Family Support Center conducts accreditation visits every 3 years to provide quality assurance and identify and resolve potential problems. For example, they have found that some servicemembers may not be aware of the reporting options and, in the past, some commands had not supported the program. While the Navy has not yet developed a database to track sexual assault incident data, program officials told us they plan to do so before the end of fiscal year 2008. Commanders in the Marine Corps use commanders’ protocols for responding to allegations of sexual assault to ensure they are accomplishing the intent of the program without overlooking any aspects. Further, the Marine Corps uses its Automated Inspection Reporting System to assess management and administration of the program at the installation level. Program officials in the Air Force told us they rely on SARCs to proactively provide feedback about the program through their chain of command and during monthly teleconferences. While Air Force officials acknowledge that they have not conducted either official or formal institutional reviews or assessments of the Air Force’s program, they have asked the Air Force’s Inspector General to review its first responder training and other aspects of the program to ensure compliance with DOD’s policy. The Air Force collects and maintains information about reported sexual assault incidents using multiple databases. Though DOD has established some oversight mechanisms, it has not established an oversight framework, which is necessary to ensure the effective implementation of its sexual assault prevention and response program. Our prior work has demonstrated the importance of outcome-oriented performance measures to successful program oversight and shown that having an effective plan for implementing initiatives and measuring progress can help decision makers determine whether initiatives are achieving their desired results. 
DOD has not established an oversight framework because it has not established a comprehensive plan that includes such things as clear objectives, milestones, performance measures, and criteria for measuring progress, nor has it established evaluative performance measures with clearly defined data elements with which to analyze sexual assault incident data. Because DOD’s sexual assault prevention and response program lacks an oversight framework, its program, as currently implemented, does not provide decision makers with the information they need to evaluate the effectiveness of the program, determine the extent to which the program is helping to prevent sexual assault from occurring, or ensure that servicemembers who are victims of sexual assault receive the care they need. As discussed above, DOD’s directive assigns oversight responsibility to DOD’s Sexual Assault Prevention and Response Office. However, this office has yet to establish metrics to facilitate program evaluation and assess effectiveness. For example, it has not developed specific metrics to: determine the frequency with which victims were precluded from making a confidential report using the restricted reporting option or reasons that precluded them from doing so; or track information on whether units have received required annual sexual assault prevention and response training, including how many servicemembers within a unit have received the training. Additionally, DOD’s Sexual Assault Prevention and Response Office has yet to establish performance goals—for example, a goal specifying the percentage of servicemembers within a unit who should receive required training. In the absence of such measures, Sexual Assault Prevention and Response Office officials told us that they currently determine the effectiveness of DOD’s program based on how well the military services are complying with program implementation requirements identified by DOD. 
While they acknowledged that to date their focus has been on program implementation as opposed to program evaluation, these officials noted that the Sexual Assault Advisory Council is in the initial stages of developing performance measures and evaluation criteria to assess program performance and identify conditions needing attention. Presently, DOD is working with the military services to develop guidelines to permit, among other uses, consistent assessment of program implementation during site visits conducted by DOD’s Sexual Assault Prevention and Response Office as well as by the military services at other times. However, time frames for developing and implementing these measures have not yet been established, and without such a plan and evaluative measures, the program does not provide decision makers with the information they need to evaluate the effectiveness and efficiency of the military services’ efforts. Without an oversight framework to guide program implementation, DOD risks that the military services will not collect all of the information needed to provide insight into the effectiveness of their programs. For example, officials have recognized that they will need additional data on sexual assault incidents both for purposes of oversight and to respond effectively to congressional inquiries as the program matures. However, the military services have encountered challenges in providing requested data because the request came after the start of the collection period. For example, with the exception of the Army, none of the military services was able to provide data as part of the fiscal year 2007 annual report to Congress on sexual assaults involving civilian victims, such as contractors and government employees. 
Without an oversight framework that includes clearly defined data collection elements, DOD’s Sexual Assault Prevention and Response Office risks not being able to respond effectively to congressional requests or to oversee the program appropriately. Oversight by the Coast Guard headquarters of its sexual assault prevention and response program is limited to the collection and maintenance of incident data and, like DOD, the Coast Guard has not established an oversight framework to guide implementation of its program. Although the Coast Guard recently revised its instruction to incorporate a restricted reporting option and to generally mirror DOD’s sexual assault prevention and response program, according to Coast Guard officials their focus to date has been on program implementation as opposed to program evaluation. Like DOD, the Coast Guard has not developed an oversight framework that includes clear objectives, milestones, performance measures, and criteria for measuring progress, nor has the Coast Guard developed performance measures to assess its program. Coast Guard program officials told us that they plan to conduct reviews of their program for compliance and quality in the future and will continue to review reported incident data, and they plan to leverage any metrics developed by DOD to assess their program. Further, the Coast Guard Investigative Service has begun to conduct limited trend analysis on reported incidents, including the extent to which alcohol or drugs were involved in alleged sexual assaults. However, like DOD, the Coast Guard is not able to fully evaluate the results achieved by its efforts, and it is unclear whether its program is achieving its goals. While there is no statutory reporting requirement for the Coast Guard, the Coast Guard voluntarily participates in DOD’s annual reporting requirement by submitting data to DOD’s Sexual Assault Prevention and Response Office. 
The Coast Guard Investigative Service collects data on unrestricted reports as part of its investigative responsibilities and shares these data with the Coast Guard Office of Work Life, which collects data on alleged assaults received using the restricted reporting option. The Coast Guard shares aggregate reported data with DOD’s Sexual Assault Prevention and Response Office. However, DOD does not include these data in its annual report and the Coast Guard does not provide these incident data to Congress because neither is required to do so. As a result, Congress does not have visibility over the extent to which sexual assaults involving Coast Guard members occur. DOD’s annual reports to Congress may not effectively characterize incidents of sexual assault in the military services because the department has not clearly articulated a consistent methodology for reporting incidents, and because the means of presentation for some of the data does not facilitate comparison. DOD’s annual reports to Congress include data on the total number of restricted and unrestricted reported incidents of sexual assault; however, meaningful comparisons of the data cannot be made because the respective offices that provide the data to DOD measure incidents of sexual assault differently. For example, in the military services, SARCs, who focus on victim care, report data on the number of sexual assault incidents alleged using the restricted reporting option based on the number of victims involved. In contrast, the criminal investigative organizations, which report data on the number of sexual assault incidents alleged using the unrestricted reporting option, report data on a per “incident” basis, which may include multiple victims or alleged offenders. Thus, the lack of a common means of presentation for reporting purposes has prevented users of the reports from making meaningful comparisons or drawing conclusions from the reported numbers. 
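The counting mismatch can be seen in a toy example; the figures below are fabricated for illustration and are not drawn from DOD’s reports. Tallying the same set of allegations per victim (as SARCs do) and per incident (as the criminal investigative organizations do) produces different totals.

```python
# Fabricated example: three alleged incidents, one involving two victims.
incidents = [
    {"id": 1, "victims": 1},
    {"id": 2, "victims": 2},  # a single incident with two victims
    {"id": 3, "victims": 1},
]

# Criminal investigative organizations count per incident ...
per_incident = len(incidents)
# ... while SARCs, focused on victim care, count per victim.
per_victim = sum(i["victims"] for i in incidents)

# The same allegations total 3 under one convention and 4 under the other,
# so the two figures cannot be compared directly.
print(per_incident, per_victim)
```

Until both offices tabulate against a common unit of measure, totals reported side by side in the annual report will remain incommensurable.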
Further, while we identified some improvements in the fiscal year 2007 report in the way DOD discusses some data, DOD’s annual report lacks certain data that we believe would facilitate congressional oversight or understanding of victims’ use of the reporting options. For example, while DOD’s annual report provides Congress with the aggregate numbers of investigations during the prior year for which commanders did not take action against alleged offenders, those aggregated numbers do not distinguish between cases in which evidence was found to be insufficient to substantiate an alleged assault versus the number of times a victim recanted an accusation or an alleged offender died. Also, though DOD’s annual report documents the number of reports that were initially brought using the restricted reporting option and later changed to unrestricted, DOD’s annual report includes these same figures in both categories—that is, the total number of restricted reports and the total number of unrestricted reports. An official in DOD’s Sexual Assault Prevention and Response Office told us that because the military services do not provide detailed case data to DOD, the department is not able to remove these reports from the total number of restricted reports when providing information in its annual report. However, we believe that double listing the figures is confusing. Also, while DOD’s Sexual Assault Prevention and Response Office has collected and reported incident data since calendar year 2004, the department lacks a baseline for conducting trend analysis over time because of changes in the way data are reported. Comparisons among data reported during calendar years 2005 and 2006 are difficult to make because the restricted reporting option was not available during the entirety of calendar year 2005. Significantly, direct comparisons cannot be made between fiscal year 2007 and prior years because of inconsistencies in the reporting periods. 
For example, changes to sections of the UCMJ dealing with sexual assault that took effect on October 1, 2007, led DOD to change the period of data collection from calendar year to fiscal year. Consequently, incident data reported in DOD’s calendar year 2006 annual report to Congress overlap with data reported in its fiscal year 2007 annual report for the months of October, November, and December 2006. However, because the military services provide incident data to DOD that are aggregated for each service, Sexual Assault Prevention and Response Office officials told us they cannot adjust the calendar year data to a fiscal year basis to facilitate trend analysis. Officials noted that each military service would need to manually adjust previously reported calendar year data to a fiscal year basis, and such an undertaking would be time intensive. Moreover, a Sexual Assault Prevention and Response Office official told us that these changes, which led DOD to revise the standard definition of sexual assault, will also prevent comparisons between fiscal year 2007 data and data in future years, except in general terms. Consequently, the way in which sexual assault incident data are collected and reported will change in the fiscal year 2008 annual report to Congress and, until investigations of sexual assault incidents reported prior to fiscal year 2008 are completed, both DOD’s original and revised standard definitions of sexual assault will be in use. Finally, DOD has not conducted its own analysis of the information contained in the military services’ annual reports or provided its assessment of their programs prior to forwarding these reports to Congress, in part because it is not explicitly required to include this type of assessment in its annual report. Without a firm baseline and consistent data collection, DOD will not be able to conduct trend analysis over time that provides insight into incident data collected, except in the most general terms. 
DOD’s Sexual Assault Prevention and Response Office is not able to conduct comprehensive cross-service trend analysis of sexual assault incidents because it does not have access to installation- or case-level data that would facilitate such analyses. DOD officials told us that the military services do not provide installation- or case-level incident data beyond those that are aggregated at the military-service level. These data are generally limited to information needed to meet statutory requirements for inclusion in the annual report to Congress. Service officials told us they do not want to provide installation- or case-level data to DOD because they are concerned that (1) data may be misinterpreted, (2) even nonidentifying data about the victim may erode victim confidentiality, and (3) servicemembers may not report sexual assaults if case-level data are shared beyond the service level. However, without access to such information, DOD does not have the means to identify such factors, and thus to fully execute its oversight role, including assessing trends over time. For example, without case-level data, DOD cannot determine the frequency with which sexual assaults are reported in each of the geographic combatant commands. Since 2004, DOD has required the Joint Staff to provide periodic information on sexual assaults reported in the U.S. Central Command’s area of responsibility because of the significant impact sexual assault has on mission readiness. However, DOD does not know the rate of reported sexual assault incidents in U.S. Central Command’s area of responsibility as compared with the rate in other geographic combatant commands, because the department does not require the Joint Staff to provide such information. Furthermore, installation- and case-level data may be useful to identify installations that over periods of time continue to have high rates of reported alleged sexual assault incidents as a percentage of the total population.
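A rate comparison of this kind could be sketched as follows; the installation names, incident counts, populations, and the 1.5x flagging cutoff are all illustrative assumptions, not data from this report.

```python
# Hypothetical installation-level data: (reported incidents, population).
installations = {
    "Installation A": (12, 8000),
    "Installation B": (45, 9500),
    "Installation C": (30, 4000),
}

def reporting_rate(reports, population):
    """Reported incidents per 1,000 servicemembers."""
    return 1000.0 * reports / population

rates = {name: reporting_rate(r, p) for name, (r, p) in installations.items()}
mean_rate = sum(rates.values()) / len(rates)

# Flag installations well above the mean (1.5x is an arbitrary cutoff) as
# candidates for a closer look -- either a supportive climate that
# encourages reporting, or a unit needing more prevention resources.
flagged = sorted(name for name, rate in rates.items() if rate > 1.5 * mean_rate)
print(flagged)
```

Normalizing by population is what makes installations of different sizes comparable; raw incident counts alone would simply flag the largest bases.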
For example, in analyzing the services’ installation-level data we identified, for one of the military services, three installations with higher reporting rates for sexual assaults than other installations. Continuation of such trends at these installations over time could indicate best practices, such as supportive command climates, that encourage victims to report sexual assaults. Conversely, such trends may identify installations or units where additional training and resources to prevent sexual assaults may be needed. Such information, if available, could better inform decisions by officials in DOD’s Sexual Assault Prevention and Response Office to select installations within each service to visit for program assessments and identify factors to consider when making programmatic corrections. To provide further oversight of DOD’s sexual assault prevention and response program, the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 required the Defense Task Force on Sexual Assault in the Military Services to conduct an examination of matters relating to sexual assault in cases in which members of the Armed Forces are either victims or offenders. As part of its examination, the law directs the task force to assess, among other things, DOD’s reporting procedures, collection, tracking, and use of data on sexual assaults by senior military and civilian leaders, as well as DOD’s oversight of its sexual assault prevention and response program. The law does not require an assessment of the Coast Guard’s program. Senior officials within the Office of the Under Secretary of Defense for Personnel and Readiness have stated that they plan to use the task force’s findings to evaluate the effectiveness of DOD’s sexual assault prevention and response program. However, as of July 2008, this task force had yet to begin its review.
Senior task force staff members we spoke with attributed the delays to challenges in appointing the task force members and member turnover. As of July 2008, however, they told us that all 12 task force members had been appointed, and that their goal was to hold their first open meeting, and thus begin their evaluation, in August 2008. They also told us that they estimate that by the end of fiscal year 2008 DOD will have expended about $15 million since 2005 to fund the task force’s operations. According to senior task force staff members, much of this funding has gone toward the task force’s operational expenses, including salaries for the civilian staff members, contracts, travel, and rent. The law directs that the task force submit its report to the Secretary of Defense and the Secretaries of the Army, Navy, and Air Force no later than 1 year after beginning its examination. If such a goal were met, the task force’s evaluation could be complete by August 2009. However, as of the time of our review, it was uncertain whether the task force would be able to meet this goal. DOD and the Coast Guard have taken positive steps to prevent, respond to, and resolve reported incidents of sexual assault. However, a number of challenges—such as limited guidance for implementing DOD’s policies in certain environments, some commanders’ support and limited resources for the programs, training that is not consistently effective, and limited access to mental health services—could undermine the effectiveness of some of their efforts. Left unchecked, these challenges could undermine DOD’s and the Coast Guard’s efforts by eroding servicemembers’ confidence in the programs or decreasing the likelihood that sexual assault victims will turn to the programs for help when needed.
Also, although DOD and the Coast Guard have established some oversight mechanisms, without an oversight framework with specific goals, measures, and milestones for assessing results, DOD and the Coast Guard are limited in their ability to measure the success of their efforts. Further, without information on the incidence of sexual assault in the Coast Guard, Congress’ visibility over the extent to which sexual assaults involving Coast Guard members occur is limited. Finally, without a firm baseline and consistent data collection, DOD will not be able to conduct trend analysis over time that enables it to determine where its program is working and where it is not, and therefore may have difficulty judging the overall successes, challenges, and lessons learned from its program. As a result, congressional decision makers may not have the visibility they need into the incidence of sexual assault reports involving servicemembers. To improve oversight of sexual assault incidents involving servicemembers in the Coast Guard, Congress may wish to consider requiring the Coast Guard to submit to Congress sexual assault incident and program data annually that are methodologically comparable to those required of DOD. We recommend that the Secretary of Defense take the following nine actions: To improve implementation of the sexual assault prevention and response program in DOD, direct the Under Secretary of Defense for Personnel and Readiness to: Review and evaluate the department’s policies for the prevention of and response to sexual assault to ensure that adequate guidance is provided to effectively implement the program in deployed and joint environments. Evaluate the military services’ processes for staffing and designating key installation-level program positions, such as SARCs, at installations in the United States and overseas, to ensure that these individuals have the ability and resources to fully carry out their responsibilities.
Review and evaluate sexual assault prevention and response training to ensure the military services are meeting training requirements and to enhance the effectiveness of the training. Systematically evaluate and develop an action plan to address any factors that may prevent or discourage servicemembers from accessing mental health services following a sexual assault. To ensure that the sexual assault prevention and response program has the strong support of military commanders and other senior leaders necessary for implementation, direct the service secretaries to emphasize to all levels of command their responsibility for supporting the program, and review the extent to which commanders support the program and resources are available to raise servicemembers’ awareness of sexual assault matters. To enhance oversight of the sexual assault prevention and response program in DOD, direct the Under Secretary of Defense for Personnel and Readiness to: Require the Sexual Assault Prevention and Response Office to develop an oversight framework to guide continued program implementation and evaluate program effectiveness. At a minimum, such a framework should contain long-term goals, objectives, and milestones; performance goals; strategies to be used to accomplish goals; and criteria for measuring progress. Improve the usefulness of the department’s annual report as an oversight tool both internally and for congressional decision makers by establishing baseline data to permit analysis of data over time, and reporting data so as to distinguish cases in which (1) evidence was insufficient to substantiate an alleged assault, (2) a victim recanted, or (3) the allegations of sexual assault were unfounded. 
To enhance oversight of the military services’ sexual assault prevention and response programs, direct the service secretaries to provide installation-level incident data to the Sexual Assault Prevention and Response Office annually or as requested to facilitate analysis of sexual assault-related data and better target resources over time. To help facilitate the assessment and evaluation of DOD’s sexual assault prevention and response program, direct the Defense Task Force on Sexual Assault in the Military Services to begin its examination immediately, now that all members of the task force have been appointed, and to develop a detailed plan with milestones to guide its work. We recommend that the Commandant of the Coast Guard, in order to improve implementation and enhance oversight of the Coast Guard’s sexual assault prevention and response program, take the following two actions: Evaluate its processes for staffing key installation-level program positions, such as the EAPC, to ensure that these individuals have the ability and resources to fully carry out their responsibilities. Develop an oversight framework to guide continued program implementation and evaluate program effectiveness. At a minimum, such a framework should contain long-term goals, objectives, and milestones; performance goals; strategies to be used to accomplish goals; and criteria for measuring progress. In written comments on a draft of this report, both DOD and the Coast Guard concurred with all of our recommendations. DOD’s comments are reprinted in appendix II, and the Coast Guard’s comments are reprinted in appendix III. The Coast Guard also provided technical comments which we incorporated where appropriate. 
In concurring with our first recommendation, that the department should review and evaluate its policies for the prevention of and response to sexual assault to ensure that adequate guidance is provided to effectively implement the program in deployed and joint environments, DOD asserted that it had originally brought this issue to our attention. We disagree with DOD’s characterization of this issue. As noted in our report, program officials with whom we met overseas informed us of their concerns that DOD’s guidance does not address some important issues—such as how to implement the program when operating in a deployed environment. In some instances, the military services also informed us of their concerns over the adequacy of DOD’s guidance. However, officials with DOD’s Sexual Assault Prevention and Response Office did not express any such concerns to us during the course of our review, nor did they indicate that they were taking any actions to address them. Nonetheless, DOD in its written comments cited several positive actions it is taking to meet the intent of our recommendation, such as its use of Policy Assistance Team visits to ensure that all challenges have been identified. In concurring with our recommendations aimed at improving implementation of the department’s sexual assault prevention and response program—including that DOD should (1) evaluate the military services’ processes for staffing key installation-level program positions, (2) review and evaluate sexual assault prevention and response training, and (3) systematically evaluate and develop an action plan to address any factors that may prevent or discourage servicemembers from accessing mental health services following a sexual assault—DOD commented that several efforts are currently underway or are planned to address these issues.
For example, DOD stated that the department is currently using Policy Assistance Team site visits to evaluate the effectiveness of SARCs as implemented by each of the military services, and to elicit feedback from servicemembers about training content, frequency, media, and effectiveness. We commend DOD for taking immediate steps in response to our recommendations, such as including a review of the military services’ implementation of the SARC position and training as part of its Policy Assistance Team site visits. As DOD noted in its comments, additional efforts to address our recommendations are planned. We believe it is important for the department to continue taking positive actions in response to our recommendations. In its concurrence with our recommendation that the department should emphasize to all levels of command their responsibility for supporting the program and should review the extent to which support and resources are available to raise servicemembers’ awareness of sexual assault matters, DOD noted that a letter from the Secretary of Defense is currently in draft for dissemination to the service secretaries emphasizing commander involvement and support for the program. DOD further noted that it will examine whether there is a need to update commanders’ training to enhance their understanding and support of the program. However, DOD offered no specific information with regard to the steps it will take to review the extent to which commanders support the program and resources are available to raise servicemembers’ awareness of sexual assault matters. We continue to believe that conducting such an assessment is critical to understanding the extent to which commanders actually support the program.
In its concurrence with our recommendations for enhancing oversight of the department’s sexual assault prevention and response program—including that DOD should (1) develop an oversight framework to guide continued program implementation and evaluate program effectiveness, and (2) establish baseline data to facilitate analysis of data over time—DOD noted that it had established its sexual assault prevention and response program very rapidly to meet an emergent need, but that now that the program is established it must transition to a mature program with long-term goals, objectives, milestones, and criteria for measuring progress. We commend the department for committing to develop an oversight framework for its program. In its concurrence with our recommendation that DOD should direct the military services to provide installation-level incident data to the Sexual Assault Prevention and Response Office annually or as requested, DOD noted that U.S. Central Command, the Army, the Navy, and the Air Force all have expressed concerns regarding the reporting of this installation-level data. However, DOD also acknowledged—as we note in our report—that access to installation-level data by DOD’s Sexual Assault Prevention and Response Office is critical for oversight and visibility over alleged sexual assault incidents and stated it is drafting a letter for the Secretary of Defense’s signature ordering the military services to provide installation-level data to DOD’s Sexual Assault Prevention and Response Office. Finally, in its concurrence with our recommendation that the department should direct the Defense Task Force on Sexual Assault in the Military Services to begin its examination immediately and to develop a detailed plan with milestones to guide its work, DOD noted that the task force’s first meeting was held during mid-August 2008.
Further, DOD provided additional information on the steps the task force plans to take to assess DOD’s program as part of its evaluation, including conducting site visits and meeting with servicemembers and first responders. However, DOD provided no information regarding the milestones that will guide the task force’s work. We continue to believe that such milestones are a key element of the plan the task force needs to guide its work. The Coast Guard also concurred with our recommendations aimed at improving implementation and enhancing oversight of its sexual assault prevention and response program—including that the Coast Guard should (1) evaluate its process for staffing key installation-level program positions and (2) develop an oversight framework to guide continued program implementation and evaluate program effectiveness. We commend the Coast Guard for its planned initiatives, including ensuring that program experts have the resources to fully conduct their duties and responsibilities, working with DOD to align its goals, strategies, and measurements for consistency and improved reporting, and seeking to coordinate an integrated approach and programmatic view to improve its program. However, we note that it is important for these efforts to include an oversight framework with long-term goals, objectives, milestones, and criteria for measuring progress. We are sending copies of this report to interested congressional members and staff; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Commandant of the Coast Guard. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-3604 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. To determine the extent to which the Department of Defense (DOD) and the Coast Guard have developed and implemented policies and programs to prevent, respond to, and resolve sexual assault incidents involving servicemembers, we reviewed legislative requirements and obtained and analyzed DOD’s, the military services’, and the Coast Guard’s guidance and requirements for the prevention, response, and resolution of sexual assault. We also interviewed officials in DOD, the Army, the Air Force, the Navy, and the Marine Corps, as well as in the Department of Homeland Security and the Coast Guard, to obtain a comprehensive understanding of their efforts to implement programs to prevent and respond to reported incidents of sexual assault. We also obtained and analyzed DOD’s annual reports to Congress for calendar years 2004, 2005, 2006, and fiscal year 2007 and compared the statutory requirements for DOD’s annual report to Congress to the data included in the annual reports. In addition, we visited 15 military installations in the United States and overseas where we met with program officials and responders to discuss their experiences preventing and responding to incidents of sexual assault and the challenges they face implementing sexual assault prevention and response programs. The locations we visited were selected based on a number of factors, including units’ mission, availability of personnel given training or mission requirements, and recent deployment histories. We focused our overseas efforts on military installations located in the U.S. Central Command’s area of responsibility because of past congressional concerns about the prevalence of sexual assaults in deployed locations and combat zones.
At the installations we visited, we met with Sexual Assault Response Coordinators and victim advocates in DOD; Employee Assistance Program Coordinators in the Coast Guard; and judge advocates, medical and mental health personnel, criminal investigative personnel, law enforcement personnel, and chaplains in DOD and the Coast Guard. We also met with military commanders, including company and field grade officers, and senior enlisted servicemembers to discuss the steps they have taken to establish a command climate that discourages sexual assault from occurring, as well as their personal experiences responding to and resolving reported incidents of alleged sexual assault in their units. We also obtained servicemembers’ perspectives on issues regarding command support, training, and access to medical and mental health services by administering a nonprobability survey to selected servicemembers and conducting one-on-one structured interviews with servicemembers at 14 of the 15 installations we visited. To understand how commanders dispose of sexual assault cases, we reviewed the Uniform Code of Military Justice and Manual for Courts-Martial and reviewed data reported by DOD for fiscal year 2007. To obtain an understanding of the processes used to treat mental health disorders, we met with knowledgeable officials from DOD, the military services, the Coast Guard, and the Department of Veterans Affairs. To determine the extent to which DOD and the Coast Guard have visibility over reports of sexual assault involving servicemembers, we obtained and analyzed data for reported sexual assaults in both DOD and the Coast Guard for fiscal year 2007. 
To assess the reliability of the reported sexual assault data, we discussed these data with officials in DOD and the Coast Guard as well as with officials in the military services to gain an understanding of the processes and databases used to collect and record incident data, and to understand existing data quality control procedures and known limitations of the data. In comparing sexual assault data we received directly from DOD installations with installation-level data we received from the services, we found some discrepancies that officials were not able to explain. Even with these discrepancies, we found these data were sufficiently reliable to present an overall description of reported incidents of sexual assault. While we also reviewed DOD’s annual reports to Congress for calendar years 2004, 2005, and 2006, changes in the way DOD collects and reports incident data preclude direct comparisons and analysis across calendar and fiscal years. To understand why servicemembers may not report sexual assault incidents, we obtained servicemembers’ perspectives on issues regarding sexual assault prevention and response programs in the military services and the Coast Guard through our survey and one-on-one structured interviews of servicemembers at 14 of the 15 installations we visited. We also reviewed the results of surveys conducted by DOD and the military services since 2004. In reviewing the survey documentation provided by the Defense Manpower Data Center, the Army, and the Navy, we found these data were sufficiently reliable, and we present these survey results to illustrate that multiple survey research sources indicate that there may be underreporting by those who experience sexual assaults.
To determine the extent to which DOD and the Coast Guard exercise oversight over reports of sexual assault involving servicemembers, we interviewed key officials with DOD’s and the military services’ respective Sexual Assault Prevention and Response offices and the Coast Guard’s Office of Work Life to obtain a comprehensive understanding of the processes, procedures, and controls used for monitoring and overseeing the programs. We also interviewed representatives of the Defense Task Force on Sexual Assault in the Military Services, which is statutorily required to undertake an examination of sexual assault matters in the Armed Forces, to discuss the task force’s progress. We reviewed various pertinent documents, including meeting minutes for DOD’s Sexual Assault Advisory Council, federal internal control standards, and prior GAO reports on the use of performance measures to evaluate programmatic efforts. We also reviewed reports issued by the services’ inspectors general and examined DOD’s and the Coast Guard’s responses to recommendations from prior related studies. In addition, we analyzed installation-level data for reported sexual assaults in both DOD and the Coast Guard for fiscal year 2007. To obtain servicemembers’ perspectives on issues regarding sexual assault prevention and response programs in DOD and the Coast Guard, we administered a total of 3,750 confidential surveys to a nonprobability sample of randomly selected servicemembers and conducted more than 150 one-on-one, structured interviews with randomly selected servicemembers at 14 of the 15 locations we visited. In the United States, the locations we visited included Camp Lejeune, North Carolina; Fort Bliss, Texas; Fort Drum, New York; Integrated Support Command Portsmouth, Virginia; Lackland Air Force Base, Texas; Marine Corps Base Quantico, Virginia; and Naval Station Norfolk, Virginia. 
Overseas, the locations we visited included Al Udeid Air Base, Qatar; Balad Air Base, Iraq; Camp Arifjan, Kuwait; Camp As Saliyah, Qatar; Camp Ramadi, Iraq; Camp Stryker, Iraq; Logistics Support Area Anaconda, Iraq; and Naval Support Activity, Bahrain. We did not administer our survey or conduct one-on-one structured interviews at Camp As Saliyah at the request of the Army because many of the servicemembers stationed there are on rest and relaxation tours during their overseas deployment. Of the 3,750 confidential surveys we administered, 711 surveys were administered in Iraq; 852 in Kuwait, Qatar, and Bahrain collectively; and 2,187 at locations across the United States. We considered conducting surveys of servicemembers using probability samples that would allow generalizing the results to all servicemembers at each installation we visited. However, because of the difficulties in identifying accurate and complete lists of servicemembers present at an installation as of a specific date from which to draw samples, particularly for installations outside the United States, and the administrative burden it would have placed on the installation commands, we did not pursue this. Instead, we conducted nonprobability surveys with randomly selected servicemembers to reflect all ranks and both men and women at 14 of the 15 installations we visited. Table 7 provides information on the number of servicemembers we surveyed at each location. To select the participants for our surveys and one-on-one structured interviews, we requested that the locations we visited provide us with a list of available personnel. To the extent possible, we requested that this list not include personnel who were deployed, on temporary duty status, or otherwise not available to attend our survey sessions at the time of our visit. From the lists provided we randomly selected participants based on gender and rank. 
Participants were categorized according to the following ranks: junior enlisted (encompassing the ranks of E1-E4); mid-enlisted (encompassing the ranks of E5-E6); senior enlisted (encompassing the ranks of E7-E9); warrant officers and company grade officers (encompassing the ranks of W1-W5 and O1-O3); and field grade officers (encompassing the ranks of O4-O6). To ensure maximum participation by selected servicemembers, we provided the locations we visited with lists of primary and alternate selections for both the survey sessions and one-on-one structured interviews. Because of the sensitivity of the information we were seeking, we took several steps to help assure a confidential environment during our survey sessions. First, we did not document the names of participants in any of our sessions. Further, we surveyed participants separately based on rank and gender; for instance, junior enlisted men were surveyed separately, as were junior enlisted women. We used this same approach for mid- and senior enlisted servicemembers; warrant and company grade officers; and field grade officers. Finally, we had male GAO analysts survey male servicemembers and female GAO analysts survey female servicemembers. Similarly, in an attempt to encourage an open discussion during our one-on-one structured interviews, but still protect the confidentiality of the servicemembers, we did not document their names. Because we did not select survey and interview participants using a statistically representative sampling method, our survey results and the comments provided during our interview sessions are nongeneralizable and therefore cannot be projected across DOD, a service, or any single installation we visited. However, the survey results and comments provide insight into the command climate and implementation of sexual assault prevention and response programs at each location at the time of our visit. 
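The selection procedure described above (random draws within each gender-and-rank cell, with primary and alternate lists) can be sketched as follows. This is a minimal illustration, not GAO's actual procedure; the roster shape, function name, and cell labels are hypothetical.

```python
import random
from collections import defaultdict

# Map pay grades to the rank strata described in the report.
RANK_STRATA = {
    **{f"E{i}": "junior enlisted" for i in range(1, 5)},
    **{f"E{i}": "mid-enlisted" for i in range(5, 7)},
    **{f"E{i}": "senior enlisted" for i in range(7, 10)},
    **{f"W{i}": "warrant/company grade" for i in range(1, 6)},
    **{f"O{i}": "warrant/company grade" for i in range(1, 4)},
    **{f"O{i}": "field grade" for i in range(4, 7)},
}

def select_participants(roster, n_primary, n_alternate, seed=None):
    """Randomly draw primary and alternate participants within each
    (gender, rank-stratum) cell of an installation roster.

    `roster` is a list of (member_id, gender, pay_grade) tuples --
    a hypothetical data shape, not one taken from the report.
    """
    rng = random.Random(seed)
    cells = defaultdict(list)
    for member_id, gender, grade in roster:
        stratum = RANK_STRATA.get(grade)
        if stratum is not None:  # skip grades outside the defined strata
            cells[(gender, stratum)].append(member_id)
    selections = {}
    for cell, members in cells.items():
        rng.shuffle(members)  # random order within the cell
        selections[cell] = {
            "primary": members[:n_primary],
            "alternate": members[n_primary:n_primary + n_alternate],
        }
    return selections
```

Drawing alternates up front, as the report describes, lets a session proceed when a primary selectee is unavailable without redrawing the cell.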
To develop our survey questions, we reviewed several DOD surveys and studies of issues such as command climate and sexual harassment and sexual assault in the military. We also reviewed the military services’ and the Coast Guard’s policies and training materials for programs to prevent and respond to incidents of sexual assault. Because the scope of our review included focusing on military installations in the United States and the U.S. Central Command area of responsibility, we developed two survey questionnaires—the first focusing on the perspective of a servicemember stationed in the United States (see app. VI) and the second on that of a servicemember deployed outside the United States (see app. VII). We worked with social science survey specialists to develop our survey questionnaires. Because these were not sample surveys, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database can introduce unwanted variability into the survey results. We took steps in the development of the questionnaires, the data collection, and the data analysis to minimize these nonsampling errors. For example, prior to administering the survey, we pretested the content and format of the questionnaire with servicemembers at Marine Corps Base Quantico, Virginia, and Fort Meade, Maryland, to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) respondents were able to provide the information we were seeking, and (4) the questions were unbiased. We made changes to the content and format of our final questionnaires based on the results of our pretesting. 
We administered our surveys and conducted our one-on-one structured interviews at the locations we visited between September 2007 and March 2008. Because our surveys and questions asked participants to consider the frequency with which some things, such as training, have occurred over the past 12 months, participants’ responses may cover the period between September 2006 and March 2008. We visited or contacted the following organizations during our review:

Department of Defense
- Defense Manpower Data Center, Arlington, Virginia
- Defense Task Force on Sexual Assault in the Military Services, Alexandria, Virginia
- Office of the Under Secretary of Defense for Personnel and Readiness
- Office of the Deputy Under Secretary of Defense for Plans, Washington, D.C.
- Sexual Assault Prevention and Response Program Office, Washington, D.C.
- Office of the Assistant Secretary of Defense for Health Affairs, Falls Church, Virginia
- Defense Center of Excellence for Psychological Health and Traumatic Brain Injury, Rosslyn, Virginia
- U.S. Central Command, MacDill Air Force Base, Florida
- Office of the Chairman, Joint Chiefs of Staff
- J-1, Manpower and Personnel, Washington, D.C.
- J-4, Logistics, Washington, D.C.

Department of the Army
- Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs, Washington, D.C.
- Office of the Chief of Chaplains, Crystal City, Virginia
- Office of the Chief of Public Affairs, Washington, D.C.
- Office of the Deputy Chief of Staff, G-1 Personnel, Sexual Assault Prevention and Response Program Office, Rosslyn, Virginia
- Office of the Inspector General, Washington, D.C.
- Office of the Judge Advocate General, Rosslyn, Virginia
- Office of the Provost Marshal General, Washington, D.C.
- Office of the Surgeon General, Fort Sam Houston, Texas
- Army Central Command, Fort McPherson, Georgia
- Army Combat Readiness Center, Fort Rucker, Alabama
- Army Criminal Investigation Command, Fort Belvoir, Virginia
- Army Family and Morale, Welfare and Recreation Command, Alexandria, Virginia
- Army Forces Command, Fort McPherson, Georgia
- Army Medical Command, Fort Sam Houston, Texas
- Army Research Institute, Arlington, Virginia
- Army Training and Doctrine Command, Fort Monroe, Virginia
- Army Military Police School, Fort Leonard Wood, Missouri
- Camp Arifjan, Kuwait
- Camp As Saliyah, Qatar
- Camp Stryker, Iraq
- Fort Bliss, Texas
- Fort Drum, New York
- Fort Meade, Maryland
- Logistics Support Area Anaconda, Iraq

Department of the Air Force
- Office of the Chief of Chaplains, Bolling Air Force Base, Washington, D.C.
- Office of the Inspector General, Arlington, Virginia
- Office of the Judge Advocate General, Washington, D.C.
- Office of Special Investigations, Andrews Air Force Base, Maryland
- Office of the Surgeon General, Falls Church, Virginia
- Sexual Assault Prevention and Response Program Office, Washington, D.C.
- Air Education and Training Command, Randolph Air Force Base, Texas
- Al Udeid Air Base, Qatar
- Balad Air Base, Iraq
- Lackland Air Force Base, San Antonio, Texas

Department of the Navy
- Bureau of Medicine and Surgery, Washington, D.C.
- Bureau of Naval Personnel, Millington, Tennessee
- Commander, Navy Installation Command, Washington, D.C.
- Fleet and Family Support Program, Counseling, Advocacy, and Prevention Program, Washington, D.C.
- Naval Criminal Investigative Service, Washington, D.C.
- Naval Education Training Command, Pensacola, Florida
- Center for Personal and Professional Development, Virginia Beach, Virginia
- Navy Medical Manpower Personnel Training and Education Command
- Office of the Assistant Secretary of the Navy, Manpower and Reserve Affairs, Washington, D.C.
- Office of the Naval Inspector General, Washington, D.C.
- Office of the Chief of Navy Chaplains, Washington, D.C.
- Office of the Judge Advocate General, Washington, D.C.
- Naval Station Norfolk, Virginia
- Naval Support Activity, Bahrain

United States Marine Corps
- Criminal Investigative Division, Arlington, Virginia
- Manpower and Reserve Affairs, Sexual Assault Prevention and Response Office, Quantico, Virginia
- Office of the Chaplains, Arlington, Virginia
- Office of the Judge Advocate Division, Arlington, Virginia
- Camp Lejeune, North Carolina
- Camp Ramadi, Iraq
- Marine Corps Base Quantico, Virginia

Department of Homeland Security
- Office for Civil Rights and Civil Liberties, Washington, D.C.
- Coast Guard Investigative Service, Arlington, Virginia
- Health and Safety Directorate, Office of Work Life, Washington, D.C.
- Office of the Chaplain of the Coast Guard, Washington, D.C.
- Office of Civil Rights, Washington, D.C.
- Office of the Coast Guard Headquarters Chaplain, Washington, D.C.
- Office of Military Justice, Washington, D.C.
- Fifth District, Sector Hampton Roads
- Integrated Support Command Portsmouth, Portsmouth, Virginia
- Yorktown Training Center, Yorktown, Virginia
- Patrol Forces Southwest Asia, Naval Support Activity, Bahrain

Veterans Health Administration
- National Center for Posttraumatic Stress Disorder, White River Junction, Vermont
- Women Veterans Health Division, Washington, D.C.

We conducted this performance audit from June 2007 through August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In both the Department of Defense (DOD) and the Coast Guard, commanders are responsible for discipline of misconduct, including sexual assault, and they have a variety of judicial and administrative options at their disposal. 
Commanders’ options are specified in the Uniform Code of Military Justice (UCMJ) and the Manual for Courts-Martial and include:
- Trial by court-martial, the most severe disposition option, which can lead to many different punishments including death, prison time, forfeiture of pay and allowances, reduction in rank, and punitive separation from military service.
- Nonjudicial punishment, pursuant to Article 15 of the UCMJ, which allows for a number of punishments such as reducing a member’s grade, forfeiting pay, adding extra duty, and imposing restrictions on freedom.
- Administrative actions, which are corrective measures that may result in a variety of actions such as issuing a reprimand, admonition, counseling, extra military instruction, or the administrative withholding of privileges. Other actions include rehabilitation and reassignment, administrative reduction for inefficiency, bar to reenlistment, and administrative separation.
Commanders may also elect to take no action if evidence of an offense is not sufficient. In determining punishment, commanders may use many administrative options in conjunction with courts-martial convictions or nonjudicial punishments. The use of such actions can have significant negative career and employment repercussions for the accused, both within the military and in the civilian community. The Manual for Courts-Martial provides a list of factors that commanders should consider when determining how to dispose of a criminal offense. 
These factors include:
- the character and military service of the accused;
- the nature of and circumstances surrounding the offense and the extent of harm caused;
- the appropriateness of the authorized punishment to the particular accused or offense;
- possible improper motives of the accuser;
- reluctance of the victim or others to testify;
- cooperation of the accused in the apprehension or conviction of others;
- the availability and likelihood of prosecution by another jurisdiction and the existence of jurisdiction over the accused and the offense; and
- the availability and admissibility of evidence.
Ordinarily, the immediate commander of an individual accused or suspected of committing an offense is responsible for determining how to dispose of the offense. However, an immediate commander who lacks sufficient authority to take action may forward the matter to a superior commander for disposition. A decision by a lower-level commander does not prevent a different disposition by a superior commander. Further, commanders’ decisions are typically made after consulting with the supporting legal office (e.g., judge advocate). DOD collects and reports data in its annual report to Congress on the disposition of reported sexual assault incidents in the military. Investigations of sexual assaults and the outcomes of these cases may cross reporting periods, and commanders may not always have jurisdiction to take actions against some alleged offenders. DOD reported that there were 1,955 completed investigations of reported sexual assault cases in fiscal year 2007 resulting from all unrestricted reports of alleged sexual assault incidents made during or prior to fiscal year 2007. These 1,955 completed investigations involved 2,212 alleged offenders including servicemembers who fall under a military commander’s legal authority and nonservicemembers, such as civilians or foreign nationals, who may not be subject to military law. 
Some cases had multiple alleged offenders, victims, or both. Of the 2,212 alleged offenders resulting from all investigations completed in fiscal year 2007, commanders had sufficient evidence of a crime to support taking actions against 600 (27 percent) of these alleged offenders. Action against 572 alleged offenders was still pending as of September 30, 2007; once these dispositions are completed, commanders will have taken action against a total of 1,172 alleged offenders (53 percent). As shown in table 8, as of September 30, 2007, slightly more than half of these alleged offenders received command actions consisting of courts-martial, nonjudicial punishment, or other administrative actions or discharges. Judge advocates told us that commanders almost always dispose of rapes through courts-martial. As shown in table 9, commanders did not take direct action against 1,040 alleged offenders for a variety of reasons. For example, some of these alleged offenders were not subject to military law, other alleged offenders could not be identified, and in some instances the alleged sexual assault was unsubstantiated or unfounded, or there was insufficient evidence that an offense occurred. Although DOD does not track information about indirect actions commanders may take against offenders who are not subject to military law, judge advocates at installations we visited overseas told us that commanders could bar a foreign national or contractor who commits a crime from the installation, but were otherwise limited in actions they could take against alleged offenders who are not subject to the UCMJ. They told us that generally, commanders must rely on foreign governments to prosecute foreign nationals who commit crimes. Officials also stated that because there is no formal system to track individuals barred from installations, it is not possible to ensure that foreign nationals barred from one base are barred from all bases in a geographic region. 
Commanders also have limited avenues to address misconduct or crimes committed by contractors. During fiscal year 2007, the Coast Guard Investigative Service completed investigations for 62 of the 72 sexual assault incidents reported during fiscal year 2007. For these 62 completed investigations, the Coast Guard identified 67 alleged offenders including servicemembers who are under a military commander’s legal authority as well as nonservicemembers who may not be subject to military law. Of the 67 alleged offenders, commanders had sufficient evidence of a crime to support taking action against 19 alleged offenders (see table 10). Actions against 23 alleged offenders were still pending as of April 30, 2008. Commanders did not take action against 25 alleged offenders because the allegation was unsubstantiated or unfounded, the evidence was insufficient, the victim recanted, or the alleged offender died; the alleged offender was not identified; or the alleged offender was a nonservicemember who was not subject to the UCMJ. The Department of Defense (DOD) is required by law to collect and report data on sexual assault incidents involving servicemembers in active duty status to Congress annually. The Coast Guard collects similar data, but does not report these data to Congress because it is not statutorily required to do so. Servicemembers on active duty in DOD may report an alleged sexual assault using either the unrestricted or restricted reporting options. As previously discussed, an unrestricted report of an alleged sexual assault incident is provided to the chain of command or law enforcement for investigation. The military criminal investigative organizations within each military service and the Coast Guard are responsible for investigating crimes, including sexual assaults in which servicemembers are either alleged offender(s) or victim(s) and for documenting case data including information on alleged offenders and victims and the disposition of cases. 
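The disposition tallies quoted in this appendix reconcile arithmetically; the following sketch simply re-derives the percentages from the reported counts. The dictionary keys are illustrative labels, not DOD or Coast Guard terminology.

```python
# DOD fiscal year 2007 disposition figures as quoted in this appendix.
dod = {"action_supported": 600, "pending": 572, "no_direct_action": 1040}
total_alleged_offenders = 2212

# The three disposition categories account for all alleged offenders.
assert sum(dod.values()) == total_alleged_offenders

# Completed-plus-pending actions: 600 + 572 = 1,172, i.e. 53 percent.
acted_or_pending = dod["action_supported"] + dod["pending"]
assert acted_or_pending == 1172
assert round(100 * acted_or_pending / total_alleged_offenders) == 53
assert round(100 * dod["action_supported"] / total_alleged_offenders) == 27

# Coast Guard: 67 alleged offenders in 62 completed investigations.
cg = {"action_taken": 19, "pending": 23, "no_action": 25}
assert sum(cg.values()) == 67
```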
A restricted report is a confidential report of an alleged sexual assault that can be made without initiating an investigation or notifying the chain of command. Within DOD, a restricted report may be made to either a Sexual Assault Response Coordinator (SARC) or a victim advocate at an installation; within the Coast Guard, a restricted report may be made to the Employee Assistance Program Coordinator (EAPC) or victim support person; and within both DOD and the Coast Guard a restricted report may also be made to medical personnel. When a restricted report is made, a commander is usually notified by the SARC or EAPC that an assault has occurred; however, the commander should not be informed of the victim’s identity or any other information that could lead to identification, such as gender or rank. The SARC in DOD or the EAPC in the Coast Guard generally collects limited data about the alleged victim and the alleged incident because the purpose of the restricted reporting option is to provide assistance to victims rather than collect incident-related statistics. In DOD, SARCs provide these data to their service-level Sexual Assault Prevention and Response Office, whereas in the Coast Guard the EAPC provides similar data to the Office of Work Life. Regardless of the reporting option used, victims in both DOD and the Coast Guard can receive medical care, advocacy, and counseling services. At any time, an alleged victim may choose to change from a restricted report to an unrestricted report and participate in an investigation. Of the 2,688 reports of sexual assault incidents that DOD received during fiscal year 2007, 2,085 were made using the unrestricted reporting option. However, a number of these reports were not substantiated as of September 30, 2007, when DOD compiled data for inclusion in its fiscal year 2007 annual report to Congress. According to DOD officials, a case may not be substantiated for a number of reasons. 
For example, a victim may recant an accusation, thus preventing an investigation from proceeding; evidence may be found insufficient to substantiate the alleged assault; or the alleged offender may have died. As of September 30, 2007, DOD reported that about 36 percent (741) of investigations of alleged sexual assault were ongoing. According to DOD’s fiscal year 2007 annual report, in 72 percent (1,511) of the 2,085 unrestricted reports of alleged sexual assault, the alleged victims were servicemembers; in the remaining 28 percent (574) of unrestricted reports the alleged victims were nonservicemembers, such as civilians or foreign nationals. About 60 percent of these reports involved an alleged rape and about one-third involved alleged indecent assaults. DOD also reported that about 9 percent (133) of all sexual assaults reported during fiscal year 2007 using the unrestricted reporting option were made by males who were either servicemembers or nonservicemembers. Table 11 shows the number of servicemembers and victims by gender who reported a sexual assault incident during fiscal year 2007 using the unrestricted reporting option. Table 12 shows the number of investigations of reports of alleged sexual assault made during fiscal year 2007 by type of alleged offense and whether the victim was a servicemember or nonservicemember. Because reported data are incident-based and a single sexual assault may involve more than one subject or victim, the numbers of investigations and reports in table 12 do not necessarily reflect the number of actual alleged victims or offenders. According to DOD’s fiscal year 2007 annual report, these 2,085 reports involved 2,243 alleged victims, of whom 1,620 were servicemembers and 623 were nonservicemembers. The 2,085 unrestricted reports involved 1,908 alleged offenders who were servicemembers and 92 who were nonservicemembers. The identities of 305 alleged offenders were unknown. 
DOD reported that about 8 percent of victims for all investigations completed during fiscal year 2007 were males. Table 13 shows the number of completed investigations by type of sexual assault and gender of victim. During fiscal year 2007, DOD reported that servicemembers initially made 705 reports of alleged sexual assault using the restricted reporting option. However, in 102 of these instances, victims chose to change from a restricted to unrestricted report. According to DOD, about 69 percent (489) of the 705 restricted reports involved an alleged rape and almost 18 percent (125) involved alleged indecent assaults. Table 14 shows the number of alleged reports of sexual assault made using the restricted reporting option by type of alleged offense during fiscal year 2007. According to a DOD official, because the military services do not provide case-level data to DOD, the department is not able to determine the type of offense for the 102 restricted reports that were changed to unrestricted reports. DOD reported that male servicemembers made almost 7 percent (41) of the 705 initial reports of sexual assaults using the restricted reporting option during fiscal year 2007. Because no investigation is conducted when a victim reports a sexual assault using the restricted reporting option, the numbers of reports in table 14 reflect the number of actual alleged victims. Since January 2004, the Coast Guard has voluntarily collected data on sexual assaults involving its members as either the alleged offender or victim, although it is not subject to the same statutory requirements as DOD for collecting these data or reporting such information to Congress. The Coast Guard, which did not offer a restricted reporting option until December 2007, documented 72 total reports of alleged sexual assaults involving Coast Guard members during fiscal year 2007. 
As shown in table 15, the majority of these reports were for alleged rape and the majority of alleged victims were Coast Guard members. However, not all of these reported alleged sexual assaults have been substantiated because investigations may have been ongoing, evidence was found to be insufficient to substantiate the allegation, or victims may have recanted accusations. For example, the Coast Guard reported 10 investigations of alleged sexual assaults were ongoing as of April 2008. Because data are incident-based and a single assault may involve more than one alleged offender or victim, the number of reports in table 15 does not necessarily reflect the number of actual alleged offenders or victims. The Coast Guard Investigative Service determined that 78 alleged offenders and 78 alleged victims were involved in the 72 incidents reported during fiscal year 2007. The majority of alleged offenders and victims were Coast Guard members, as shown in table 16. The Coast Guard identified 67 victims in the completed investigations, 51 of whom were servicemembers in the Coast Guard or another military service and 16 of whom were not servicemembers. All but one victim in the completed investigations were female. Table 17 shows the number of completed investigations by type of sexual assault and gender of the victim.

This survey is part of a review the U.S. Government Accountability Office (GAO)—an agency of the Congress—is conducting of sexual assault prevention and response programs in the military services. The purpose of this survey is to provide insight into the effectiveness of each service’s sexual assault policies, training, procedures, and response capabilities. Findings will be used in reports and testimony to Congress. Providing information on this survey is voluntary and anonymous. All responses are strictly confidential, and no individual responses will be reported. Please do not write your name on this questionnaire. We appreciate your taking the time to complete this important survey. We encourage you to answer each question as completely as possible. 
Before choosing an answer, please read the full question and all response choices carefully. There are no right or wrong answers. Rather, you should answer each question in the way that reflects your personal opinions and experiences. The survey should take approximately 30 minutes to complete. This survey asks about both sexual harassment and sexual assault. When reading the questions, please note whether we are referring to experiences with sexual harassment or sexual assault. As a reminder, DOD defines sexual harassment as follows:

Sexual harassment—a form of sexual discrimination that involves unwelcome sexual advances, requests for sexual favors, and other verbal or physical conduct of a sexual nature when submission to such conduct is made either explicitly or implicitly a term or condition of a person’s job, pay, or career, or submission to or rejection of such conduct by a person is used as a basis for career or employment decisions affecting that person, or such conduct has the purpose or effect of unreasonably interfering with an individual’s work performance or creates an intimidating, hostile, or offensive working environment. (DODD 1350.2)

1. At your current location, do you think your direct supervisor (military or civilian) creates a climate that discourages sexual harassment from occurring?
Yes / No / Not sure

2. If sexual harassment should occur at your current location, do you think your direct supervisor (military or civilian) would address it?
Yes / No / Not sure

Unit—Command or operational unit to which you are assigned.
Deployed location—Stationed location other than your home station.
Temporary Duty (TDY)/Temporary Additional Duty (TAD)—Travel in which personnel remain under the direct control of their parent organizations (e.g., meetings, conferences, attendance at a school or course of instruction).
Home station—The permanent location of active duty units and Reserve Component units. This location may be either inside or outside the Continental United States.

3. In your opinion, is sexual harassment a problem in the following?
- My unit
- Deployed locations
- When TDY or TAD
- Home station

4. How much do your concerns about sexual harassment incidents in the military impact your intention to remain in the military once your commitment is met?
A great deal / Somewhat / Not at all

As a reminder, DOD defines sexual assault as follows:

Sexual assault—intentional sexual contact, characterized by use of force, physical threat or abuse of authority or when the victim does not or cannot consent. It includes rape, nonconsensual sodomy (oral or anal), indecent assault (unwanted, inappropriate sexual contact or fondling), or attempts to commit these acts. Sexual assault can occur without regard to gender or spousal relationship or age of victim. “Consent” shall not be deemed or construed to mean the failure by the victim to offer physical resistance. Consent is not given when a person uses force, threat of force, coercion, or when the victim is asleep, incapacitated, or unconscious. (DODD 6495.01)

5. At your current location, do you think your direct supervisor (military or civilian) creates a climate that discourages sexual assault from occurring?
Yes / No / Not sure

6. If sexual assault should occur at your current location, do you think your direct supervisor (military or civilian) would address it?
Yes / No / Not sure

7. At your current location, how likely would you be to report a sexual assault of another servicemember?
Extremely likely / Very likely / Moderately likely / Somewhat likely / Not at all likely

8. 
Would you report a personal experience of sexual assault to the following authorities, individuals, or organizations at your current location?
Officer in your chain of command
Staff noncommissioned officer (NCO) in your chain of command
Noncommissioned officer (NCO) in your chain of command
Another servicemember
Direct supervisor (military or civilian)
DOD civilian
Civilian contractor
Civilian assault crisis center/hotline/helpline
Unit/uniformed victim advocate (VA)
Civilian victim advocate
Sexual assault response coordinator (SARC)
Installation medical personnel
Civilian hospital personnel
Military criminal investigative organizations (e.g., OSI, CID, NCIS, CGIS)
Military police (e.g., Provost Marshall, Master at Arms)
Civilian law enforcement
Chaplain
Military lawyer
Family member, friend
Other (please specify)

9. How safe do you feel at your current location from being sexually assaulted in the following places?
At work/on duty
In barracks/living and sleeping areas on installation grounds
On installation grounds, in other areas
Off installation grounds

10.
In your opinion, is sexual assault a problem in the following?
My unit
Deployed locations
When TDY or TAD
Home station

11. How much do your concerns about sexual assault incidents in the military impact your intention to remain in the military once your commitment is met?
A great deal / Somewhat / Not at all

12. Do you agree or disagree with the statement: "Tolerance for sexual harassment creates a perception that sexual assault may be acceptable."
Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree

13. Have you attended training that addressed sexual assault issues at any time during the past 12 months?
Yes / No / Don't know

14. Was the sexual assault-related training you received during the past 12 months in the following formats?
Presentation by an instructor, such as a SARC
Written materials provided without a presentation
Video
Computer-based, including web-based or internet training
Participatory training (scenario-based training, skits)

15. Would you know how to do the following at your current location?
Report a sexual assault using the restricted (confidential) reporting option
Report a sexual assault using the unrestricted reporting option
Avoid situations that might increase the risk of a sexual assault
Obtain medical care following a sexual assault
Obtain counseling or mental health care following a sexual assault
Contact a sexual assault response coordinator (SARC)
Contact your victim advocate (VA)
Obtain additional resources or information on the areas above

We understand that this is a very difficult subject. We would like to reiterate that all responses are confidential, and no individual's information is identifiable.

16. How much of a problem are the following situations at your current location?
Sexual stories or jokes that were offensive to you
Others referring to people of your gender in insulting or offensive terms
Unwelcome attempts to draw you into a discussion of sexual matters (e.g., attempts to discuss or comment on your sex life)
Offensive remarks about your appearance, body, or sexual activities
Gestures or body language of a sexual nature that embarrassed or offended you
Bribes or some kind of reward for special treatment to engage in sexual behavior
Threats of retaliation or revenge for not being sexually cooperative (such as by mentioning an upcoming review or evaluation)
Touching you in a way that made you feel uncomfortable, such as attempts to stroke, fondle, or kiss you
Implications of better assignments or better treatment if you were sexually cooperative
Bad treatment for refusing to have sex

17. If you were sexually assaulted and could obtain medical and/or mental health care while (a) remaining anonymous and (b) being certain that there would not be an investigation, would you report the incident?
Yes / No / Not sure

18. Were you sexually assaulted during the past 12 months while in the military?
Yes
No (Skip to Question 30)

19. In which of the following locations did the incident occur?
At a home installation inside the United States
At a home installation outside the United States
At a deployed location
While TDY or TAD

20. Did you report the incident in any of the following ways to any authorities, individuals, or organizations?
I used restricted (confidential) reporting
I used unrestricted reporting
I reported it, but I'm not sure whether I used restricted (confidential) or unrestricted reporting
I did not report the incident (Skip to Question 22)

21. To which authorities, individuals, or organizations did you report the incident?
Officer in your chain of command
Staff noncommissioned officer (NCO) in your chain of command
Noncommissioned officer (NCO) in your chain of command
Another servicemember
Direct supervisor (military or civilian)
DOD civilian
Civilian contractor
Civilian assault crisis center/hotline/helpline
Unit/uniformed victim advocate (VA)
Civilian victim advocate
Sexual assault response coordinator (SARC)
Installation medical personnel
Civilian hospital personnel
Military criminal investigative organizations (e.g., OSI, CID, NCIS, CGIS)
Military police (e.g., Provost Marshall, Master at Arms)
Civilian law enforcement
Chaplain
Military lawyer
Family member, friend
Other (please specify)

(Please skip to Question 23 after completing Question 21.)

22. For which of the following reason(s) did you not report the incident?
Feared ostracism, harassment, or ridicule by peers
I thought I would be labeled a troublemaker
I thought nothing would be done
Embarrassment or shame
Threatened with some form of retaliation or revenge
Not threatened with retaliation or revenge, but feared some form of retaliation or revenge
Pressured by someone in a position of authority
Feared I would be punished for infractions/violations
I did not want people gossiping about the assault
I did not want to affect my unit
I thought people would not believe me
Not aware of reporting procedures
Fear of the assault being repeated
Did not want to get the offender in trouble
I thought my experience was common
I had a previous negative experience reporting an incident
Other (please specify)

23. Did you receive medical care as a result of the incident?
I received medical care
I did not receive medical care because care was not available
I did not seek medical care

24. Where did you receive medical care for the incident?
Military treatment facility (MTF)
Civilian hospital
Other (please specify)

25. How satisfied or dissatisfied were you with the quality of medical care you received?
Very satisfied / Satisfied / Neither satisfied nor dissatisfied / Dissatisfied / Very dissatisfied

26. Why were you satisfied or dissatisfied with the quality of medical care you received?

27. Did you receive counseling or mental health care as a result of the incident?
I received counseling or mental health care
I did not receive counseling or mental health care because assistance was not available
I did not seek counseling or mental health care

28. How satisfied or dissatisfied were you with the quality of counseling or mental health care you received?
Very satisfied / Satisfied / Neither satisfied nor dissatisfied / Dissatisfied / Very dissatisfied

29. Why were you satisfied or dissatisfied with the quality of mental health care or counseling you received?

30. Have you served away from your home station at any time during the past 12 months?
Yes
No (Skip to Question 41)

31. Which of the following location(s) did you serve in during the past 12 months?
Afghanistan
Albania
Bahrain
Bosnia
Djibouti
Iraq
Jordan
Kosovo
Kuwait
Kyrgyzstan
Oman
Pakistan
Philippines
Qatar
Saudi Arabia
Syria
Tajikistan
United Arab Emirates
Uzbekistan
Yemen
At sea (Arabian Sea north of 10 degrees north latitude and west of 68 degrees east longitude, Gulf of Aden, Gulf of Oman, Persian Gulf, or Red Sea)
At sea (Adriatic Sea or Ionian Sea north of the 39th Parallel)
Another location not listed above (please specify)

32. Of the locations you indicated above, where did you serve the longest amount of time?

33. From which installation did you deploy?

34. Did you receive pre-deployment training that addressed sexual assault?
Yes / No / Not sure

Unit—Command or operational unit to which you are assigned.
Home station—The permanent location of active duty units and Reserve Components. This location may be either inside or outside the Continental United States.
Deployed—Stationed at a location other than your home station.

PLEASE ANSWER THE FOLLOWING QUESTIONS FROM THE PERSPECTIVE OF THE LOCATION YOU INDICATED IN QUESTION 32.

35. Do you think sexual harassment incidents are taken more or less seriously when at home station or when deployed?
More serious when at home station
Equally serious when at home station as when deployed
Less serious when at home station

36. Do you think sexual assault incidents are taken more or less seriously at home station or when deployed?
More serious when at home station
Equally serious when at home station as when deployed
Less serious when at home station

37. Do you think servicemembers in your unit would be more or less likely to report a sexual assault of another servicemember when at home station or when deployed?
More likely at home station
Equally likely at home station as when deployed
Less likely at home station
Not sure

38. How safe did you feel from being sexually assaulted at the following times and locations while you were deployed?
At work/on duty
In barracks/living and sleeping areas on installation grounds
On installation grounds, in other areas
Off installation grounds

39. Do you believe the risk for a sexual assault to occur is less or greater when at home station versus when deployed?
The risk is less when at home station
The risk is the same at home station as when deployed (Skip to Question 41)
The risk is greater when at home station

40. Why do you believe the risk of a sexual assault occurring differs when at home station versus when deployed?

Temporary Duty (TDY)/Temporary Additional Duty (TAD)—Travel in which personnel remain under the direct control of their parent organizations (e.g., meetings, conferences, attendance at schools or courses of instruction).

41. Have you been TDY or TAD at any time during the past 12 months?
Yes
No (Skip to Question 44)

42.
Do you believe the risk for a sexual assault to occur is less or greater when at home station versus when TDY or TAD?
The risk is less when at home installation
The risk is the same at home installation as when TDY or TAD (Skip to Question 44)
The risk is greater when at home installation

43. Why do you believe the risk of a sexual assault differs when at home station versus when TDY or TAD? Please explain.

44. What is your current pay grade?
E1 to E4
E5 to E9
W1 to W5
O1 to O3
O4 to O6

45. What is your branch of service?
Army
Navy
Marine Corps
Air Force
Coast Guard

46. What is your component?
Active duty
Reserve
National Guard

47. What is your age range?
18 to 24
25 to 30
31 to 35
36 to 40
41 to 45
46 and over

48. What is your gender?
Male
Female

49. With respect to the military services' sexual assault prevention and response programs, what message would you have us (GAO) take back to Congress?

The U.S. Government Accountability Office (GAO), an agency of the Congress, is conducting this survey as part of a review of sexual assault prevention and response programs in the military services. The purpose of this survey is to provide insight into the effectiveness of each service's sexual assault policies, training, procedures, and response capabilities. Findings will be used in reports and testimony to Congress. Providing information on this survey is voluntary and anonymous. All responses are strictly confidential, and no individual responses will be reported. Please do not write your name on this questionnaire. We appreciate your taking the time to complete this important survey. We encourage you to answer each question as completely as possible. Before choosing an answer, please read the full question and all response choices carefully. There are no right or wrong answers. Rather, you should answer each question in the way that reflects your personal opinions and experiences. The survey should take approximately 30 minutes to complete. This survey asks about both sexual harassment and sexual assault.
When reading the questions, please note whether we are referring to experiences with sexual harassment or sexual assault. As a reminder, DOD defines sexual harassment as follows:

Sexual harassment - a form of sexual discrimination that involves unwelcome sexual advances, requests for sexual favors, and other verbal or physical conduct of a sexual nature when submission to such conduct is made either explicitly or implicitly a term or condition of a person's job, pay, or career; or submission to or rejection of such conduct by a person is used as a basis for career or employment decisions affecting that person; or such conduct has the purpose or effect of unreasonably interfering with an individual's work performance or creates an intimidating, hostile, or offensive working environment. (DODD 1350.2)

1. At your current location, do you think your direct supervisor (military or civilian) creates a climate that discourages sexual harassment from occurring?
Yes / No / Not sure

2. If sexual harassment should occur at your current location, do you think your direct supervisor (military or civilian) would address it?
Yes / No / Not sure

Unit—Command or operational unit to which you are assigned.
Deployed location—Stationed at a location other than your home station.
Temporary Duty (TDY)/Temporary Additional Duty (TAD)—Travel in which personnel remain under the direct control of their parent organizations (e.g., meetings, conferences, attendance at schools or courses of instruction).
Home station—The permanent location of active duty units and Reserve Components. This location may be either inside or outside the Continental United States.

3. In your opinion, is sexual harassment a problem in the following?
My unit
Deployed locations
When TDY or TAD
Home station

4.
How much do your concerns about sexual harassment incidents in the military impact your intention to remain in the military once your commitment is met?
A great deal / Somewhat / Not at all

As a reminder, DOD defines sexual assault as follows:

Sexual assault - intentional sexual contact, characterized by use of force, physical threat or abuse of authority, or when the victim does not or cannot consent. It includes rape, nonconsensual sodomy (oral or anal), indecent assault (unwanted, inappropriate sexual contact or fondling), or attempts to commit these acts. Sexual assault can occur without regard to gender or spousal relationship or age of victim. "Consent" shall not be deemed or construed to mean the failure by the victim to offer physical resistance. Consent is not given when a person uses force, threat of force, coercion, or when the victim is asleep, incapacitated, or unconscious. (DODD 6495.01)

5. At your current location, do you think your direct supervisor (military or civilian) creates a climate that discourages sexual assault from occurring?
Yes / No / Not sure

6. If sexual assault should occur at your current location, do you think your direct supervisor (military or civilian) would address it?
Yes / No / Not sure

7. At your current location, how likely would you be to report a sexual assault of another servicemember?
Extremely likely / Very likely / Moderately likely / Somewhat likely / Not at all likely

8. Would you report a personal experience of sexual assault to the following authorities, individuals, or organizations at your current location?
Officer in your chain of command
Staff noncommissioned officer (NCO) in your chain of command
Noncommissioned officer (NCO) in your chain of command
Another servicemember
Direct supervisor (military or civilian)
DOD civilian
Civilian contractor
Civilian assault crisis center/hotline/helpline
Unit/uniformed victim advocate (VA)
Civilian victim advocate
Sexual assault response coordinator (SARC)
Installation medical personnel
Civilian hospital personnel
Military criminal investigative organizations (e.g., OSI, CID, NCIS, CGIS)
Military police (e.g., Provost Marshall, Master at Arms)
Civilian law enforcement
Chaplain
Military lawyer
Family member, friend
Other (please specify)

9. How safe do you feel at your current location from being sexually assaulted in the following places?
At work/on duty
In barracks/living and sleeping areas on installation grounds
On installation grounds, in other areas
Off installation grounds

10. In your opinion, is sexual assault a problem in the following?
My unit
Deployed locations
When TDY or TAD
Home station

11. How much do your concerns about sexual assault incidents in the military impact your intention to remain in the military once your commitment is met?
A great deal / Somewhat / Not at all

12. Do you agree or disagree with the statement: "Tolerance for sexual harassment creates a perception that sexual assault may be acceptable."
Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree

13. Did you receive pre-deployment training that addressed sexual assault prior to deploying to your current location?
Yes / No / Not sure

14. Was the training you received that addressed sexual assault prior to deploying in the following formats?
Presentation by an instructor, such as a SARC
Written materials provided without a presentation
Video
Computer-based, including web-based or internet training
Participatory training (scenario-based training, skits)

15. Have you attended training that addressed sexual assault issues since you arrived at your current location?
Yes / No / Don't know

16. Was the sexual assault-related training you received at your current location in the following formats?
Presentation by an instructor, such as a SARC
Written materials provided without a presentation
Video
Computer-based, including web-based or internet training
Participatory training (scenario-based training, skits)

17. Would you know how to do the following at your current location?
Report a sexual assault using the restricted (confidential) reporting option
Report a sexual assault using the unrestricted reporting option
Avoid situations that might increase the risk of a sexual assault
Obtain medical care following a sexual assault
Obtain counseling or mental health care following a sexual assault
Contact a sexual assault response coordinator (SARC)
Contact your victim advocate (VA)
Obtain additional resources or information on the areas above

We understand that this is a very difficult subject. We would like to reiterate that all responses are confidential, and no individual's information is identifiable.

18. How much of a problem are the following situations at your current location?
Sexual stories or jokes that were offensive to you
Others referring to people of your gender in insulting or offensive terms
Unwelcome attempts to draw you into a discussion of sexual matters (e.g., attempts to discuss or comment on your sex life)
Offensive remarks about your appearance, body, or sexual activities
Gestures or body language of a sexual nature that embarrassed or offended you
Bribes or some kind of reward for special treatment to engage in sexual behavior
Threats of retaliation or revenge for not being sexually cooperative (such as by mentioning an upcoming review or evaluation)
Touching you in a way that made you feel uncomfortable, such as attempts to stroke, fondle, or kiss you
Implications of better assignments or better treatment if you were sexually cooperative
Bad treatment for refusing to have sex

19. If you were sexually assaulted and could obtain medical and/or mental health care while (a) remaining anonymous and (b) being certain that there would not be an investigation, would you report the incident?
Yes / No / Not sure

20. Were you sexually assaulted during the past 12 months while in the military?
Yes
No (Skip to Question 32)

21. In which of the following locations did the incident occur?
At a home installation inside the United States
At a home installation outside the United States
At a deployed location
While TDY or TAD

22. Did you report the incident in any of the following ways to any authorities, individuals, or organizations?
I used restricted (confidential) reporting
I used unrestricted reporting
I reported it, but I'm not sure whether I used restricted (confidential) or unrestricted reporting
I did not report the incident (Skip to Question 24)

23. To which authorities, individuals, or organizations did you report the incident?
Officer in your chain of command
Staff noncommissioned officer (NCO) in your chain of command
Noncommissioned officer (NCO) in your chain of command
Aother ervicememer ................................................... Direct supervior (milit or civilian) ........................... DOD civilian ....................................................................... Civilian cotrctor............................................................. Civilian assault cri ceter/hotlie/hellie .............. Uit/uniform victim dvocte (VA) ................................. Civilian victim dvocte.................................................... Sxuaassault respnse coordinator (SARC)................ Inslltio medicernnel......................................... Civilian hospiternnel............................................... Milit criminal ivetigative organiztions (e.., OSI, CID, NCIS, CGIS).............................................. Milit olice (e.., Provot Mll, Master of Arm) ......................... Civilianw eforcemet................................................... Cap............................................................................... Militer.................................................................. Fmil memer, fried....................................................... Other (leasspecif)........................................................ PleaSkip to Quetion 25 on page 12 after completing Quetion 23 24. For which of the following reaon() did you not report the incident? Fered otrcim, hassmet, or ridicle by eer........................ I thought I wold e labeled troublemker ................................... I thought othing wold e doe ....................................................... Embarrassmet or me .................................................................. Threteed with ome form of retlitio or revenge..................... Not threteed with retlitio or revenge, but fered ome form of retlitio or revenge............................................................. Pressured by omeoe i itio of authorit............................. 
Fered I wold punhed for ifrctions/violtions ................... I did ot waneole ssng abt the assault........................... I did ot want to ffect m unit .......................................................... I thought eole wold ot elieve me ............................................. Not re of reorting rocedre.................................................... Fer of assault eing reted............................................................ Did ot want to et offeder i trouble............................................. I thought mxperiece was commo............................................. I h revious gative experiece reorting ancidet........... Other (leasspecif)......................................................................... 25. Did you receive medical care a a reult of the incident? I received medicl cre.................................................................. I did ot receive medicl cre ecausre was ot ilable.................................................................... I did ot eek medicl cre ........................................................... . Where did you receive medical care for the incident? Milit tretmet fcilit (MTF)................................................ Civilian hospitl ............................................................................. Other (leasspecif)................................................................... 27. How atified or dissatified were you with the quality of medical care you received? Ver satified................................................................................... Satified ........................................................................................... Neither satified or dissatified.................................................. Dissatified...................................................................................... Ver dissatified.............................................................................. 28. 
Why were you atified or dissatified with the quality of medical care you received? 29. Did you receive couneling or mental health care a a reult of the incident? I received counseling or mel helth cre................................ I did ot received counseling or mel helth cre ecausassance was ot ilable .......................................... I did ot eek counseling or mel helth cre......................... 30. How atified or dissatified were you with the quality of couneling or mental health care you received? Ver satified................................................................................... Satified ........................................................................................... Neither satified or dissatified.................................................. Dissatified...................................................................................... Ver dissatified.............................................................................. 31. Why were you atified or dissatified with the quality of mental health care or couneling you received? Home tation—The ermant loctio of ctive d unit anRerve Comunit. Thi loctioay e either inside or oide the Ctil Uited Ste. 32. Have you erved at your home tation at any time during the pat 12 month? Ye........................... No ............................ Skip to 40 on page 17 33. What i your home tation? Unit—Command or oertionaunit to which re assgned. Home tation—The ermant loctio of ctive d unit anRerve Comunit. Thi loctioay e either inside or oide the Ctil Uited Ste. Deployed—Stioed loctio other than r home tio. PLEASE ANSWER THE FOLLOWING QUESTIONS FROM THE PERSPECTIVE OF THE LOCATION YOU INDICATED IN QUESTION 33. 34. Do you think exual harassment incident are taken more or less eriouly at your current location or when at home tation? More erious t crret loctio .............................................. Equall erious t crret loctio as whe t home tio.. 
Less eriousrret loctio....................................................... 35. Do you think exual assault incident are taken more or less eriouly at your current location or when at home tation? More erious t crret loctio .............................................. Equall erious t crret loctio as whe t home tio.. Less eriousrret loctio....................................................... . Do you think ervicemember in your unit would be more or less likely to report a exual assault of another ervicemember at your current location or when at home tation? More likel t crret loctio........................................................... Equall likel t crret loctio as whe t home tio........... Less likel t crret loctio............................................................ Not sure................................................................................................. 37. How afe did you feel from being exually assaulted at the following time and location when at home tation? At work/o........... I barrck/living anleeng renslltio roun..... Onslltio roun, i other reas .............................. Off inslltio roun........................ 38. Do you believe the rik for a exual assault to occur i less or greater at your current location ver when at home tation? The rik i less t crret loctio................................................. The rik i the same t crret loctio as whe t home tio................................................................. Skip to 40 on page 17 The rik i reter t crret loctio........................................... 39. Why do you believe the rik of a exual assault occurring differ when at home tation ver when deployed? Unit—Command or oertionaunit to which re assgned. 40. Not including your home tation, have you erved at another location at any time during the pat 12 month? Ye........................... No ............................ Skip to 49 on page 20 41. 
Excluding your current location, which of the following location() did you erve in during the pat 12 month? Afanan ....................................................................................... Alban............................................................................................... Bhr................................................................................... Bosn..................................................................................... Djiti................................................................................... Ir ........................................................................................ Jordan..................................................................................... Koovo.................................................................................... Kit .................................................................................... Kgyan.............................................................................. Oman...................................................................................... Pkian.................................................................................. Philipp.............................................................................. Q....................................................................................... Saudi Arab............................................................................ Syri....................................................................................... Tjikian................................................................................ Uited Arab Emirte.............................................................. Uzekian.............................................................................. Yeme..................................................................................... 
(Araban S North of 10 deree orth ltitde and wet of 68 dereeast longitde, Glf of Ade, Glf of Oman, Peran Glf, or Red S......... (Adritic S or Ioan S North of the 39th Pllel) .......... 42. Of the location you indicated above, where did you erve the longet amount of time? PLEASE ANSWER THE FOLLOWING QUESTIONS FROM THE PERSPECTIVE OF THE LOCATION YOU INDICATED IN QUESTION 42. 43. Do you think exual harassment incident are taken more or less eriouly at your current location or the other location? More erious t crret loctio .............................................. Equall erious t crret loctio as the other loctio........ Less eriousrret loctio....................................................... 44. Do you think exual assault incident are taken more or less eriouly at your current location or the other location? More erious t crret loctio .............................................. Equall erious t crret loctio as the other loctio........ Less erious t crret loctio.................................................. 45. Do you think ervicemember in your unit would be more or less likely to report a exual assault of another ervicemember at your current location or the other location? More likel t crret loctio........................................................... Equall likel t crret loctio as the other loctio.................. Less likel t crret loctio............................................................ Not sure................................................................................................. . How afe did you feel from being exually assaulted at the following time and location at the other location? At work/o........... I barrck/living anleeng renslltio roun..... Onslltio roun, i other reas .............................. Off inslltio roun........................ 47. Do you believe the rik for a exual assault to occur i less or greater at your current location ver the other location? 
The rik i less t crret loctio................................................. The rik i the same t crret loctio as as the other loctio....................................................................... Skip to 49 on page 20 The rik i reter t crret loctio........................................... 48. Why do you believe the rik of a exual assault occurring differ at your current location ver the other location? 49. What i your current pay grade? E1 to E4................................................................................................. E to E9................................................................................................. W1 to W............................................................................................... O1 to O3 ................................................................................................ O4 to O6 ................................................................................................ 50. What i your branch of ervice? Arm.................................................................................................... N..................................................................................................... MriCorps ...................................................................................... Air Force ............................................................................................. Cast Guard ........................................................................................ 51. What i your component? Active d......................................................................................... Rerve .............................................................................................. Ntional Guard.................................................................................. 52. What i your age range? 18 to 24 ................................................................................................ 
to 30 ................................................................................................ 31 to 3................................................................................................ 36 to 40 ................................................................................................ 41 to 4................................................................................................ and over.......................................................................................... 53. What i your gender? Mle.................................................................................................... Femle ............................................................................................... 54. With repect to the military ervice’ exual assault prevention and repone program, what message would you have u (GAO) take back to Congress? In addition to the contact named above, Marilyn K. Wasleski (Assistant Director), Krislin Bolling, Joanna Chan, Pawnee A. Davis, Konstantin Dubrovsky, K. Nicole Harms, Wesley A. Johnson, Ronald La Due Lake, Stephen V. Marchesani, Ayeke P. Messam, Amanda K. Miller, and Cheryl A. Weissman made significant contributions to the report. In addition, Sara G. Cradic, Kim Mayo, Sharon Reid, and Norris Smith III provided assistance during site visits.
|
In 2004, Congress directed the Department of Defense (DOD) to establish a comprehensive policy to prevent and respond to sexual assaults involving servicemembers. Though not required to do so, the Coast Guard has established a similar policy. In response to congressional requests and Senate Report No. 110-77, GAO evaluated the extent to which DOD and the Coast Guard (1) have developed and implemented policies and programs to prevent, respond to, and resolve sexual assault incidents involving servicemembers; (2) have visibility over reports of sexual assault involving servicemembers; and (3) exercise oversight over reports of sexual assault involving servicemembers. To conduct this review, GAO reviewed legislative requirements and DOD and Coast Guard guidance; analyzed sexual assault incident data; and obtained through surveys and interviews the perspective on sexual assault matters of more than 3,900 servicemembers. DOD and the Coast Guard have established policies and programs to prevent, respond to, and resolve reported sexual assault incidents involving servicemembers; however, implementation of the programs is hindered by several factors. GAO found that (1) DOD's guidance may not adequately address some important issues, such as how to implement its program in deployed and joint environments; (2) most, but not all, commanders support the programs; (3) required sexual assault prevention and response training is not consistently effective; and (4) factors such as a DOD-reported shortage of mental health care providers affect whether servicemembers who are victims of sexual assault can or do access mental health services. Left unchecked, these challenges can discourage or prevent some servicemembers from using the programs when needed. 
GAO found, based on responses to its nongeneralizable survey administered to 3,750 servicemembers stationed at military installations in the United States and overseas and a 2006 DOD survey, the most recent available, that occurrences of sexual assault may be exceeding the rates being reported, suggesting that DOD and the Coast Guard have only limited visibility over the incidence of these occurrences. At the 14 installations where GAO administered its survey, 103 servicemembers indicated that they had been sexually assaulted within the preceding 12 months. Of these, 52 servicemembers indicated that they did not report the sexual assault. GAO also found that factors that discourage servicemembers from reporting a sexual assault include the belief that nothing would be done; fear of ostracism, harassment, or ridicule; and concern that peers would gossip. Although DOD has established some mechanisms for overseeing reports of sexual assault, and the Coast Guard is beginning to do so, neither has developed an oversight framework--including clear objectives, milestones, performance measures, and criteria for measuring progress--to guide its efforts. In compliance with statutory requirements, DOD reports data on sexual assault incidents involving servicemembers to Congress annually. However, DOD's report does not include some data that would aid congressional oversight, such as why some sexual assaults could not be substantiated following an investigation. Further, the military services have not provided data that would facilitate oversight and enable DOD to conduct trend analyses. While the Coast Guard voluntarily provides data to DOD for inclusion in its report, this information is not provided to Congress because there is no requirement to do so. 
To provide further oversight of DOD's programs, Congress, in 2004, directed the Defense Task Force on Sexual Assault in the Military Services to conduct an examination of matters relating to sexual assault in the Armed Forces. However, as of July 2008, the task force had not yet begun its review. Without an oversight framework, as well as more complete data, decision makers in DOD, the Coast Guard, and Congress lack information they need to evaluate the effectiveness of the programs.
|
Lebanon is a small, religiously diverse country on the Mediterranean Sea that borders Israel and Syria (see fig. 1). Religious tensions among Lebanon's Maronite Christians, Sunni Muslims, Shiite Muslims, and others, along with an influx of Palestinian refugees, have for decades underpinned Lebanon's internal conflicts as well as its conflicts with neighboring countries. Hezbollah emerged in Lebanon as a powerful Islamic militant group, and since 2005 a member of Hezbollah has held a cabinet position in the Lebanese government. Hezbollah is funded by Iran and has been designated by the United States and Israel as a terrorist organization. In the summer of 2006, Hezbollah and Israel entered into a month-long conflict that ended with the adoption of United Nations Resolution 1701 by both the Israeli and Lebanese governments. The resolution called for, among other things, Israeli withdrawal from southern Lebanon in parallel with the deployment of Lebanese and United Nations forces and the disarmament of all armed groups in Lebanon. Instability arising from the civil war in neighboring Syria that began in 2011 has exacerbated sectarian conflict within Lebanon. In May 2013, Hezbollah leaders confirmed their intervention in the Syrian conflict. Figure 2 presents a timeline of selected political events in Lebanon. Since the end of the 2006 Israeli-Hezbollah war, the United States has kept its strategic goals for Lebanon constant: to support Lebanon as a stable, secure, and independent democracy. The overarching priorities of U.S. assistance programs for Lebanon focus on supporting Lebanese sovereignty and stability and countering the influence of Syria and Iran. Security-related goals for Lebanon focus on counterterrorism and regional stability or internal security, and corresponding activities seek to support development of the LAF and the ISF as the only legitimate providers of Lebanon's security. 
The United States has provided security equipment and training to the LAF, which is generally responsible for providing border security, counterterrorism, and national defense, and to the ISF, or national police force, which is generally responsible for maintaining law and order in Lebanon. In 1996, Congress amended the Arms Export Control Act of 1976, which authorizes the President to control the sale or export of defense articles and services, to require the President to establish a program for monitoring the end-use of defense articles and defense services sold, leased, or exported under that act or the Foreign Assistance Act of 1961, including through Foreign Military Sales (FMS) and direct commercial sales. The amendment specified that the program should provide reasonable assurances that recipients comply with restrictions imposed by the U.S. government on the use, transfer, and security of defense articles and defense services. The President delegated responsibilities for the program to the Secretary of Defense, insofar as they relate to defense articles and defense services sold, leased, or transferred under FMS, and to the Secretary of State, insofar as they relate to commercial exports licensed under the Arms Export Control Act. For FMS, DOD's Defense Security Cooperation Agency (DSCA) is responsible for end-use monitoring; for direct commercial sales, State's Directorate of Defense Trade Controls is responsible for end-use monitoring. In addition to the end-use monitoring requirements under the Arms Export Control Act, the Foreign Assistance Act, as amended, directs the President to take all reasonable steps to ensure that aircraft and other equipment made available to foreign countries for international narcotics control under the Foreign Assistance Act are used only in ways that are consistent with the purposes for which such equipment was made available. 
State’s Bureau of International Narcotics Control and Law Enforcement Affairs (INL) has implemented this requirement by means of its End-Use Monitoring Program. To help ensure that U.S. assistance is not used to support human rights violators, Congress prohibits certain types of assistance from being provided to foreign security forces implicated in human rights abuses. Section 620M of the Foreign Assistance Act prohibits the United States from providing assistance under the Foreign Assistance Act or the Arms Export Control Act to any unit of a foreign country’s security forces if the Secretary of State has credible information that such unit has committed a gross violation of human rights. This provision is known colloquially as the State Leahy law. DOD’s annual appropriation contains a similar provision, known colloquially as the DOD Leahy law. The current version prohibits funds from being used to support training, equipment, or other assistance for security forces of a foreign country if the Secretary of Defense has received credible information that the unit has committed a gross violation of human rights. DOD, in consultation with State, must give full consideration to any credible information available to State relating to human rights violations by a unit of the foreign security forces before it conducts training for the unit. According to State, Leahy laws and the corresponding policies developed to enforce and supplement these laws are intended to leverage U.S. assistance to encourage foreign governments to prevent their security forces from committing human rights violations and to hold their forces accountable when violations occur. According to State, U.S. programs subject to the Leahy laws in Lebanon include Foreign Military Financing; International Narcotics Control and Law Enforcement (INCLE); Nonproliferation, Antiterrorism, Demining, and Related programs; and Sections 1206 and 1207 authorities. 
See appendix II for additional information on the U.S. human rights vetting process. The United States allocated $671 million for security-related assistance for Lebanon from fiscal year 2009 through fiscal year 2013. Of these total allocated funds, $477 million, or 71 percent, had been disbursed or committed by the end of fiscal year 2013. Nearly all of the allocations made in fiscal years 2009 through 2011 had been disbursed or committed. Since 2007, the United States has provided security-related assistance for Lebanon through the Foreign Military Financing program; the International Military Education and Training program; the INCLE program; the Nonproliferation, Antiterrorism, Demining, and Related programs; and Section 1206 and 1207 authorities for training and equipping foreign militaries and security forces and for reconstruction, stabilization, and security activities in foreign countries, respectively. For the largest program, Foreign Military Financing, DOD had committed about $352 million of the $481 million allocated from fiscal years 2009 through 2013. Table 1 presents the amounts of funds allocated, committed, or disbursed to these programs for Lebanon in fiscal years 2009 through 2013. Appendix III provides additional information on the status of these funds. Table 2 describes the U.S. security-related assistance programs for Lebanon and their goals, and identifies the agencies that implement them. We provide examples of the types of equipment some of these programs provide to the Lebanese security forces in the next section. DOD and State conduct end-use monitoring for equipment each has provided or authorized for sale to Lebanese security forces, but gaps in implementation of procedures may limit efforts to safeguard some equipment. DOD annually inventories sensitive equipment by serial number, as required by its policy; however, U.S. 
embassy officials in Beirut have not always used DOD’s required checklists to document compliance with security safeguards and accountability procedures. In addition, State officials in headquarters and at the U.S. Embassy in Beirut did not always document the results of end-use monitoring checks as specified in State guidance. Finally, State INL officials annually inventory all equipment INL has provided to the ISF, but INL may not be ensuring that the ISF is implementing recommended physical security safeguards for defense articles because INL lacks procedures to identify defense articles and the recommended safeguards for storing them. DOD officials conduct annual inventories as part of end-use monitoring through the Golden Sentry program, which is DOD’s program to comply with requirements of the Arms Export Control Act, as amended, related to the end use of defense articles and services. DOD personnel at U.S. missions worldwide conduct the monitoring activities established and overseen by DSCA. Under this program, DOD conducts two levels of monitoring: routine end-use monitoring and enhanced end-use monitoring. Routine end-use monitoring: DOD conducts routine end-use monitoring for defense articles and services sold through FMS that do not have any unique conditions associated with their transfer. Routine end-use monitoring is conducted in conjunction with other required security-related duties. For example, U.S. officials might observe how a host country’s military is using U.S. equipment when visiting a military installation on other business. Enhanced end-use monitoring: DOD conducts enhanced end-use monitoring for defense services, technologies, or articles specifically identified as sensitive—such as night vision devices. DOD policy requires serial number inventories for defense articles requiring enhanced end-use monitoring following delivery of the articles and at regular intervals thereafter. 
In addition, Letters of Offer and Acceptance, the FMS purchase agreements authorizing the sale of an item, may contain specialized notes or provisos requiring the purchaser to adhere to certain physical security and accountability requirements. With respect to enhanced end-use monitoring, DSCA’s policy manual for end-use monitoring, the Security Assistance Management Manual, and the associated standard operating procedures for Beirut require DOD officials annually to conduct a physical inventory of 100 percent of designated defense articles, conduct physical security checks of facilities where the equipment is kept, and use and maintain records of equipment-specific checklists that outline the physical security requirements for Lebanese facilities that store the equipment. Officials of DOD’s Office of Defense Cooperation in Beirut told us that they annually conduct an inventory involving the inspection of serial numbers for 100 percent of defense articles requiring enhanced end-use monitoring. Such equipment includes certain types of missiles, night vision devices, sniper rifles, aircraft, and unmanned aerial vehicles. Figure 3 shows examples of defense articles provided to the LAF. During our visit to Beirut in July 2013, we determined that DOD and the LAF accounted for almost 100 percent of the defense items in our sample in two locations. To assess the extent to which DOD accounts for equipment provided to the LAF, we drew a random sample by serial number from DOD’s equipment inventory database for two locations we planned to visit during our fieldwork in Lebanon. The items in the sample included various types of night vision devices at one location and various types of night vision devices and a sniper rifle at the other location. Although the results cannot be generalized to these or other locations, they showed that DOD and the LAF accounted for almost 100 percent of the items in our sample in both locations. 
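The accountability check described above—drawing a random sample of serial numbers from an inventory database and computing the share of sampled items accounted for on site—can be sketched in a few lines of Python. This is an illustrative sketch only, not GAO's actual sampling design or DOD's inventory system; the serial-number format, inventory size, and sample size are hypothetical.

```python
import random

def draw_inventory_sample(serial_numbers, sample_size, seed=None):
    """Draw a simple random sample of serial numbers to verify on site.

    Sorting first makes the draw reproducible for a given seed
    regardless of the input ordering of the inventory extract.
    """
    if sample_size > len(serial_numbers):
        raise ValueError("sample size exceeds inventory size")
    rng = random.Random(seed)
    return rng.sample(sorted(serial_numbers), sample_size)

def accountability_rate(sampled, verified):
    """Share of sampled items physically verified or accounted for
    through documentation."""
    verified = set(verified)
    found = sum(1 for sn in sampled if sn in verified)
    return found / len(sampled)

# Hypothetical inventory of night vision devices at one storage location
inventory = [f"NVD-{i:04d}" for i in range(1, 201)]
sample = draw_inventory_sample(inventory, 25, seed=42)

# Suppose all but one sampled item is located during the site visit
rate = accountability_rate(sample, sample[:-1])
print(f"{rate:.0%}")  # prints 96%
```

As in the report, such a sample supports a statement about the sampled items themselves ("almost 100 percent of the items in our sample were accounted for") but, being small and location-specific, cannot be generalized to the full inventory.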
In addition, during our visit, DOD performed its 2013 inventory of 100 percent of the equipment provided to one of the LAF regiments, including some new equipment not previously inventoried. The DOD officials were able to confirm that all equipment was either physically present or accounted for through documentation. Figure 4 shows U.S. officials conducting the inventory of U.S.-provided equipment at an LAF facility. When U.S. travel restrictions due to security concerns prohibit U.S. embassy personnel from traveling to specific areas of Lebanon in which some LAF storage facilities are located, DOD officials in Beirut require the LAF to bring the equipment to a central location in Beirut to allow the DOD officials to conduct the annual inventory. DOD quarterly reports from 2011 through 2013 show 100 percent accountability for equipment provided to the LAF. As part of end-use monitoring, DSCA’s Security Assistance Management Manual and the standard operating procedures for Beirut require DOD officials to conduct physical security checks of LAF facilities, using a required checklist to document accountability and physical security of U.S.-provided equipment. While DOD’s quarterly reports from 2011 through 2013 show 100 percent accountability for equipment provided to the LAF, during our site visit in July 2013, we determined that DOD officials in Beirut were not using the required checklist to document compliance with physical security safeguards. The DOD manual specifically directs that enhanced end-use monitoring must be performed using checklists developed by the military departments. Furthermore, according to the manual, all such checks must be recorded, with the physical security and accountability checklists attached to the inventory records that must be maintained for 5 years. 
According to a DOD official in Beirut, at least 75 percent of the U.S.-provided equipment subject to end-use monitoring is held in locations where DOD can conduct, and has conducted, inspections to verify compliance with physical security requirements. For the remaining facilities, DOD mitigates security challenges by other means. DOD officials stated that a U.S. embassy employee who is a Lebanese national had visited some locations that U.S. officials were not able to visit because of security concerns and had conducted physical security compliance checks at those locations. For each of the remaining facilities that neither a U.S. official nor this Lebanese employee could personally inspect, DOD also requires the LAF to submit a letter attesting that the appropriate physical security measures had been implemented. During our site visit in July 2013, we determined that when DOD officials in Beirut were able to conduct physical security checks, they were not using the checklists for enhanced end-use monitoring that DSCA's Security Assistance Management Manual requires to ensure that security safeguards and accountability procedures are in place. Our finding was consistent with the results of a 2011 DSCA compliance assessment visit, when DSCA officials also found that the DOD Office of Defense Cooperation in Beirut was not using the required checklists to verify Lebanese compliance with facility security requirements. In April 2012, U.S. officials in Beirut responded to the DSCA finding, stating that they planned to use the checklists during their 2012 annual inventory. 
In their response, they also stated that the requirement to use DSCA-provided checklists during inventories was captured in the post's updated standard operating procedures for end-use monitoring. We followed up on this matter in September 2013 and found that, when presented with examples of the checklists, DOD officials in Beirut said that they were not aware of the checklists. The officials, however, noted that the information requested in the checklists is the same type of information they enter into the Security Cooperation Information Portal, which is the system used to record end-use monitoring activities. However, the documentation of the end-use monitoring we observed did not include sufficient information on the physical security of the equipment. Use of the checklists would help ensure that DOD officials document compliance with physical security requirements for defense items transferred to the LAF. Without such documentation, it is not clear whether DOD officials verify physical security safeguards as required by DSCA's manual. In December 2013, DOD officials acknowledged that there had been a gap in the Office of Defense Cooperation's use of the checklists. State's Directorate of Defense Trade Controls administers a program called Blue Lantern to conduct end-use monitoring for defense articles and defense services exported through direct commercial sales. Under the Blue Lantern program, U.S. embassy officials conduct end-use checks by means of a case-by-case review of export license applications against established criteria or "warning flags" for determining potential risks. Embassy officials primarily conduct two types of end-use monitoring checks: prelicense checks prior to issuance of a license and postshipment checks after an export has been approved and shipped. 
Prelicense checks: A prelicense check may be requested to (1) confirm the bona fides of an unfamiliar consignee or end-user; (2) ensure that details of a proposed transaction match those identified on a license application; (3) confirm that the end-user listed on the license application has ordered the items in question; (4) verify the security of facilities where items may be permanently or temporarily housed; and (5) help to ensure that the foreign party understands its responsibilities under U.S. regulations and law.

Postshipment checks: The Directorate of Defense Trade Controls may request a postshipment check in order to (1) confirm that the party or parties named on the license received the licensed items exported from the United States; (2) determine whether the items have been or are being used in accordance with the provisions of that license; (3) identify any parties involved in the transaction that are not listed on the license application; and (4) determine the specific use and handling of the exported articles, or other issues related to the transaction.

State's Blue Lantern guidebook provides instructions to U.S. diplomatic posts on how to conduct Blue Lantern end-use monitoring, including reporting and documentation requirements. For example, the guide specifies the type of information that should be included in cables from the overseas posts in response to an inquiry from State. In fiscal years 2007 through 2012, State conducted 15 Blue Lantern end-use monitoring checks for equipment—mostly firearms for the ISF—authorized for export to Lebanon through direct commercial sales. State reported favorable results for 14 of the Blue Lantern checks and unfavorable results for one. On the basis of our review of State documents and interviews with U.S. officials in Washington, D.C., and Beirut, we determined that State officials in headquarters and at the U.S. 
Embassy in Beirut did not always document the results of the Blue Lantern checks consistent with guidance specified in State’s Blue Lantern guidebook. According to the guidebook, all Blue Lantern requests are sent by cable to the U.S. embassy or consulate in the country or countries involved in the transaction. The guidebook specifies that Blue Lantern cables prepared by the U.S. mission in response to the request should describe specific actions taken and results of the inquiry, including identification of persons interviewed; description of documents or equipment reviewed; difficulties incurred; degree of cooperation by the end-users, consignees, or both; and recommendations for State action, if appropriate. According to the guidebook, the U.S. mission should maintain detailed records regarding the Blue Lantern cases as a resource to facilitate future checks and to brief new staff on how to conduct the checks. To assess State’s compliance with its Blue Lantern guidance, we selected a judgmental sample of all 10 Blue Lantern checks for equipment exported or intended for export to Lebanon in fiscal years 2010 through 2012 and requested copies of the cables associated with those checks. State documented the details of 5 of the 10 checks in diplomatic cables and 1 in e-mail messages. For the 4 remaining Blue Lantern checks, State officials informed us that the requests from headquarters and the responses from the embassy were made by e-mail. We asked for copies of the e-mails, but State officials told us that the e-mails were no longer in State’s information system. A State official noted that the Blue Lantern database had case summaries based on the e-mails. However, the summary information in the database consisted of notes that did not always contain all the required information specified in State’s guidance. 
For example, the summary information for the cases in which the e-mails were no longer available showed that the embassy officials met with the ISF but did not identify the officials interviewed. Without the information specified in the guidebook, State may not have the information it needs to fully inform future checks and train new staff. State said in February 2014 that the U.S. Embassy in Beirut has established a policy requiring all Blue Lantern responses to be sent via cable to Washington, D.C., in order to ensure that there is a permanent record of the Embassy's response to a Blue Lantern check. However, State did not provide support for this statement. As required by INL's End-Use Monitoring Program, INL officials conduct an annual inventory by serial number of equipment that INL has provided to the ISF. This equipment typically includes office furniture, vehicles, and computer equipment but may also include boats, training firearms, and police vehicles. (See fig. 5 for photographs of equipment INL provided to the ISF.) INL personnel said they had attached barcodes to the equipment, and we observed them using a scanner to inventory the equipment. The 2012 INL inventory report for Lebanon shows that INL personnel inspected about 78 percent of 1,243 items provided to the Lebanese security forces. INL personnel may also inspect items by secondary means, including comparing inventory records and consulting with host government officials. While INL officials annually inventory all equipment INL has provided to the ISF, INL may not be ensuring that the ISF is implementing recommended physical security safeguards for defense articles because INL lacks procedures to identify defense articles and the recommended safeguards for storing them. In May 2013, we asked INL officials whether any of the equipment INL had provided to Lebanon included defense articles. 
Though INL officials first told us that they had provided no defense articles or any other controlled items to Lebanon, in August 2013 INL provided us with its formal response confirming that it had provided two defense articles to the ISF—night vision devices and ceramic plates for bulletproof vests. With regard to these two articles, we found the following:

INL consulted with the directorate only one of the two times it sought to transfer defense articles to Lebanon during the period covered by our review. While INL did not have formal procedures for identifying defense articles, INL officials stated that their practice is to contact State's Directorate of Defense Trade Controls, which licenses direct commercial sales of defense articles, when providing equipment pursuant to a certain license exception. In 2012, INL consulted with the directorate to confirm that no export license would be required for the provision of night vision devices to the Government of Lebanon. In the directorate's memo back to INL confirming that no license was required, INL was informed of physical security safeguards that the directorate recommended INL direct the ISF to implement as conditions for a transfer of night vision devices to the ISF. However, INL did not consult the directorate about ceramic plates, which INL provided for bulletproof vests that it transferred to the ISF in February 2013 under the license exception. This was the only other case that INL identified in which it provided a defense article to the ISF. Although INL did not consult with the directorate about the ceramic plates, INL officials said that INL would contact the directorate in the future about any defense articles it planned to transfer to the ISF.

INL lacked written procedures to determine what physical security safeguards are recommended for defense articles. INL officials said that the ISF secures INL-provided equipment and guards the facilities where it is stored. 
According to embassy officials in Beirut, there are no INL requirements for inspecting the ISF facilities where U.S.-provided equipment is stored. However, when INL consulted the Directorate of Defense Trade Controls about the transfer of night vision devices to the ISF, the directorate recommended that INL place 10 conditions on the transfer. These conditions included 5 technical parameters setting limitations on the performance of key components of the night vision devices, such as the brightness and resolution of their image intensifier tubes, and 5 physical security safeguards that the ISF should implement, such as storing the devices in a secured, locked facility. State officials in Washington, D.C., said that, although the conditions communicated by the directorate to INL were not requirements, embassy officials would, in the future, inspect ISF facilities to see that the recommended physical security safeguards were being implemented. Nonetheless, in the absence of written procedures, it is unclear what procedures INL would follow for future transfers of defense articles. Moreover, INL communicated the conditions of the transfer of night vision devices to the ISF only in September 2013, after transferring the equipment in August 2013. Communicating conditions for physical security safeguards to the ISF after delivery of the equipment could lessen INL's ability to determine whether the ISF implements those safeguards. In response to our inquiries on its plans to address potential gaps in physical security safeguards for defense articles, INL stated in January 2014 that it would update its Acquisition Handbook to document new policies. State anticipates that changes to the handbook would require INL to inform the Directorate of Defense Trade Controls of INL's decision to procure and provide equipment without an export license, as allowed under Section 38(b)(2) of the Arms Export Control Act. 
However, it is unclear whether these revisions would establish procedures to (1) identify when items are defense articles and (2) determine if physical security safeguards are recommended. State's 2012 Guide to Vetting Policy and Process—a primary source of guidance for U.S. personnel responsible for implementing the Leahy laws—requires State officials in Washington, D.C., and overseas to vet for potential human rights violations all individuals or units nominated to receive training or equipment by checking their names against files, databases, and other sources of information. In detailing the steps for vetting, the guidance specifies that relevant data are to be entered into the International Vetting and Security Tracking (INVEST) database. INVEST is State's official system of record for conducting Leahy vetting and recording compliance with the Leahy laws. State requires human rights vetting through INVEST unless the individual or unit has already been vetted within the prior year, according to a State official. State approves, suspends, or denies the requests for training as a result of its vetting efforts. Our review of State's INVEST database showed that State vetted almost 10,000 individuals and units in Lebanon that applied for U.S. training from October 10, 2010, through April 30, 2013. The majority (77 percent) applied for INCLE training (see figure 6). By cross-checking a random sample of names from six rosters for U.S. training delivered to Lebanese security force individuals or units against State's designated vetting database, we found that all the names in the sample had been vetted for human rights violations before the individuals received the training, as required. State approved training for approximately 93 percent of the candidates and disallowed training for the remaining 7 percent for whom the vetting process produced a suspension. 
While administrative reasons accounted for most of the suspensions, such as a name submitted with insufficient time for vetting clearance prior to the class start date, potential human rights violations accounted for a few suspensions. U.S. officials in Beirut communicated suspensions based on human rights violations to Lebanese government officials. Administrative reasons for some suspensions were not communicated to the Lebanese government because the U.S. officials did not think they were required to do so. In September 2013, we reported that State guidance to embassies on the human rights vetting process did not specify whether the requirement to inform foreign governments applies when a unit or individual is suspended from receiving assistance. Hence, we recommended that State provide clarifying guidance for implementing the duty-to-inform requirement of the State Leahy law, such as guidance on whether U.S. embassies should or should not notify a foreign government in cases of suspensions. On the basis of evidence presented in a June 2013 Human Rights Watch report on Lebanon, in August 2013, State issued its first denial for training to individuals from the Drugs Repression Bureau of the ISF and informed ISF about this denial. As a result of the denial, three Lebanese counternarcotics officials were not approved for a September 2013 training course. Officials of the U.S. Embassy in Beirut communicated the reasons for the denial to the ISF during the course of our review. On the basis of our cross-check analysis of a sample of names from six training rosters against INVEST data, we estimated that State vetted for potential human rights violations all Lebanese students who attended security forces training from October 10, 2010, through April 30, 2013. 
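This kind of roster-to-database cross-check, and the estimate drawn from it, can be sketched as follows. The names and set sizes below are hypothetical, and the confidence calculation is a standard exact (Clopper-Pearson) one-sided bound rather than necessarily the method used for the report's estimate.

```python
def crosscheck(roster_sample, vetted_names):
    """Return sampled roster names that do NOT appear in the vetted set."""
    return [name for name in roster_sample if name not in vetted_names]

def one_sided_lower_bound(n, alpha=0.05):
    """Exact (Clopper-Pearson) one-sided 95% lower confidence bound on the
    vetting rate when all n sampled names are found vetted (zero misses)."""
    return alpha ** (1.0 / n)

# Hypothetical data standing in for the rosters and the vetting-database
# extract: every one of the 118 sampled names appears among ~7,100 vetted names.
vetted = {"student_%04d" % i for i in range(7104)}
sample = ["student_%04d" % i for i in range(118)]

misses = crosscheck(sample, vetted)
lower = one_sided_lower_bound(len(sample))
print(len(misses))                    # 0 (every sampled name was vetted)
print(round((1 - lower) * 100, 1))    # 2.5 (margin, in percentage points)
```

With zero misses in a sample of 118, this bound puts the true vetting rate at roughly 97.5 percent or higher at 95 percent confidence, consistent in spirit with a margin of error of about 3 percentage points.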
We selected a random sample of 118 names from training rosters from the Lebanese Armed Forces; INCLE; Antiterrorism Assistance and other Nonproliferation, Antiterrorism, Demining, and Related programs; and International Military Education and Training. We cross-checked those names with the names of all Lebanese candidates vetted for human rights violations through INVEST from October 10, 2010, through April 30, 2013. Results of our analysis show that all 118 names on the training rosters were also found in INVEST. Therefore, we estimate that 100 percent of the 7,104 Lebanese students who attended U.S. training during that period were vetted for human rights violations through INVEST before they received training (see table 3). The U.S. Embassy in Beirut also conducts human rights vetting for units that are to receive equipment. In November 2011, State agreed with our recommendation that it implement individual- and unit-level human rights vetting for all recipients of equipment. As of July 2013, Bureau of Democracy, Human Rights, and Labor officials said that vetting of units that receive equipment is increasing worldwide and that the bureau hopes to achieve State-wide concurrence on implementing the recommendation. State guidance also requires embassies to develop standard operating procedures for human rights vetting. The U.S. Embassy in Beirut developed standard operating procedures to implement State guidance during the course of our review. In September 2013, we found that State does not monitor whether all U.S. embassies have developed standard operating procedures that address the Leahy laws' requirements. Therefore, we recommended, and State agreed, to ensure that all U.S. embassies have standard operating procedures that address human rights vetting requirements in the Leahy laws. U.S. agencies have allocated hundreds of millions of dollars for equipment and training to the government of Lebanon as part of U.S. 
efforts to build partner capacity and address threats to U.S. interests. Such efforts remain a U.S. priority. Just as important to the interests of the United States and its partners are efforts to ensure that any provided security-related equipment or training assistance does not help those who wish to do them harm. Hence, end-use monitoring and human rights vetting are two critical activities that the U.S. government employs in Lebanon and worldwide to prevent misuse of its security-related equipment and training assistance. However, existing implementation gaps may weaken efforts to safeguard some equipment from misuse. These gaps involve embassy officials not using required checklists to verify Lebanese facilities' safeguards for sensitive defense equipment and not documenting details of State's end-use checks. Closing these gaps could help mitigate the impact of security-related travel restrictions on U.S. officials' access to some locations and could also help raise the level of confidence that Lebanese security forces are complying with security requirements. Because end-use monitoring in Lebanon is intended to provide U.S. officials with visibility over the implementation of required safeguards, the above-mentioned gaps reduce the U.S. government's confidence that safeguards are being properly implemented to prevent equipment from falling into the wrong hands. Furthermore, the absence of INL procedures for identifying defense articles erodes confidence that State officials are applying recommended end-use monitoring and security safeguards on defense equipment. On a more positive note, our analysis did confirm that 100 percent of the Lebanese recipients of U.S. training were vetted for human rights violations and that this system appears to be working as intended in Lebanon. To help ensure that U.S. 
agencies are in a better position to ensure adequate safeguards for and monitoring of sensitive equipment, we recommend that the Secretary of Defense take additional steps to ensure that Office of Defense Cooperation officials in Beirut use the checklists required during the physical security checks. In addition, we recommend that the Secretary of State take the following three actions:

Direct the Directorate of Defense Trade Controls and U.S. officials overseas to maintain cables or e-mails as required by State guidance to document each Blue Lantern end-use check.

Direct State bureaus transferring equipment to foreign security forces under security-related assistance programs to establish formal written procedures to identify whether items are defense articles.

Direct State bureaus transferring equipment to foreign security forces under security-related assistance programs to establish formal written procedures to consult with the Directorate of Defense Trade Controls to determine if there are additional safeguards recommended for the transfer of the defense articles.

We provided a draft of this report to DOD and State for comment. DOD and State provided written comments, which are reprinted in appendixes IV and V, respectively. State also provided technical comments, which we have incorporated into the report, as appropriate. In their comments, DOD and State generally concurred with the report's findings and recommendations. In its written comments, DOD noted that the Office of Defense Cooperation in Beirut has started taking steps to address the recommendation. In its written comments, State noted that while it agreed with our recommendation that it maintain records of cables and e-mails that document each Blue Lantern end-use check, it disagreed with our finding that it did not adequately do so with regard to Lebanon. 
State noted that it did document key findings of the four inquiries in the Blue Lantern database, as these cases were adjudicated together as a group involving the same foreign consignee and Lebanese Security Forces. However, as we pointed out in the draft report, the summary information in State’s database consisted of notes that did not always contain all the required information specified in State’s guidance. For example, the summary information on the four cases showed that the embassy officials met with the ISF but did not identify the officials interviewed. State also noted that the draft referred to defense articles that were "exported" to Lebanon rather than “authorized” for export. We have revised the report accordingly. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until five days from the report date. At that time, we will send copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretary of State; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7331 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The objectives of this review were to assess the extent to which the U.S. government (1) disbursed or committed funds allocated for Lebanese security forces in fiscal years 2009 through 2013, (2) implemented end- use monitoring for equipment transferred to Lebanese security forces, and (3) vetted Lebanese recipients of U.S. security-related training for human rights violations. 
To address these objectives, we obtained funding data from the Department of Defense (DOD) and the Department of State (State) on programs that provide security-related assistance to Lebanon. We also analyzed documents from DOD and State, such as cables, manuals, handbooks, compliance and inventory reports, and data on U.S.-provided equipment, training, and end-use checks, among others. We interviewed U.S. officials in Washington, D.C.; at the U.S. Central Command in Tampa, Florida; and at the U.S. Embassy in Beirut, Lebanon, as well as officials of the Lebanese Armed Forces (LAF) and Internal Security Forces (ISF) in Beirut. To assess the extent to which the U.S. government has disbursed or committed security-related assistance funding for Lebanon's security forces since fiscal year 2009, we requested data from State and DOD. The funding data we report represent the balances as of September 30, 2013. State provided data on the status of allocations, obligations, unobligated balances, and disbursements for all of the funding accounts that support security-related assistance in Lebanon: Foreign Military Financing; International Narcotics Control and Law Enforcement; International Military Education and Training; Nonproliferation, Antiterrorism, Demining and Related Programs; and Section 1206 and 1207 authorities. State collected the data directly from each bureau for State-implemented accounts and from DOD for Foreign Military Financing and International Military Education and Training. However, because Foreign Military Financing funds are budgeted and tracked in a different way than other foreign assistance accounts, DOD provided us with data on commitments. Recognizing that different agencies and bureaus may use slightly different accounting terms, we provided each agency with the definitions from GAO's A Glossary of Terms Used in the Federal Budget Process and requested that each agency provide the relevant data according to those definitions. 
We also discussed the types of assistance provided with various officials of the LAF and the ISF. To assess the reliability of the data provided, we requested and reviewed information from officials from each agency regarding the agency’s underlying financial data system(s) and the checks, controls, and reviews used to ensure the accuracy and reliability of the data provided. We determined that the data provided were sufficiently reliable for the purposes of this report. To assess the extent to which the U.S. government has implemented end-use monitoring for equipment provided to Lebanese security forces, we reviewed relevant laws and regulations, DOD and State policy guidance, and reports and other documents; analyzed equipment and end-use monitoring data and reports; and interviewed officials from DOD, State, the LAF, and the ISF. For DOD’s Golden Sentry program, we reviewed policy and guidance documents, including the Defense Security Cooperation Agency’s Security Assistance Management Manual, the Office of Defense Cooperation-Beirut’s Standard Operating Procedures, the Security Cooperation Information Portal End-Use Monitoring Customer Assistance Handbook, and U.S. Central Command regulations. We also obtained and analyzed the 2012 Compliance Assessment Visit to Lebanon report, U.S. Central Command Inspector General Report for the Office of Defense Cooperation-Beirut, annual end-use monitoring reports for Lebanon, reports of lost or destroyed U.S. equipment provided to Lebanon, Lebanese compliance plans for safeguarding U.S. equipment, and security checklists required for enhanced end-use equipment in Lebanon. In addition, we reviewed and analyzed data and management reports on the equipment provided to Lebanon and end-use checks. Specifically, we reviewed management reports from DOD’s Security Cooperation Information Portal database, including delinquent, reconciliation, ad-hoc, and trend reports. 
We used the portal database to identify defense articles provided to Lebanon that require routine and enhanced end-use monitoring and the compliance actions taken for the items. We compared the data on defense articles and end-use monitoring to the various management reports and found the data to be sufficiently reliable for our purposes. Using the data provided by DOD, we drew a nongeneralizable random sample of 30 items for physical inspection out of 184 items that required enhanced end-use monitoring at two locations in Beirut, Lebanon. During our fieldwork at one of the locations, we observed DOD officials conducting enhanced end-use monitoring checks. In addition, we interviewed DOD officials with the Defense Security Cooperation Agency, U.S. Central Command, U.S. Special Operations Command, and the U.S. Embassy in Beirut. In Lebanon, we also met with officials from the LAF and the ISF. For State’s Blue Lantern program, we reviewed the Blue Lantern guidebook. We reviewed and analyzed Direct Commercial Sales license data, and Blue Lantern checks for Lebanon from fiscal years 2007 through 2012. Although State conducted 15 checks between fiscal years 2007 and 2012, we analyzed only the 10 Blue Lantern checks conducted in fiscal years 2011 and 2012. We limited our analysis to checks during these years to increase the likelihood that the embassy officials who conducted these checks would still be in their current positions, thereby enabling further discussion about the specific details of the checks. We requested both outgoing and responding cables for each of the 10 checks; State was unable to provide cables or email communication for 4 of the 10 Blue Lantern checks. 
Based on summary information provided for each check and cables and e-mail correspondence on 7 of the 10 checks, we analyzed and recorded information about each case, including the subject of the check, the commodity checked, license conditions, evidence that site visits were or were not requested and conducted, inventories requested and conducted, and any follow-up that post indicated was necessary. We determined the Blue Lantern data to be sufficiently reliable for our purposes. In addition, we interviewed State officials in the Bureau of Political and Military Affairs, the Bureau of Near Eastern Affairs, the Office of Foreign Assistance, and the Directorate of Defense Trade Controls in Washington, D.C., as well as officials at the U.S. Embassy in Beirut. For State's Bureau of International Narcotics and Law Enforcement Affairs (INL), we reviewed policy and procedure guidelines for end-use monitoring. We obtained and analyzed inventory lists of equipment that INL provided to Lebanon's security forces during fiscal years 2007 through 2012, using data extracted from the Integrated Logistics Management System database. We received a demonstration on the use of this database to record annual inspection and inventory of equipment. In addition, we reviewed annual inspection reports submitted by embassy officials, Letters of Offer and Acceptance, and a sample transfer letter that contained inspection requirements. We also toured two ISF facilities in Beirut, Lebanon, and observed INL-provided equipment. Lastly, we interviewed State officials of the Bureaus of Political and Military Affairs, International Narcotics and Law Enforcement Affairs, and Near Eastern Affairs; the Office of Foreign Assistance; the Directorate of Defense Trade Controls; and the U.S. Embassy in Beirut. To assess the extent to which the U.S. government vetted Lebanese recipients of U.S. 
security-related training for potential human rights violations, we interviewed DOD and State officials, reviewed both agencies' vetting guidance, and analyzed State documents from Lebanon on individuals and units vetted. We interviewed officials from State's Bureau of Democracy, Human Rights, and Labor and Bureau of Near Eastern Affairs who are responsible for overseeing the human rights vetting process and answering questions from vetting personnel at the U.S. Embassy in Beirut. In Lebanon, we also met with U.S. embassy officials from DOD and State, as well as relevant officials from the Department of Justice and the U.S. Agency for International Development, to understand these other agencies' roles in the human rights vetting process. We also met with representatives of the LAF and ISF to understand their familiarity with the Leahy laws and the U.S. human rights vetting process, as well as how the U.S. Embassy in Beirut communicates vetting results to them. In addition, we reviewed State's human rights vetting guidance, including the Leahy human rights vetting guide; the International Vetting and Security Tracking (INVEST) user guide; multiple cables from State communicating directives to embassies regarding the implementation of the State and DOD Leahy laws; and a Joint Staff message issued by DOD in June 2004 that provided guidance on human rights verification for DOD-funded training of foreign security forces. Furthermore, we reviewed the State and DOD Leahy laws, as well as the U.S. Embassy in Beirut's standard operating procedures. Lastly, to assess whether Lebanese students who attended training were previously vetted for potential human rights violations, we analyzed data from State's INVEST database on almost 10,000 Lebanese individuals or units vetted from October 10, 2010, through April 30, 2013. 
We requested, obtained, and reviewed six training rosters with about 7,100 names of Lebanese students who attended training from October 1, 2010, to April 30, 2013, provided by these security-related assistance programs: International Military Education and Training; International Narcotics Control and Law Enforcement; and Nonproliferation, Antiterrorism, Demining, and Related programs. We selected a random sample of 118 Lebanese names that included representation from all rosters, with a minimum of 10 sampled names per roster. We provided this list of names to the Bureau of Democracy, Human Rights, and Labor and observed as a bureau official entered the first and last name into the search feature of INVEST. If the name was found, we then confirmed other fields, such as the date and description of the training, or the rank of the trainee. Despite some spelling variations, all 118 names of students were found in INVEST. Therefore, we estimate that 100 percent of the Lebanese recipients of training were vetted. The 95 percent margin of error on this estimate is 3 percentage points. We conducted this performance audit from April 2013 to February 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To address both the State and DOD Leahy laws and determine whether there is credible information of a gross violation of human rights, State has established a U.S. human rights vetting process. The State-led process, as illustrated in figure 7, consists of vetting by personnel representing selected agencies and State offices at (1) the U.S.
Embassy in Beirut and State headquarters in Washington, D.C.; (2) State’s Bureau of Democracy, Human Rights, and Labor; and (3) State’s Bureau of Near Eastern Affairs. The personnel involved in the vetting process screen prospective recipients of assistance by searching relevant files, databases, and other sources of information for credible information about gross violations of human rights. State processes, documents, and tracks human rights vetting requests and results through its International Vetting and Security Tracking system (INVEST), a web-based database. The Bureau of Democracy, Human Rights, and Labor is responsible for overseeing the vetting process and for developing human rights vetting policies, among other duties. The Department of Defense (DOD) and Department of State (State) allocated $671 million in security-related assistance for Lebanon in fiscal years 2009 through 2013, with funds varying by year and program. The agencies used eight programs to provide this security-related assistance: Foreign Military Financing, International Narcotics Control and Law Enforcement, Section 1206, International Military Education and Training, Antiterrorism Assistance, Counterterrorism Financing, Export Control and Related Border Security, and Section 1207. See figures 8 through 15 for details on allocation, obligation, and disbursement or commitment of funds for each program’s security-related assistance provided to Lebanon in fiscal years 2009 through 2013. Data are as of September 30, 2013. In addition to the contact named above, Jeff Phillips (Assistant Director), Claude Adrien, Jenna Beveridge, David Dayton, Justin Fisher, Julia Ann Roberts, and La Verne Tharpes made key contributions to this report. Martin de Alteriis, Jeff Isaacs, Grace Lui, and Etana Finkler provided additional technical assistance.
|
Since 2009, the United States has allocated $671 million in security-related assistance for Lebanon to train, modernize, and equip the Lebanese Armed Forces and Internal Security Forces. The U.S. government established end-use monitoring programs to ensure that defense equipment is safeguarded. The Foreign Assistance Act prohibits certain assistance to any unit of foreign security forces if the Secretary of State has credible information that such a unit has committed a gross violation of human rights. GAO was asked to examine U.S. security-related assistance for Lebanon. This report assesses the extent to which the U.S. government (1) disbursed or committed funds allocated for Lebanese security forces in fiscal years 2009 through 2013, (2) implemented end-use monitoring for equipment transferred to Lebanese security forces, and (3) vetted Lebanese recipients of U.S. security-related training for human rights violations. To address these objectives, GAO reviewed laws and regulations, analyzed agency data, and interviewed officials in Washington, D.C., and Beirut, Lebanon. The United States allocated $671 million for Lebanese security forces from fiscal year 2009 through fiscal year 2013. Of these total allocated funds, $477 million, or 71 percent, had been disbursed or committed by the end of fiscal year 2013. Nearly all of the allocations made in fiscal years 2009 through 2011 had been disbursed or committed. For the largest program, Foreign Military Financing, the Department of Defense (DOD) had committed about $352 million of the $481 million allocated in fiscal years 2009 through 2013. Consistent with end-use monitoring requirements, DOD and the Department of State (State) conduct annual inventories for equipment transferred to Lebanese security forces. However, GAO found gaps in efforts to document and monitor physical security of some U.S. 
equipment transferred to Lebanese security forces, gaps that may weaken efforts to safeguard that equipment. First, while DOD annually inventories sensitive equipment by serial number, as required by its policy, U.S. embassy officials in Beirut have not always used DOD's required checklists to document compliance with physical security safeguards. Second, State did not fully document 4 of the 10 end-use monitoring checks it conducted in fiscal years 2011 and 2012 for defense equipment the Lebanese government purchased commercially. State end-use monitoring guidance requires that specific information be documented and maintained for all such checks. Third, while State conducts an annual inventory of the equipment it transfers to the Lebanese security forces, State may not be ensuring that the recipients of defense articles implement recommended physical security safeguards because State lacks procedures to identify defense articles and any recommended safeguards for storing them.
[Figure: Examples of Items Subject to End-Use Monitoring in Lebanon]
GAO estimates that State vetted 100 percent of the Lebanese students who attended U.S.-funded security-related training from October 10, 2010, through April 30, 2013, for human rights violations. On the basis of a cross-check analysis of vetting data and a sample of names from six training rosters, GAO estimates that State vetted all of the 7,104 Lebanese students who attended training during that period. GAO recommends that DOD ensure U.S. officials use required checklists to confirm Lebanese facilities' compliance with required safeguards, and that State maintain proper documentation of end-use checks and establish formal procedures to ensure that bureaus identify defense articles and any recommended physical security safeguards for storing them. DOD and State concurred.
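To illustrate the kind of arithmetic behind such a sample-based estimate, the sketch below computes an exact one-sided 95 percent lower confidence bound for a simple random sample in which no failures are observed. This is only a simplified illustration under stated assumptions, not GAO's actual estimator, which reflects the stratified design across the six rosters; the variable names are ours.

```python
# Illustrative sketch only -- not GAO's actual method. With a simple
# random sample of n names and zero failures (every sampled name found
# in INVEST), the exact one-sided 95% lower confidence bound on the
# vetted proportion solves p**n = alpha, giving p_low = alpha**(1/n).
n = 118        # sampled names, all found in INVEST
alpha = 0.05   # corresponds to 95% one-sided confidence

p_low = alpha ** (1 / n)   # exact lower bound when there are no failures
margin = 1.0 - p_low       # gap below the 100% point estimate

print(f"lower bound: {p_low:.1%}; margin: {margin:.1%}")
```

Under these simplifying assumptions the margin comes out near 2.5 percentage points, in the same neighborhood as the 3-percentage-point figure reported above; a design-based calculation over the stratified rosters would differ somewhat.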
|
The RFS, as defined by EISA, distinguishes between ethanol derived from corn starch (known as corn ethanol) and advanced biofuels—defined as a renewable fuel other than corn ethanol that meets certain criteria. For example, to qualify as an advanced biofuel, a biofuel must reduce lifecycle greenhouse gas emissions by at least 50 percent compared to the gasoline or diesel fuel it displaces. According to the RFS, most advanced biofuels must be produced from cellulosic materials, which can include perennial grasses, crop residue, and the branches and leaves of trees. In addition, some advanced biofuels must be biomass-based diesel, which generally includes any diesel made from biomass feedstocks, such as soybeans. As shown in figure 1, the volume of corn ethanol included under the RFS is capped at 15 billion gallons by 2015 and is fixed thereafter. However, the volume of advanced biofuels continues to grow to a total of 21 billion gallons by 2022. By comparison, the U.S. transportation sector consumed about 14 million barrels of oil per day in 2009, which translates to more than 99 billion gallons of gasoline consumed for the entire year. The RFS generally requires that U.S. transportation fuels in 2022 contain 36 billion gallons of biofuels. In addition, at least 16 billion of the 36 billion gallons of biofuels must be cellulosic biofuels—including ethanol and diesel derived from cellulosic materials. However, under EISA, EPA is required to determine the projected available volume of cellulosic biofuel production for the year, and if that number is less than the volume specified in the statute, EPA must lower the standard accordingly. Pursuant to this provision, EPA has already lowered the RFS requirements for cellulosic biofuel, from 250 million gallons to 6.6 million gallons for 2011, mostly due to the small number of companies with the potential to produce cellulosic biofuel on a commercial scale.
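The mandate volumes cited above can be tallied directly; the short sketch below checks that the figures in this section are internally consistent. All numbers are taken from the text, and the variable names are ours.

```python
# Tally of the RFS volume mandates cited in this section
# (all volumes in billions of gallons, per the text above).
corn_ethanol_cap = 15      # corn ethanol capped at 15 by 2015, fixed after
advanced_by_2022 = 21      # advanced biofuels grow to 21 by 2022

total_2022 = corn_ethanol_cap + advanced_by_2022
assert total_2022 == 36    # matches the overall 36-billion-gallon mandate

cellulosic_min = 16        # at least 16 of the 36 must be cellulosic
assert cellulosic_min <= advanced_by_2022  # cellulosic counts as advanced

# EPA's waiver authority in practice: the 2011 cellulosic requirement
# was lowered from 250 million gallons to 6.6 million gallons.
reduction = 1 - 6.6 / 250
print(f"2011 cellulosic requirement cut by {reduction:.1%}")
```

The last line shows the scale of the 2011 adjustment: a reduction of more than 97 percent from the statutory volume.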
As shown in figure 2, the infrastructure used to transport petroleum fuels from refineries to wholesale terminals in the United States is different from that used to transport ethanol. Petroleum-based fuel is primarily transported from refineries to terminals by pipeline. In contrast, ethanol is transported to terminals via a combination of rail cars, tanker trucks, and barges. According to DOE estimates, there are approximately 1,050 terminals in the United States that handle gasoline and other petroleum products. At the terminals, most ethanol is currently blended as an additive in gasoline to make E10 fuel blends. A relatively small volume is also used to make E85, a blend of 70 to 83 percent ethanol with the remainder gasoline. E85 has a more limited market, primarily in the upper Midwest, and can only be used in flexible-fuel vehicles, which are vehicles that have been manufactured or modified to accept it. After blending, the fuel is moved to retail fueling locations in tanker trucks. There are approximately 159,000 retail fueling outlets in the United States, according to 2010 industry data. This total included more than 115,000 convenience stores, which sold the vast majority of all the fuel purchased in the United States, according to industry estimates; a number of large retailers that sell fuel, such as Walmart, Costco, and several grocery chains; and some very low-volume retailers, such as marinas. In terms of ownership, single-store businesses—that is, businesses that own a single retail outlet—account for about 56 percent of the convenience stores selling fuel in the United States. There are three primary supply arrangements between fuel retailers and their suppliers: Major oil owned and operated. About 1 percent (or 1,175) of convenience stores selling fuel in the United States are owned and operated by four major integrated oil companies—ExxonMobil, Chevron, BP, and Shell. Branded independent retailer.
About 52 percent of retail fueling outlets are operated by independent business owners who sell fuel under the brand of one of the major oil companies or refineries (such as CITGO, Sunoco, or Marathon). These retailers sign a supply and marketing contract with their supplier to sell fuel under the brand of that supplier. Unbranded independent retailer. The remaining retail fueling outlets (about 48 percent) are operated by independent business owners who do not sell gasoline under a brand owned or controlled by a refining company. These retailers purchase gasoline from the unbranded wholesale market, which is made up of gallons not dedicated to fulfilling a refiner’s contracts with branded retailers. Federal safety and environmental regulations govern the dispensing and storage of fuel at retail fueling locations. First, OSHA requires that equipment used to dispense gasoline—including hoses, nozzles, and other related aboveground components, shown in figure 3—be certified for safety by a nationally recognized testing laboratory. According to OSHA officials, OSHA recognizes 17 laboratories, although Underwriters Laboratories (UL) is the main one that currently certifies equipment sold for dispensing gasoline. In addition, under the Solid Waste Disposal Act, EPA requires that underground storage tank (UST) systems—including storage tanks, piping, pumps, and other related underground components, shown in figure 3—be compatible with the substance stored in them to protect groundwater from releases from these systems. Historically, UL certification has been the primary method for demonstrating that USTs meet EPA’s compatibility requirements. EPA also requires fuel retailers to install equipment to detect leaks from UST systems. In total, EPA regulates approximately 600,000 active USTs at about 215,000 sites in the United States. State and local governments also play a role in regulating the safety of dispensing equipment and in implementing EPA’s requirements for USTs.
For example: The Occupational Safety and Health Act allows states to develop and operate their own job safety and health programs. OSHA approves and monitors state programs and plans, which must adopt and enforce standards that are at least as effective as comparable federal standards. According to OSHA officials, there are currently 21 states with approved plans covering the private sector that enforce health and safety standards over the dispensing of gasoline within their respective states. Four additional states operate approved state plans that are limited in coverage to the public sector. Various state and local fire-safety codes—which aim to protect against fires—also govern the dispensing of fuel at retail fueling outlets. While state fire marshals or state legislatures are usually responsible for developing the fire code for their respective states, some states allow local municipalities to develop their own fire codes. Fire codes normally reference or incorporate standards developed by recognized standards-development organizations, such as the National Fire Protection Association and the International Code Council. State, county, and local fire marshals are responsible for enforcing the applicable fire code within their respective jurisdictions. Local officials, such as fire marshals, typically inspect dispensing equipment for compliance with both state and local fire codes. States are largely responsible for implementing EPA’s requirements under its UST program. EPA has approved 36 states, plus the District of Columbia and Puerto Rico, to operate programs in lieu of the federal program. The remaining states have agreements with EPA to be the primary implementing agency for their programs. Typically, states rely on UL certification as the primary method for demonstrating that UST systems meet EPA’s compatibility requirements.
Some states also allow compatibility to be demonstrated in other ways, including through the manufacturer’s approval or a professional engineering certification. Consumers in the United States use retail fueling locations to fuel hundreds of millions of automobiles and nonroad products with gasoline engines. According to DOT data, Americans owned or operated almost 256 million automobiles, trucks, and other highway vehicles in 2008, while about 91 percent of all households owned at least 1 automobile the same year, according to U.S. Census data. Americans also owned and operated over 400 million products with nonroad engines in 2009, according to one industry association estimate. According to EPA documentation, nonroad engines are typically more basic in their engine design and control than engines and emissions control systems used in automobiles, and commonly have carbureted fuel systems and air cooling, whereby extra fuel is used in combustion to help control combustion and exhaust temperatures. According to representatives from industry associations for nonroad engines, most of the small nonroad engines manufactured today rely on older technologies and designs to keep retail costs low, and all of the small nonroad engines currently being produced are designed to perform successfully on fuel blends up to E10. According to industry representatives, while it is possible to design small nonroad engines to run on a broad range of fuels, such designs would not be cost effective and could add hundreds of dollars to the price. Existing ethanol infrastructure should be sufficient to transport the nation’s ethanol production through 2015, according to DOT officials and industry representatives, but large investments in transportation infrastructure may be needed to meet 2022 projected consumption, according to EPA documentation. One option for doing so may be to construct a dedicated ethanol pipeline, but this option presents significant challenges. 
According to knowledgeable DOT officials and industry representatives we met with, the existing rail, truck, and barge transportation infrastructure for shipping corn ethanol to wholesale markets should be sufficient through 2015, when the volume of corn ethanol in the RFS is effectively capped at 15 billion gallons annually. This volume represents roughly a 2.4 billion gallon increase from 2011 RFS consumption targets for corn ethanol. Specifically, for rail, which transports about 66 percent of corn ethanol to wholesale markets, several DOT officials and representatives from the Association of American Railroads told us that the addition of a few billion gallons of ethanol over the near term is not expected to have a significant impact. Railroads hauled more than 220,000 rail carloads of ethanol in 2008 (the most recent year for which data are available)—which was about 0.7 percent of all the rail carloads and about 1 percent of the total rail tonnage transported that year in the United States, according to data from the Association of American Railroads. Similarly, knowledgeable DOT officials and industry representatives said there is sufficient capacity in the short term to transport additional volumes of corn ethanol via trucks, which transport about 29 percent of corn ethanol to wholesale markets, and barges, which transport roughly 5 percent, to meet RFS requirements. In contrast, the existing infrastructure may not be sufficient to handle the ethanol production that is projected after 2015. The RFS generally requires transportation fuels in the United States to contain 21 billion gallons of advanced biofuels, including a large quantity of cellulosic ethanol, by 2022. In a 2010 regulatory impact analysis, EPA assessed the impacts of an increase in the production, distribution, and use of ethanol and other biofuels sufficient to meet this requirement. 
In its assessment, EPA used three scenarios or “control cases” to project the amounts and types of renewable fuels to be produced domestically and imported from 2010 through 2022. Under its “primary” control case, EPA projected that by 2022, the United States would produce and import over 22 billion gallons of ethanol, comprising 15 billion gallons of domestically produced corn ethanol, almost 5 billion gallons of domestically produced cellulosic ethanol, and over 2 billion gallons of imported ethanol. EPA also estimated the number of facilities that would need to be built or modified, as well as the number of additional vehicles that would need to be purchased. Under its primary control case, EPA estimated that the necessary spending on transportation infrastructure due to increased ethanol consumption would be approximately $2.6 billion. According to EPA’s analysis: For rail. EPA estimated that approximately $1.2 billion would be needed for an additional 8,450 rail tanker cars ($760 million) and the construction of new train facilities ($446 million). EPA projected that biofuels transport will constitute approximately 0.4 percent of the total tonnage for all commodities transported by the freight rail system through 2022. Sixteen percent of the nation’s freight rail system would be affected by biofuels shipments, and that portion (mostly along rail corridors radiating out of the Midwest) would see a 2.5 percent increase in traffic. For trucks. EPA estimated that approximately $87 million would be needed for an additional 480 tank trucks. For barges. EPA estimated that approximately $198 million would be needed for an additional 32 barges ($45 million), and the configuration of barge facilities (a projected $153 million). EPA stated that it does not anticipate a substantial fraction of biofuels will be transported via barge over the inland waterway system. 
In addition, the agency projected that a total of 30 ports will receive significant quantities of imported ethanol from Brazil and Caribbean Basin Initiative countries by 2022. For wholesale terminals. EPA estimated that $1.15 billion in investments would be needed, primarily to modify vapor recovery equipment (at a cost of $1 million for each terminal that does not already handle ethanol). Other modifications would include the installation of new storage tanks, modification of existing tanks, and modification of tank-truck unloading facilities. EPA stated that the United States will face significant challenges in accommodating the projected increases in biofuels production by 2022, but it concluded that the task would be achievable at the wholesale level. For example, the agency stated that it believed overall freight-rail capacity would not be a limiting factor to the successful implementation of RFS requirements. However, while this task may be achievable, it is likely to be increasingly difficult because of congestion on U.S. transportation networks. We and others have reported that congestion is constraining the capacity and increasing the costs of U.S. rail and highway transportation. For example, we reported in 2008 that neither rail nor highway capacity had kept pace with recent increases in demand, leading to increased costs. We also cited a study by the Association of American Railroads, which predicted that without system improvements, the expected increases in rail volume by 2035 will cause 30 percent of primary rail corridors to operate above capacity and another 15 percent at capacity. The study stated the resulting congestion might affect the entire country and could shut down the national rail network. In addition, we noted that many of the highways used heavily by trucks to move freight are already congested, and congestion is expected to become a regular occurrence on many intercity highways. 
Finally, we noted that ports are likely to experience greater congestion in the future as more and larger ships compete for limited berths. If overall ethanol production increases enough to fully meet the RFS over the long term, one option to transport it to wholesale markets would be through a dedicated ethanol pipeline. Over many decades, the United States has established very efficient networks of pipelines that move large volumes of petroleum-based fuels from production or import centers on the Gulf Coast and in the Northeast to distribution terminals along the coasts. However, the existing networks of petroleum pipelines are not well suited for the transport of billions of gallons of ethanol. Specifically, as shown in figure 4, ethanol is generally produced in the Midwest and needs to be shipped to the coasts, flowing roughly in the opposite direction of petroleum-based fuels. The location of renewable fuel production plants (such as biorefineries) is often dictated by the need to be close to the source of the raw materials and not by proximity to centers of fuel demand or existing petroleum pipelines. Existing petroleum pipelines can be used to ship ethanol in some areas of the country. For example, in December 2008, the U.S. pipeline operator Kinder Morgan began transporting commercial batches of ethanol along with gasoline shipments in its 110-mile Central Florida Pipeline from Tampa to Orlando. However, pipeline owners would face the same technical challenges and costs that Kinder Morgan representatives reported facing, including the following: Compatibility. Ethanol can dissolve dirt, rust, or hydrocarbon residues in a petroleum pipeline and degrade the quality of the fuel being shipped. It can also damage critical nonmetallic components, including gaskets and seals, which can cause leaks. 
In order for existing pipelines to transport ethanol, pipeline operators would need to chemically remove residues and replace any components that are not compatible with ethanol. According to DOT officials, the results from two research projects sponsored by that agency have identified specific actions that must be taken on a wide variety of nonmetallic components commonly utilized by the pipeline industry. Stress corrosion cracking. Tensile stress and a corrosive environment can combine to crack steel. The presence of ethanol increases the likelihood of this in petroleum pipelines. Over the past 2 decades, approximately 24 failures due to stress corrosion cracking have occurred in ethanol tanks and in production-facility piping having steel grades similar to those of petroleum pipelines. According to DOT officials, the results from nine research projects sponsored by that agency have targeted these challenges and produced guidelines and procedures to prevent or mitigate stress corrosion cracking. As a result, pipelines can safely transport ethanol after implementing the identified measures, according to DOT officials. Attraction of water. Ethanol attracts water. If even small amounts of water mix with gasoline-ethanol blends, the resulting mixture cannot be used as a fuel or easily separated into its constituents. The only options are additional refining or disposal. Some groups have proposed the construction of a new pipeline dedicated to the transportation of ethanol. For example, in February 2008, Magellan Midstream Partners, L.P. (Magellan) and Buckeye Partners, L.P. (Buckeye) proposed building a new pipeline from the Midwest to the East Coast. According to this proposal, the pipeline would gather ethanol from three segments: (1) Iowa, Nebraska, and South Dakota; (2) Illinois, Michigan, and Minnesota; and (3) Indiana and Ohio. Ethanol would be transported to demand centers in New England, the Mid-Atlantic, Virginia, and West Virginia. 
The federal government has studied the feasibility of building a pipeline similar to the one proposed by Magellan. Specifically, under section 243 of EISA, DOE (in collaboration with DOT) issued a study in March 2010 that examined the feasibility of constructing an ethanol pipeline linking large East Coast demand centers with refineries in the Midwest. The report identified a number of significant challenges to building a dedicated ethanol pipeline, including the following: Construction costs. Using recent trends in pipeline construction costs and generally accepted industry estimates, DOE estimated that an ethanol pipeline from the Midwest to the East Coast could cost about $4.5 million per mile. While DOE assumed that the construction of 1,700 miles of pipeline would cost more than $3 billion, it did not model total project costs beyond $4.25 billion in the report. Higher transportation rates. Based on the assumed demand for ethanol in the East Coast service area and the estimated cost of construction, DOE estimated the ethanol pipeline would need to charge an average tariff of 28 cents per gallon, substantially more than the current average rate of 19 cents per gallon for transporting ethanol using rail, barge, and truck along the same transportation corridor. Lack of eminent domain authority. DOE estimated that siting a new ethanol pipeline of any significant length would likely require federal eminent domain authority, which currently does not exist for ethanol pipelines. DOE’s report concluded that a dedicated ethanol pipeline could become a feasible option if there is (1) adequate demand for the ethanol (approximately 4.1 billion gallons per year for the hypothetical pipeline assessed) and (2) government financial incentives to help defray the large construction costs. We identified several challenges to selling intermediate ethanol blends at the retail level.
First, federal and state regulations governing health and environmental concerns must be met before these blends are allowed into commerce, and fuel-testing requirements to meet these regulations may take 1 year or more to complete. Second, according to knowledgeable federal officials and UL representatives, federal safety standards do not allow ethanol blends over E10 to be dispensed at most retail fueling locations, and federally sponsored research has indicated potential problems with the compatibility of intermediate ethanol blends with existing dispensing equipment. Third, according to EPA and several industry representatives, the compatibility of many UST systems with these fuels is uncertain, and retailers will need to replace any components that are not compatible if they choose to store intermediate blends. Fourth, industry associations representing various groups, such as fuel retailers and refiners, are concerned that, in selling intermediate ethanol blends, fuel retailers may face significant costs and risks, such as upgrading or replacing equipment. According to knowledgeable EPA officials within the Office of Transportation and Air Quality, the regulatory process for allowing an intermediate ethanol blend into commerce could take 1 year or more. As described in table 1, the Clean Air Act, among other things, establishes a comprehensive regulatory program aimed at reducing harmful emissions from on- and off-road vehicles and engines and the fuels that power them. According to EPA officials, this regulatory program would apply to the introduction of new fuels, including E15 and other intermediate blends. Although intermediate ethanol blends higher than E15 would need to meet all of these requirements, E15 has already partly met the first two. 
EPA partially granted a fuel waiver allowing E15 for use in model year 2001 and newer automobiles, and EPA officials told us the agency has no plans to revise its regulations for certifying detergents for E15 because it has not identified any detergent-related issues for E15 that differ from those for E10. According to EPA officials, the remaining two requirements have not yet been completed for E15 but are in the process of being addressed, specifically: Registering intermediate ethanol blends could require health-effects testing similar to that performed for E10, which could take 2 years or more, depending on variables such as the availability of testing laboratories. According to EPA officials, EPA received information on February 18, 2011, from an ethanol industry representative contending that the health-effects testing previously performed for E10 is an adequate substitute for E15. According to recent congressional testimony, EPA expects to finish reviewing the information by the middle of 2011. EPA would have to update the regulations for its reformulated gasoline program, which do not currently allow fuel manufacturers to certify batches of gasoline containing greater than 10 percent ethanol by volume. In November 2010, EPA proposed a rule that would, among other things, update the model to allow for reformulated gasoline containing up to 15 percent ethanol by volume. According to EPA officials, EPA expects to issue a final rule sometime in 2011. In addition to federal regulations, many states have established regulations or statutes related to transportation fuels, according to a 2010 industry report. In particular, many state regulations or statutes contain references to specific industry standards for fuel published by a recognized standards-development organization, including ASTM International and the National Institute of Standards and Technology (NIST), according to the report and knowledgeable NIST officials we interviewed.
These standards, however, are only relevant to E10, and neither organization has published any standards related to the use of intermediate ethanol blends up to E85. Therefore, before allowing intermediate ethanol blends into commerce, the states that reference existing ASTM International or NIST standards would have to either (1) enact new statutes or regulations that no longer reference the existing standards or (2) wait for ASTM International or NIST to update their standards related to intermediate ethanol blends. Either option could take more than a year to implement, according to knowledgeable officials from NIST and the California Air Resources Board. In general, federal safety standards do not allow ethanol blends over E10 to be dispensed with existing equipment at most retail fueling locations. Specifically, OSHA requires that all equipment used to dispense gasoline be certified for safety by a nationally recognized testing laboratory. UL, the only such laboratory that has developed standards for certifying dispensing equipment, did not publish safety standards specifically for intermediate ethanol blends until August 2009, and no UL-certified dispensing equipment was available for use with these blends until 2010. Dispensing equipment manufactured earlier has been certified for blends up to E10, and UL does not recertify equipment that has already been certified to an existing UL standard, according to several UL representatives. Moreover, UL does not retroactively certify manufactured or installed equipment to new safety standards because it cannot monitor whether the equipment has been modified by, for example, aging or maintenance. As a result, according to knowledgeable OSHA officials and several UL representatives, the vast majority of existing retail dispensers in the United States are not approved for use with intermediate ethanol blends under OSHA’s safety regulations. 
Until recently, UL and OSHA were each exploring ways to allow fuel retailers to use existing dispensing equipment with intermediate ethanol blends while still meeting OSHA’s safety regulations. For example, in a February 2009 announcement, UL stated that existing dispensing equipment—certified for use with E10—could be used with blends containing up to 15 percent ethanol, based on data the company had collected. According to the announcement, UL did not find any significant incremental risk of damage to existing equipment between E10 and fuels with a maximum of 15 percent ethanol. In addition, several OSHA officials told us in November 2010 that the agency was at the early stages of evaluating several options—such as implementing a grace period on planned enforcement activities or developing an enhanced inspection and maintenance program for a limited time—that would allow existing dispensing equipment to be approved for use with E15. However, results from federally sponsored research indicate potential problems with the use of intermediate ethanol blends with some existing dispensing equipment. A DOE-commissioned report prepared by UL was issued in November 2010 on the compatibility of intermediate blends with new and used dispensing equipment certified for blends up to E10. According to the report, although various components generally performed well with the testing fluid, some of the components tested (including valve assemblies and nozzles) demonstrated a reduced level of safety, performance, or both when exposed to the testing fluid. This was mostly due to the failure of certain nonmetal components, such as gaskets and seals. In March 2011, DOE’s ORNL published a report stating that, although metal samples experienced very little corrosion, all elastomer samples (such as fluorocarbon, nitrile rubber, and polyurethane) exhibited some level of swelling and the potential to leak when exposed to testing fluids. 
This research has led UL and OSHA to reconsider support for the use of existing dispensing equipment with intermediate ethanol blends. In a December 2010 announcement based on this research, UL stated that it advised against the use of intermediate ethanol blends with dispensing equipment certified for E10 and, instead, recommended the use of new equipment designed and certified for use with intermediate ethanol blends. The announcement stated that UL was particularly concerned that blends over E10 could lead to the degradation of gaskets, seals, and hoses and could cause leaks. In addition, several OSHA officials told us that, as a result of this research, the agency is re-evaluating its plan to explore ways to allow fuel retailers, under certain conditions, to use existing dispensing equipment with intermediate blends. However, OSHA’s position on this issue remains unclear, and it is uncertain when the agency will establish a definitive position. On the one hand, according to several OSHA officials we talked with, the vast majority of existing retail dispensers in the United States are not approved for use with intermediate ethanol blends under OSHA’s safety regulations. On the other hand, these officials also stated that OSHA is still developing its position on the use of existing dispensing equipment with intermediate blends. While these officials said that strict enforcement of current OSHA requirements for dispensing equipment now appears a more likely option, they did not provide time frames for when OSHA would finalize its position or explain how it planned to communicate a decision to fuel retailers and other interested parties.
According to our discussions with knowledgeable federal officials and several industry association representatives, the compatibility of many existing UST systems with intermediate ethanol blends is unclear for two main reasons—many fuel retailers have older equipment and lack records, and recent federally sponsored research indicates potential problems with the use of intermediate blends. Retail fueling outlets generally have two or more UST systems, according to industry association representatives, and each system contains a large number of components and materials. According to EPA documentation and knowledgeable EPA officials within the Office of Underground Storage Tanks, many existing USTs range in age from 1 to 40 years and contain components certified to a range of UL standards, which typically have evolved over time, or have been approved by the manufacturer for varying uses. Because these systems are buried underground, visually inspecting some components for compatibility is impossible without excavating them. Thus, fuel retailers, along with state and federal inspectors, primarily rely on recordkeeping to verify UST system compatibility with the fuel stored in them. However, inadequate recordkeeping may make it difficult for retailers with older stations to verify UST system compatibility with intermediate ethanol blends. For example, according to EPA documentation, knowledgeable EPA officials, and a representative from the Society of Independent Gasoline Marketers of America, many fuel retailers do not have complete records of all their UST equipment, particularly those with stations having several previous owners. Furthermore, many installation companies and component manufacturers may have gone out of business, according to EPA documentation, which could make verification particularly challenging. 
Recognizing this issue, EPA announced in November 2010 that it plans to issue guidance that would clarify its compatibility requirements for UST systems storing ethanol blends higher than 10 percent. In its announcement, EPA also solicited public feedback on the extent of the challenges fuel retailers face in demonstrating existing UST systems’ compatibility with intermediate ethanol blends and on alternatives that would sufficiently protect human health and the environment. EPA officials said the agency expects to issue guidance sometime in 2011. Determining compatibility may be important because ongoing federal research indicates potential problems with the use of intermediate ethanol blends with some UST components. For example, according to a recent DOE report and additional results from DOE research, certain elastomers, rubbers, and other materials used in UST systems may degrade or swell excessively when exposed to intermediate ethanol blends, becoming ineffective as gaskets or seals. DOE testing also indicates that a pipe-thread sealant commonly used in UST piping in the past is not compatible with any ethanol blends, which raises concerns that these components may leak when exposed to ethanol—even in lower blends, such as E10. According to the report, DOE expects to conclude this research in the near future. In addition, DOE officials said they do not expect to conduct additional research on UST components or equipment. However, important gaps exist in current federal research efforts in this area. For example, several officials within EPA’s Office of Underground Storage Tanks told us that DOE’s research efforts to date have focused only on testing materials (e.g., elastomers and rubbers) and not actual components and equipment (e.g., valves and tanks) found in UST systems.
In addition, according to EPA officials, while the agency plans to study the compatibility of E15 with UST systems, this research will be based on interviews with experts and not on actual testing of materials, components, or equipment. Moreover, EPA officials characterized this research effort as more of a “modeling” or scoping effort to determine the extent of any potential problems. EPA officials stated that the ability to determine the compatibility of legacy equipment with intermediate blends is limited. Nevertheless, they acknowledged that additional research will be necessary to facilitate a transition to storing intermediate ethanol blends in UST systems, including the suitability of specific UST components with intermediate blends. EPA officials told us that they are working with industry officials and federal partners to understand the impact of intermediate blends in UST systems. However, to date EPA has not developed a plan to undertake such research. It is also unclear whether leak-detection equipment will properly detect leaks of intermediate ethanol blends. According to knowledgeable EPA officials and UL representatives, UL has not developed performance standards for leak-detection equipment used in UST systems. EPA officials explained that, while some leak-detection equipment has been approved by the manufacturer for the compatibility of its materials with intermediate ethanol blends, EPA is not certain whether the ethanol content of the fuel, in general, would affect the operability of this equipment. To address this potential problem, EPA is sponsoring research, in collaboration with manufacturers and other stakeholders, to determine which of these devices works properly with ethanol. EPA officials currently expect test results to be available by the end of 2011. 
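The recordkeeping-based verification described above amounts to checking that every component in a UST system has a surviving record certifying it for the fuel to be stored; any component with no record cannot be verified at all. The following sketch illustrates that logic only. The component names, the ethanol-percentage record format, and the example values are hypothetical, not actual EPA or UL data:

```python
# Hypothetical sketch of a recordkeeping-based UST compatibility check.
# A system can be verified for a given blend only if every component's
# records show a certified maximum ethanol percentage at or above the
# blend; components with missing records cannot be verified at all.

def check_ust_compatibility(components, blend_pct):
    """Return (incompatible, unverifiable) lists of component names.

    components maps a component name to its certified maximum ethanol
    percentage, or to None if no record survives.
    """
    incompatible, unverifiable = [], []
    for name, max_ethanol_pct in components.items():
        if max_ethanol_pct is None:          # no surviving paperwork
            unverifiable.append(name)
        elif max_ethanol_pct < blend_pct:    # certified only for lower blends
            incompatible.append(name)
    return incompatible, unverifiable

# Illustrative system: a tank certified to E100, piping certified only
# to E10, and a submersible pump with no surviving records.
system = {"tank": 100, "piping": 10, "submersible pump": None}
bad, unknown = check_ust_compatibility(system, blend_pct=15)
print(bad)      # components certified only below E15
print(unknown)  # components that cannot be verified from records
```

In this illustration, storing E15 would require replacing or recertifying the piping and would leave the pump's status unknown, which mirrors the verification difficulty described above for stations with incomplete records.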
According to several industry associations representing various groups, such as fuel retailers and refiners, many fuel retailers may face significant costs and risks in selling intermediate ethanol blends. According to these industry representatives, retailers make very little money selling fuel—for example, the national average profit from selling gasoline last year was 9 cents per gallon, according to industry data. Retailers make most of their profit selling merchandise such as food, beverages, and tobacco products, according to these industry representatives, and gasoline is sold below cost in some markets to attract customers to buy more profitable goods. As a result, according to several industry representatives, most retailers do not upgrade their fuel-storage and -dispensing equipment without a significant market opportunity. For these fuel retailers, the prospect of selling intermediate ethanol blends presents several potential challenges. The first is cost. Some fuel retailers may have to spend hundreds of thousands of dollars to upgrade their equipment to store and dispense intermediate ethanol blends, for the following reasons:

Under current OSHA regulations, most fuel retailers will need to replace at least one dispenser system to sell intermediate ethanol blends. According to estimates from EPA and several industry associations, installing a new dispenser system compatible with intermediate ethanol blends will cost over $20,000. According to some industry association representatives, a typical fuel retailer has four dispensers and, therefore, would face costs exceeding $80,000 to upgrade an entire retail facility.

Fuel retailers with inadequate records of their UST systems may have to upgrade certain UST components to demonstrate compatibility with intermediate ethanol blends.
According to some industry association representatives and information from DOE’s NREL, upgrading some components would be less expensive than installing an entirely new UST system. Taking this into consideration, EPA estimated an average cost of $25,000 per retail facility to make the needed changes to underground storage components. However, EPA cautioned that this cost scenario is very speculative, given that the costs of modifying underground components could vary greatly. According to EPA officials, most tank owners will be able to demonstrate compatibility by replacing certain portions of the UST system that are easily accessible (such as submersible pumps, tank probes, pipe dope, and overfill valves). The costs for these upgrades, including labor, can be as low as a few thousand dollars but may increase if more extensive upgrades are required. According to EPA and industry estimates, the total cost of installing a new single-tank UST system compatible with intermediate ethanol blends is more than $100,000. In addition to the high costs, some industry association representatives stated that fuel retailers who have recently installed new UST systems may be particularly reluctant to replace them, especially since UST warranties can last for several decades, and the useful life of these systems can be even longer. In Florida, for example, fuel retailers were required to replace or upgrade all single-wall USTs by December 31, 2009. A second potential challenge consists of financial and logistical limitations on the types of fuel a retailer may be able to sell. According to representatives from several industry associations, most retail fueling locations have only two UST systems, and many fuel retailers cannot install additional UST systems due to space constraints, permitting obstacles, or cost. Currently, fuel retailers with two UST systems can sell three grades of gasoline: regular, midgrade, and premium. 
To accomplish this, they typically use one of their tanks to store regular gasoline and the other for premium, both of which are preblended with up to 10 percent ethanol. They then use their dispensing equipment to blend fuel from both tanks into midgrade gasoline. If fuel retailers with two UST systems want to sell intermediate ethanol blends, however, they may face certain limitations. For example, fuel retailers with two UST systems who want to sell regular, midgrade, and premium gasoline could use the tanks to store regular and premium grades of an intermediate blend, such as E15. However, since EPA has only allowed E15 for use in model year 2001 and newer automobiles, these retailers would not be able to sell fuel to consumers for use in older automobiles and nonroad engines. A third potential challenge relates to legal uncertainty among industry groups, who are concerned they could be held liable for selling intermediate ethanol blends. For example, according to representatives we interviewed from several industry associations, fuel retailers have received conflicting or confusing messages from different authorities as to whether existing dispensing equipment can be lawfully used with intermediate ethanol blends. According to these industry representatives, this confusion is partly the result of UL’s 2009 announcement supporting the use of blends containing up to 15 percent ethanol with existing dispensing equipment. However, even if state or local officials—such as fire marshals—approve the use of intermediate blends with existing dispensers, the retailers selling these blends would still be effectively ignoring OSHA’s regulations, which require the use of equipment that has been certified for safety by a nationally recognized testing laboratory, such as UL. 
As a result, several industry representatives raised concerns that fuel retailers could expose themselves to lawsuits for negligence and invalidate important business agreements that may reference these safety requirements, such as tank insurance policies, state tank-fund policies, and business loan agreements. In addition, according to representatives from several industry associations we interviewed, many fuel retailers are concerned that consumer misfueling—using intermediate ethanol blends in nonapproved engines—could raise liability issues, especially if the misfueling is associated with negative outcomes, such as diminished engine performance and safety problems. Because EPA has only allowed E15 for use in model year 2001 and newer automobiles, representatives from several industry associations stated that consumers may not be aware of the distinction between approved and nonapproved engines, or they may be confused about which fuel to use, thus complicating their experience at retail fueling outlets and increasing opportunities for misfueling. According to some industry and state government representatives, since many automobile manufacturer warranties do not cover the use of intermediate ethanol blends, even for the model year vehicles approved by EPA for E15, consumers could be held responsible for the cost of any repairs attributed to the use of E15. One proposed method of mitigating the potential for misfueling is to label fuels at retail outlets. In November 2010, EPA issued proposed labeling requirements for ethanol blends as high as E15. According to its proposed requirements, EPA is coordinating with the Federal Trade Commission, which in March 2010 proposed labeling requirements for ethanol blends containing greater than 10 percent and less than 70 percent ethanol by volume. However, representatives from several industry associations have raised concerns that labeling will not adequately address potential misfueling. 
For example, some industry association representatives stated that some consumers will not understand the label, or the label might get lost among the other labels commonly found on dispensers. Furthermore, industry association representatives said some consumers will intentionally misfuel their automobiles if intermediate ethanol blends are cheaper. For example, industry association representatives stated some of their members have witnessed consumers using E85 in nonflex-fuel vehicles, presumably because E85 is cheaper than E10. With the possibility of introducing intermediate ethanol blends in the nation’s motor-fuel supply, DOE began to study the effects of these fuels in automobiles and nonroad engines in 2007. Specifically, in March 2007, DOE’s Office of Energy Efficiency and Renewable Energy convened a workshop of experts to evaluate progress and develop a strategy for meeting the Bush Administration’s “20 in 10” initiative. The goal of the initiative was to reduce U.S. gasoline usage by 20 percent over the next 10 years through increased use of alternative fuels and improved fuel economy. One conclusion from the workshop was that increasing the ethanol content in motor fuel to E15 or E20 would be the most effective strategy over the short term. However, based on a review of existing research, DOE’s ORNL found that almost no data existed on the effects of E15 on automobiles, while only limited data existed on the effects of E20. To address this data gap, DOE began working with EPA, the Coordinating Research Council, Inc. (CRC), and other groups in 2007 to develop a list of research projects to test the effects of E15 and E20 on automobiles and nonroad engines. DOE, EPA, and CRC have provided about $51 million in funding (for fiscal years 2007 through 2010) for ten research projects (see table 2). Of the six federally sponsored projects on automobiles, four projects are ongoing and are expected to be completed in 2011. 
Two projects have been completed—Project V1, which looked primarily at the effects of E15 and E20 on tailpipe emissions from automobiles, and Project V3, which looked primarily at the effects of E20 on evaporative emissions from automobiles. According to published reports, project findings included the following:

Exhaust emissions. According to the 2009 DOE report for Project V1, regulated tailpipe emissions from 16 automobiles (including model years ranging from 1999 to 2007) remained largely unaffected by the ethanol content of the fuel. Increasing the ethanol content of the fuel, however, resulted in increased emission of ethanol and acetaldehyde. DOE has also released all of the testing data from Project V4, which is examining emissions and aging in 82 automobiles (including model years ranging from 2000 to 2009). EPA based its decision to allow E15 for use in certain automobiles partly on these results. According to EPA’s decision, model year 2000 and older automobiles do not have the sophisticated emissions control systems of more recently manufactured automobiles, and there is an engineering basis to believe they may experience emissions increases if operated on E15.

Fuel economy. According to DOE’s report for Project V1, ethanol has about 67 percent of the energy density of gasoline on a volumetric basis. As a result, automobiles running on intermediate ethanol blends exhibited a loss in fuel economy commensurate with the energy density of the fuel. Specifically, when compared to using gasoline containing no ethanol, the average reduction in fuel economy was 3.7 percent using E10, 5.3 percent using E15, and 7.7 percent using E20.

Catalyst temperatures. According to the 2009 report for Project V1, 9 of the 16 automobiles adjusted their air-to-fuel ratio at full power to compensate for the increased oxygen content in the ethanol-blended fuel.
In these cases, the catalyst temperatures at equivalent operating conditions were lower or unchanged with ethanol. Seven of the 16 tested automobiles failed to adequately adjust their air-to-fuel ratio for the increase in oxygen with E20 fuel compared with 100 percent gasoline at full power. As a result, catalyst temperatures for these automobiles at full power were between 29ºC and 35ºC higher with E20 relative to gasoline. According to the report, the long-term effect of this temperature increase on catalyst durability is unknown and requires further study. Evaporative emissions. According to its 2010 report for Project V3, CRC found that intermediate ethanol blends may increase evaporative permeation emissions—fuel-related emissions that do not come from the tailpipe—in older automobiles. CRC’s report was not based on statistically significant comparisons, but it noted certain trends—for example, compared to pure gasoline, E10 and E20 were associated with increased evaporative emissions. Of the four federally sponsored projects on nonroad engines, one (SE4) is ongoing, and one (SE3) has been canceled. According to DOE, the objective of Project SE4 is to determine the effects of E15 on the safety, performance, and emissions of several popular marine and snowmobile engines. The objective of Project SE3 was to assess the effects of intermediate ethanol blends, including E15, on the safety and performance of handheld small nonroad engines, including chainsaws. However, according to DOE officials, the department decided in the summer of 2010 to defer Project SE3 indefinitely because the Outdoor Power Equipment Institute—an industry association representing small nonroad engine manufacturers and DOE’s major partner on the project—declined to submit a proposal for conducting the testing. According to one official with the Institute, this decision was based, in part, on EPA’s indication that it would not allow E15 for use in small nonroad engines. 
The two federally sponsored projects on nonroad engines that have been completed—SE1 and SE2—were not conclusive, but indicated potential problems with the use of intermediate ethanol blends in small nonroad engines. Project SE1 was a pilot study of six commercial and residential small nonroad engines, and Project SE2 tested 22 engines over their full useful lives. According to the 2009 DOE report, the projects found that with increasing levels of ethanol:

For all engines tested, exhaust and engine temperatures generally increased.

Three handheld trimmers had higher idle speeds and experienced unintentional clutch engagement, which DOE laboratory officials identified as a potential safety concern that can be mitigated in some engines by adjusting the carburetor.

For all engines tested, emissions of nitrogen oxides increased and emissions of carbon monoxide decreased, while emissions of hydrocarbons decreased in most engines, but increased for some.

EPA cited results from Projects SE1 and SE2 in its decision to not allow the use of E15 in nonroad engines and other equipment. Specifically, in its October 2010 decision, EPA stated that the results of these projects indicated reasons for concern with the use of E15 in nonroad engines, particularly regarding long-term exhaust and evaporative emissions durability and materials compatibility. Moreover, the agency stated that the application for use of E15 did not provide information to broadly assess the nonroad engine and vehicle sector. EPA concluded that since there are important differences in design between the various types of nonroad engines, and since the agency was not aware of other information that would allow it to fully assess the potential impacts of E15 on the emission performance of nonroad products, it could not allow the use of E15 in these engines. Due to ongoing litigation, we did not evaluate the adequacy of these federally sponsored projects.
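The fuel-economy reductions reported for Project V1 track the energy content of the blend. As a back-of-the-envelope check, not part of the DOE analysis, one can assume fuel economy scales with volumetric energy content and that ethanol carries about 67 percent of gasoline's energy, the figure given in the report:

```python
# Back-of-the-envelope check: if fuel economy scales with volumetric
# energy content, an EXX blend's expected reduction relative to
# ethanol-free gasoline is the energy it gives up to ethanol.
ETHANOL_ENERGY_FRACTION = 0.67  # per the DOE Project V1 report

def expected_mpg_reduction_pct(ethanol_pct):
    """Percent fuel-economy loss predicted from energy content alone."""
    x = ethanol_pct / 100.0
    blend_energy = (1 - x) + x * ETHANOL_ENERGY_FRACTION
    return round((1 - blend_energy) * 100, 1)

# Compare the simple estimate with the measured reductions reported
# for Project V1 (3.7, 5.3, and 7.7 percent).
for blend, measured in [(10, 3.7), (15, 5.3), (20, 7.7)]:
    print(f"E{blend}: energy-content estimate "
          f"{expected_mpg_reduction_pct(blend)}%, measured {measured}%")
```

The linear energy-content estimate (roughly 3.3, 5.0, and 6.6 percent for E10, E15, and E20) comes out somewhat below the measured reductions, suggesting that energy density accounts for most, though not all, of the observed loss.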
In November 2010, several trade groups representing the oil and gas sector and the food and livestock industries filed a lawsuit with the U.S. Court of Appeals for the District of Columbia Circuit challenging EPA’s E15 waiver decision. According to the plaintiffs’ statement filed in January 2011, one key issue in the lawsuit is whether EPA acted arbitrarily, capriciously, and in excess of its statutory authority by relying on data that do not provide adequate support for its conclusions, while ignoring extensive data contradicting its position. In addition, in December 2010, several trade groups representing automobile and small-engine manufacturers filed another lawsuit with the U.S. Court of Appeals for the District of Columbia Circuit challenging EPA’s E15 waiver decision. The initial court documents did not provide details on these groups’ rationale for challenging EPA’s waiver decision. In addition to these federally sponsored projects, some nonfederal organizations are conducting research on the effects of intermediate ethanol blends in automobiles. Appendix II provides a description of these organizations and a list of some of their published research. We did not evaluate the results of these studies. The RFS calls for increasing amounts of biofuels to be blended in the nation’s transportation fuel supply, including up to 15 billion gallons of ethanol made from corn starch and potentially billions of gallons of additional ethanol made from cellulosic sources. EPA is responsible for establishing and implementing regulations to ensure that the nation’s transportation fuel supply contains the volumes of biofuels required by the RFS. The agency is also tasked with ensuring that new fuels do not cause or contribute to noncompliance with existing emissions standards when used in automobiles and nonroad products. 
EPA recently allowed an intermediate ethanol blend, E15, for use in model year 2001 and newer automobiles, after determining that it would not cause these automobiles to be out of compliance with emissions standards. EPA, along with OSHA, is also responsible for ensuring that fuels are compatible and safe for use with infrastructure at fueling locations. However, the effects of intermediate ethanol blends on key components of the nation’s retail fueling infrastructure—such as gaskets and seals in dispensing equipment and UST systems—are not fully understood. A recently published DOE report found that materials commonly used in these gaskets and seals can swell when exposed to certain intermediate ethanol blends, potentially causing leaks. In the case of fuel-dispensing equipment, some newer equipment meets OSHA safety regulations for use with intermediate ethanol blends, as this equipment has been tested and certified by UL for compatibility. Most existing equipment at retail fueling locations in the United States, however, is not approved for use with intermediate blends. Until recently, OSHA had been exploring ways to allow fuel retailers to use existing equipment with intermediate blends while still meeting OSHA’s safety requirements. In light of the recent DOE-sponsored research, OSHA officials are re-evaluating the use of existing equipment with intermediate blends. However, the agency has not clarified when it will make an official decision. Without clarification from OSHA on how its safety regulations on fuel-dispensing equipment should be applied to fuel retailers selling intermediate ethanol blends, the retail fuel industry faces uncertainty in how it can provide such blends to consumers while meeting OSHA safety regulations. In the case of UST systems, fuel retailers can purchase new equipment—certified by UL or the equipment manufacturer for use with intermediate ethanol blends—to meet EPA regulations for compatibility.
However, many existing UST systems may not be fully compatible with intermediate blends, and inadequate records may make it difficult for many retailers to verify the compatibility of their UST systems. Due to these concerns, and in light of the recent DOE-sponsored research, EPA is in the process of issuing guidance to clarify how its UST regulations apply to the use of intermediate blends. While DOE is conducting studies on the compatibility of UST materials with intermediate blends, and while EPA plans to conduct a study limited to experts’ views on the subject, EPA officials have acknowledged that additional research, including research on the suitability of specific UST components with intermediate blends, will be needed to facilitate a transition to storing intermediate ethanol blends. Without this effort, the retail fuel industry faces uncertainty in how it can provide intermediate blends to consumers. We are making the following two recommendations:

To reduce uncertainty about the applicability of federal safety regulations, we recommend that the Secretary of Labor direct the Assistant Secretary for Occupational Safety and Health to issue guidance clarifying how OSHA’s safety regulations on fuel-dispensing equipment should be applied to fuel retailers selling intermediate ethanol blends.

To reduce uncertainty about the potential environmental impacts of storing intermediate ethanol blends at retail fueling locations, we recommend that the Administrator of EPA determine what additional research, such as research on the suitability of specific UST components, is necessary to facilitate a transition to intermediate ethanol blends, and work with other federal agencies to develop a plan to undertake such research.

We provided copies of our draft report to EPA, the Department of Labor, DOE, and DOT for comment.
In written comments, EPA generally agreed with the information and findings but expressed concern about our recommendation (as worded in the draft report). Specifically, EPA stated that while it believed a targeted approach to conducting additional research will be important to accommodate the move to higher ethanol blends, there will always be uncertainty concerning the compatibility of legacy UST equipment with intermediate ethanol blends given the multitude of factors involved (e.g., the age and prior use of UST equipment, and the number of UST system components). EPA stated that it planned to continue to work with other federal agencies and stakeholders to assist tank owners in safely transitioning to new fuels, and that additional research may be necessary to facilitate that transition. We agree with this characterization of the issue and have revised the draft recommendation to reflect EPA’s suggestions. In addition, in written comments, the Department of Labor concurred with our findings and our recommendation. EPA’s written comments are reprinted in appendix III, and the Department of Labor’s written comments are reprinted in appendix IV. EPA and the Department of Labor also provided technical clarifications, which we incorporated as appropriate. DOE and DOT did not provide formal written comments but provided technical clarifications, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of the report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Administrator of EPA; Secretaries of Energy, Transportation, and Labor; and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To determine the challenges associated with transporting additional volumes of ethanol to wholesale markets to meet Renewable Fuel Standard (RFS) requirements, we interviewed relevant government, industry, academic, and research officials. We also reviewed relevant government reports and studies, industry reports, and academic and research literature. In particular, we asked a nonprobability sample of knowledgeable stakeholders, among other things, to discuss the challenges, if any, associated with transporting additional volumes of ethanol to wholesale markets. We also asked these stakeholders to identify key studies and other knowledgeable stakeholders on this topic. We selected these stakeholders using a “snowball sampling” technique, whereby each stakeholder we interviewed identified additional stakeholders and stakeholder organizations for us to contact. Specifically, based, in part, on our recent work, we first interviewed stakeholders from the Environmental Protection Agency (EPA); the Departments of Agriculture (USDA), Energy (DOE), and Transportation (DOT); the Renewable Fuels Association; the American Petroleum Institute; the Alliance of Automobile Manufacturers; the Association of Oil Pipe Lines; and the Outdoor Power Equipment Institute. We then used feedback from these interviews to identify additional stakeholders to interview. 
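The referral-driven interview process described above amounts to a breadth-first traversal of a referral graph, which can be sketched as follows. The sketch is illustrative only; the stakeholder names and referral mapping below are hypothetical and are not drawn from our interview records.

```python
from collections import deque

def snowball_sample(seeds, get_referrals, max_interviews):
    """Illustrative snowball sampling: start with seed stakeholders,
    then interview the new contacts each interview identifies."""
    interviewed = []
    seen = set(seeds)
    queue = deque(seeds)
    while queue and len(interviewed) < max_interviews:
        stakeholder = queue.popleft()
        interviewed.append(stakeholder)               # conduct the interview
        for referral in get_referrals(stakeholder):   # names suggested during it
            if referral not in seen:                  # avoid duplicate contacts
                seen.add(referral)
                queue.append(referral)
    return interviewed

# Hypothetical referral graph, for illustration only.
referrals = {
    "EPA": ["DOE", "Renewable Fuels Association"],
    "DOE": ["NREL", "ORNL"],
    "Renewable Fuels Association": ["Growth Energy"],
}
sample = snowball_sample(["EPA"], lambda s: referrals.get(s, []), max_interviews=10)
# The sample grows outward from the seed until referrals stop yielding new names.
```

Sampling ends when referrals stop producing new names or the interview cap is reached; because the resulting sample depends on whom the seeds refer, it is a nonprobability sample, as noted above.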
Over the course of our work, we interviewed officials from the following federal agencies: DOE Office of the Biomass Program, DOE Office of Vehicle Technologies Program, DOT Research and Innovative Technology Administration, DOT Pipeline and Hazardous Materials Safety Administration, DOT Federal Railroad Administration, DOT Federal Motor Carrier Safety Administration, DOT Maritime Administration, EPA Office of Research and Development, EPA Office of Solid Waste and Emergency Response, EPA Office of Transportation and Air Quality, USDA Agricultural Research Service, and USDA Economic Research Service. We also interviewed state officials from the Minnesota State Fire Marshal Division and the Office of North Carolina State Fire Marshal. We interviewed industry representatives from the following organizations: the American Petroleum Institute, the Association of American Railroads, the Association of Oil Pipe Lines, Growth Energy, Independent Fuel Terminal Operators Association, Kinder Morgan, the National Petrochemical and Refiners Association, the National Tank Truck Carriers, American Trucking Associations, and the Renewable Fuels Association. We also made several attempts to speak with representatives from an industry association representing barge operators but were not able to schedule an interview during the time frame of our audit. Finally, we interviewed academic and research stakeholders from Carnegie Mellon University, the Energy Policy Research Foundation, the James A. Baker III Institute for Public Policy of Rice University, the Pipeline Research Council International, and TRC Energy Services. During these interviews, knowledgeable stakeholders identified a number of studies related to our work. 
Of these studies, we identified the following three studies as being directly relevant to our scope of analysis: (1) the National Commission on Energy Policy’s Task Force on Biofuels Infrastructure, (2) EPA’s Renewable Fuel Standard Program (RFS2) Regulatory Impact Analysis, and (3) DOE’s Report to Congress: Dedicated Ethanol Pipeline Feasibility Study. We examined these three studies and determined that they are sufficiently reliable for our purposes based on interviews with contributors to these studies, comparisons of estimates with other sources, and checking selected calculations. To determine the challenges associated with selling intermediate ethanol blends at the retail level, we reviewed relevant presentations, analyses, reports, and other documents from various federal and state agencies, federal research laboratories, and industry associations, including the American Petroleum Institute and the National Association of Convenience Stores. We also selected a nonprobability sample of knowledgeable stakeholders to interview using the same “snowball sampling” technique described for our first objective. In particular, we asked these stakeholders, among other things, to discuss the challenges, if any, associated with selling intermediate ethanol blends at the retail level. We also asked these stakeholders to identify key studies and other knowledgeable stakeholders on this topic. 
Over the course of our work, we interviewed officials from the following federal laboratories and agencies: DOE National Renewable Energy Laboratory (NREL), DOE Oak Ridge National Laboratory (ORNL), DOE Office of the Biomass Program, DOE Office of Vehicle Technologies Program, EPA Office of Research and Development, EPA Office of Transportation and Air Quality, EPA Office of Underground Storage Tanks, the Department of Labor’s Occupational Safety and Health Administration, the National Institute of Standards and Technology, USDA Agricultural Research Service, and USDA Economic Research Service. We also interviewed state officials from the California Air Resources Board, the Minnesota State Fire Marshal Division, Northeast States for Coordinated Air Use Management, and the Office of North Carolina State Fire Marshal. We interviewed representatives from the following industry associations: Growth Energy, the Renewable Fuels Association, the American Petroleum Institute, the National Association of Convenience Stores, the Society of Independent Gasoline Marketers of America, the National Association of Truck Stop Operators, the Petroleum Marketers Association of America, and the National Petrochemical and Refiners Association. Finally, we interviewed stakeholders from the following research and standards development organizations: ASTM International, Sierra Research, Inc., and Underwriters Laboratories (UL). We also conducted site visits to the research centers responsible for coordinating federal studies on the effects of intermediate ethanol blends on materials and components used in retail fuel storage and dispensing equipment. Specifically, we visited NREL facilities in Golden, Colorado; and ORNL facilities near Knoxville, Tennessee. During these site visits, we interviewed researchers conducting studies on the effects of intermediate ethanol blends on materials and components used in retail fuel-storage and -dispensing equipment. 
We asked these researchers to discuss available test results and the status of their testing efforts for these studies. We also toured some of the research facilities where testing was being conducted for these studies. To examine research by federal agencies into the effects of intermediate ethanol blends on the nation’s automobiles and nonroad engines, we reviewed relevant presentations, analyses, reports, and other documents from various federal and state agencies; NREL; ORNL; and industry associations, including the American Coalition for Ethanol, the National Marine Manufacturers Association, and the Outdoor Power Equipment Institute. In addition, we reviewed relevant studies and reports from academic groups and private research organizations, including the Coordinating Research Council, Inc.; Minnesota State University, Mankato; and the Rochester Institute of Technology. We also selected a nonprobability sample of knowledgeable stakeholders to interview using the same “snowball sampling” technique described for our first objective. In particular, we asked these stakeholders, among other things, to identify research by federal agencies and others into the effects of intermediate ethanol blends on the nation’s automobiles and nonroad engines. Over the course of our work, we interviewed officials from the following federal agencies and laboratories: DOE Office of Vehicle Technologies Program, NREL, ORNL, EPA Office of Research and Development, and EPA Office of Transportation and Air Quality. We also interviewed state officials from the California Air Resources Board and Northeast States for Coordinated Air Use Management.
We interviewed representatives from the following industry associations: the American Petroleum Institute, Growth Energy, the Renewable Fuels Association, the Alliance of Automobile Manufacturers, the Association of International Automobile Manufacturers, Inc., the Outdoor Power Equipment Institute, the Engine Manufacturers Association, the National Marine Manufacturers Association, and the International Snowmobile Manufacturers Association. Finally, we interviewed stakeholders from the following academic and research organizations: the Coordinating Research Council, Inc.; the Rochester Institute of Technology; and Minnesota State University, Mankato. We also conducted site visits to the research centers responsible for coordinating federal studies on the effects of intermediate ethanol blends on automobiles and nonroad engines. Specifically, we visited NREL facilities in Golden, Colorado; and ORNL facilities near Knoxville, Tennessee. We also visited a private research facility in Aurora, Colorado, where some of the automobile testing for federal studies has taken place. During these site visits, we interviewed researchers conducting studies on the effects of intermediate ethanol blends on automobiles and nonroad engines. We asked these researchers to discuss available test results and the status of their testing efforts for these studies. We also toured some of the research facilities where testing was being conducted for these studies. Due to ongoing litigation over EPA’s decision to allow ethanol blends with 15 percent ethanol (E15) for use with certain automobiles, we did not evaluate any research by federal agencies and others into the effects of intermediate ethanol blends on automobiles and nonroad engines. We conducted this performance audit from April 2010 to June 2011, in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Nonfederal organizations are conducting research on the effects of intermediate ethanol blends in automobiles. For example, beyond the research it is conducting in coordination with DOE and EPA, the Coordinating Research Council, Inc. (CRC) has both ongoing and completed research projects on a range of related topics, including evaporative and exhaust emissions for various intermediate ethanol blends. A CRC representative told us that CRC expects to complete these projects by early 2012. Based on this research, CRC has published 10 reports as of March 2011 (see table 3). Two academic organizations have also conducted research on intermediate ethanol blends in automobiles. Specifically, the Minnesota Center for Automotive Research at Minnesota State University, Mankato, has issued five studies examining the effects of ethanol blends containing 20 percent ethanol (E20) on fuel system components. These studies received funding from the Minnesota Department of Agriculture and appear on the department’s Web site. In addition, the Center for Integrated Manufacturing Studies at Rochester Institute of Technology in New York has studied the effects of E20 on automobile exhaust, drivability, and maintenance, with funding from DOT. To date, the center has published one report and expects to publish at least two more later in 2011, along with a final summary report to DOT. In addition to the contact named above, Tim Minelli (Assistant Director), Nirmal Chaudhury, Cindy Gilbert, Chad M. Gorman, Jason Holliday, Michael Kendix, Ben Shouse, Barbara Timmerman, and Jack Wang made key contributions to this report.
U.S. transportation relies largely on oil for fuel. Biofuels can be an alternative to oil and are produced from renewable sources, like corn. In 2005, Congress created the Renewable Fuel Standard (RFS), which, as expanded in 2007, requires transportation fuel to contain 36 billion gallons of biofuels by 2022. The most common U.S. biofuel is ethanol, typically produced from corn in the Midwest, transported by rail, and blended with gasoline as E10 (10 percent ethanol). Use of intermediate blends, such as E15 (15 percent ethanol), would increase the amount of ethanol used in transportation fuel to meet the RFS. The Environmental Protection Agency (EPA) recently allowed E15 for use with certain automobiles. GAO was asked to examine (1) challenges, if any, to transporting additional ethanol to meet the RFS, (2) challenges, if any, to selling intermediate blends, and (3) studies on the effects of intermediate blends in automobiles and nonroad engines. GAO examined government, industry, and academic reports; interviewed Department of Energy (DOE), EPA, and other government and industry officials; and visited research centers. According to government and industry officials, the nation's existing rail, truck, and barge infrastructure should be able to transport an additional 2.4 billion gallons of ethanol to wholesale markets by 2015--enough to meet RFS requirements. Later in the decade, however, a number of challenges and costs are projected for transporting additional volumes of ethanol to wholesale markets to meet peak RFS requirements. According to EPA estimates, if an additional 9.4 billion gallons of ethanol are consumed domestically by 2022, several billion dollars would be needed to upgrade rail, truck, and barge infrastructure to transport ethanol to wholesale markets. GAO identified three key challenges to the retail sale of intermediate blends: (1) Compatibility.
Federally sponsored research indicates that intermediate blends may degrade or damage some materials used in existing underground storage tank (UST) systems and dispensing equipment, potentially causing leaks. However, important gaps exist in current research efforts--none of the planned or ongoing studies on UST systems will test actual components and equipment, such as valves and tanks. While EPA officials have stated that additional research will be needed to more fully understand the effects of intermediate blends on UST systems, no such research is currently planned. (2) Cost. Due to concerns over compatibility, new storage and dispensing equipment may be needed to sell intermediate blends at retail outlets. The cost of installing a single-tank UST system compatible with intermediate blends is more than $100,000. In addition, the cost of installing a single compatible fuel dispenser is over $20,000. (3) Liability. Since EPA has only allowed E15 for use in model year 2001 and newer automobiles, many fuel retailers are concerned about potential liability issues if consumers misfuel their older automobiles or nonroad engines with E15. Among other things, EPA has issued a proposed rule on labeling to mitigate misfueling. DOE, EPA, and a nonfederal organization have provided about $51 million in funding for ten studies on the effects of intermediate blends on automobiles and nonroad engines--such as weed trimmers, generators, marine engines, and snowmobiles--including effects on performance, emissions, and durability. Of these studies, five will not be completed until later in 2011. Results from a completed study indicate that such blends reduce a vehicle's fuel economy (i.e., fewer miles per gallon) and may cause older automobiles to experience higher emissions of some pollutants and higher catalyst temperatures. 
Results from another completed study indicate that such blends may cause some nonroad engines to run at higher temperatures and experience unintentional clutch engagement, which could pose safety hazards. GAO recommends, among other things, that EPA determine what additional research is needed on the effects of intermediate blends on UST systems. EPA agreed with the recommendation after GAO revised it to clarify EPA's planned approach.
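The blend-level arithmetic behind these figures can be shown with a rough calculation. The annual gasoline pool volume used below is an assumed round number chosen for illustration; it is not a figure from this report.

```python
def ethanol_absorbed(gasoline_pool_gallons, blend_fraction):
    """Gallons of ethanol the gasoline pool can absorb at a given blend level."""
    return gasoline_pool_gallons * blend_fraction

RFS_TARGET_2022 = 36e9   # gallons of biofuels required by 2022 under the RFS
pool = 140e9             # assumed annual gasoline pool, for illustration only

e10 = ethanol_absorbed(pool, 0.10)   # 14 billion gallons at E10
e15 = ethanol_absorbed(pool, 0.15)   # 21 billion gallons at E15

# Even if every gallon of gasoline were sold as E15, a pool of this size would
# absorb well under the 36-billion-gallon target, which is why higher blends
# and non-ethanol biofuels also figure into meeting the standard.
shortfall_at_e15 = RFS_TARGET_2022 - e15
```

Under these assumed volumes, moving the entire pool from E10 to E15 adds roughly 7 billion gallons of ethanol demand, consistent with the report's point that intermediate blends would increase the amount of ethanol used in transportation fuel.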
The intelligence, surveillance, reconnaissance, and strike capabilities provided by unmanned aircraft systems have proven to be a key asset in accomplishing combat missions in the Middle East. DOD is planning to expand unmanned aircraft capabilities to include persistent ground attack, electronic warfare, suppression of enemy air defenses, cargo airlift, and other missions. Unmanned aircraft systems generally consist of (1) multiple aircraft, which can be expendable or recoverable and can carry lethal or nonlethal payloads; (2) a flight control station; (3) information and retrieval or processing stations; and (4) in some cases, wheeled land vehicles that carry launch and recovery platforms. Unmanned aircraft fall into one of three classes: small, tactical, and theater (see table 1). From 2002 through 2008, the total number of unmanned aircraft in DOD’s inventory increased from 167 to over 6,000. Most of the increase has been in small aircraft, with the more complex and expensive tactical and theater-level aircraft increasing from 127 to 521. Four major systems—Global Hawk, Predator, Reaper, and Shadow—have been deployed and used successfully in combat. Given this success, warfighters have demanded more systems and in many cases enhanced capabilities. However, we recently reported that some unmanned aircraft were not designed to meet joint service requirements or interoperability communications standards and, as a result, cannot easily exchange data, even within the same military service. Additionally, certain electromagnetic spectrum frequencies that are required for wireless communications are congested because a large number of unmanned aircraft and other weapons or communications systems use them simultaneously. 
Furthermore, DOD has been unable to fully optimize the use of its unmanned aircraft in combat operations because it lacks an approach to allocating and tasking them that considers the availability of all assets in determining how best to meet warfighter needs. To manage the increased demand for unmanned aircraft systems and encourage collaboration among the military services, the department has created the Office of Unmanned Warfare, the Unmanned Aircraft Systems Task Force, and other entities. In addition, DOD has published the Unmanned Systems Roadmap (Roadmap) that provides a framework for the future development of unmanned systems and related technologies. The Roadmap states that there is the potential for an unprecedented level of collaboration to meet capability needs and reduce acquisition costs by requiring greater commonality among the military services’ unmanned systems. We have reported that taking an open systems approach and designing systems with common subsystems and components can reduce both production and life cycle costs as well as improve interoperability among systems. For maximum benefit, commonality should be incorporated into the design of a system when requirements are being established. Unmanned aircraft systems can potentially achieve commonality in design and development, ranging from a complete system to a subsystem or component, as well as commonality in production facilities, tooling, and personnel. Despite the proven success of unmanned aircraft on the battlefield and the growing demand for them, these acquisitions continue to incur cost and schedule growth (see fig. 1). The cumulative development cost for the 10 programs we reviewed increased by over $3 billion, or 37 percent, from initial estimates. While 3 of the 10 programs had little or no development cost growth and 1 had a cost reduction, 6 experienced substantial growth ranging from 60 percent to 264 percent. 
In large part, this cost growth was the result of changes in program requirements and system designs after initiating development. Total procurement funding requirements have grown in the past because of increased quantities; however, many of the programs have also experienced growth in procurement unit costs. Finally, a number of these programs have experienced problems in testing and in performance that required additional development that contributed to cost growth and schedule delays (see app. II for more detailed information about each program). In most cases, development cost growth was the result of beginning system development with unclear or poorly defined requirements, immature technologies, and unstable designs—problems we have frequently found in other major acquisition programs. The Global Hawk program is a good example. In 2001, the Air Force began the Global Hawk program based on knowledge gained from a demonstration program and planned to incrementally integrate more advanced technologies over time. However, within a year, the Air Force fundamentally restructured and accelerated the program to pursue a larger and unproven airframe with multimission capability relying on immature technologies. The final design of the new airframe required more substantial changes than expected. Ultimately, frequent and substantive engineering changes drove development costs up nearly threefold. While BAMS has reported no cost growth, the program is just 1 year into its 7-year development, and the Navy plans to spend over $3 billion in development to modify the airframe—which is the existing Global Hawk airframe—and integrate payloads and other key equipment, modify ground stations, and purchase two developmental and three low rate production aircraft. BAMS program officials told us that they anticipate that the bulk of the development cost will result from modifying the size and shape of the existing radar payload to fit the Global Hawk airframe. 
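The cost-growth percentages used in this section follow from a simple calculation over the initial and current estimates. In the sketch below, the roughly $8.1 billion baseline is back-calculated from the figures above (an increase of over $3 billion equaling 37 percent); it is not stated directly in this report, and the single-program figures are hypothetical.

```python
def growth_pct(initial_estimate, current_estimate):
    """Percent growth of a cost estimate relative to its initial baseline."""
    return (current_estimate - initial_estimate) / initial_estimate * 100

# Implied cumulative figures for the 10 programs (back-calculated):
# a ~$3 billion increase on a ~$8.1 billion baseline is ~37 percent growth.
baseline = 8.1e9
current = baseline + 3.0e9
overall_growth = growth_pct(baseline, current)   # ~37 percent

# A hypothetical program whose estimate rises from $1.0 billion to
# $3.64 billion illustrates the top of the reported 60-264 percent range.
worst_case = growth_pct(1.0e9, 3.64e9)           # 264 percent
```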
Historically, similar weapons development efforts have had difficulty managing risk. Estimated development costs for MP-RTIP decreased 23 percent in large part because of a significant reduction in requirements caused by the termination of another aircraft program for which the radar was being developed. Procurement costs also increased for six of the seven systems that reported procurement cost data, a large portion of which is due to the planned procurement of additional aircraft (see table 2). For example, the Air Force planned to procure an additional 272 Predators, and the Army planned to procure an additional 84 Sky Warriors. As a result, unit costs for the Predator and Sky Warrior decreased by 41 percent and 9 percent, respectively. However, Reaper and Shadow had unit cost growth despite increased quantities. Reaper’s unit costs increased in part because requirements for missiles and a digital electronic engine control were added—which resulted in design changes and increased production costs. Unit cost increases in the Shadow program were largely the result of upgrades to the airframe that were needed to accommodate the size, weight, and power requirements for integrating a congressionally mandated data link onto the aircraft. The Army is also retrofitting fielded systems with capabilities that it had initially deferred, such as a heavy fuel engine. The procurement unit cost for Global Hawk increased the most, in large part because the Air Force not only increased the program’s requirements but also reduced the number of aircraft it intended to purchase. Four programs have also experienced delays in achieving initial operational capability by 1 to almost 4 years (see table 3). In some cases, program delays have been the result of expediting limited capability to the warfighter.
For example, the production decision for the Sky Warrior was delayed by 2 years, and the Army now expects to deliver initial operational capability to the warfighter almost 4 years later than originally planned. Similarly, initial operational testing to prove the larger Global Hawk airframe works as intended has been delayed by nearly 3 years. Delays for these two programs, BAMS, and Fire Scout average more than 27 months—6 months longer than the average delays we found in our recent assessment of other major weapons acquisitions. In contrast, the Reaper program expects to achieve initial operational capability 4 months earlier than originally planned—in large part because the Air Force expedited aircraft production to meet wartime demands. While Global Hawk, Predator, Reaper, and Shadow have been deployed with notable successes in theater—as well as identified lessons learned—rushing to field these capabilities resulted in a number of performance shortfalls and in some cases ultimately delayed meeting requirements. For example, Predator—the oldest program in our sample—directly transitioned from a successful technology demonstration program into production, skipping the development process entirely. Because little emphasis was placed on testing, performance requirements, and producibility, Predator experienced numerous problems when it was initially produced and deployed, such as unreliable communications and poor targeting accuracy. Given the importance of supporting combat operations, Global Hawk demonstrators and early production aircraft were also quickly placed into operational service. Program officials noted that as a result, the availability of test resources and time for testing were limited, which delayed the operational assessment of the original aircraft model by 3 years.
Similarly, in February 2009, the Air Force reported that initial operational testing for the larger, more capable Global Hawk aircraft and the program’s production readiness review had schedule breaches. Air Force officials cite the high level of concurrency between development, production, and testing; poor contractor performance; developmental and technical problems; system failures; and bad weather as key reasons for the most recent schedule breach. In a recent letter to the Global Hawk contractor, the Air Force’s Chief Acquisition Executive noted that unless the program’s problems are resolved quickly, the Air Force may have to consider deferring authorization of future production lots, terminating future modernization efforts, and canceling development and production of the aircraft that are planned to carry the MP-RTIP radar. Program officials said they are developing a plan to address the schedule breaches. Consistent with DOD’s framework for acquiring unmanned systems, several of the tactical and theater-level unmanned aircraft acquisition programs we reviewed have identified areas of commonality to leverage resources and gain efficiencies. However, others share little in common and have missed opportunities to achieve commonality and efficiencies. Even those programs that have achieved some commonality may have additional opportunities to leverage resources. Table 4 compares the levels of commonality for three of our case examples. In assessing options for replacing an aging tactical unmanned aircraft system, the Marine Corps determined that the Army’s Shadow system could meet its requirements for reconnaissance, surveillance, and target acquisition capabilities without any service-unique modifications. An official from DOD’s Office of Unmanned Warfare emphasized that the Marine Corps believed that Shadow represented a “100 percent” solution.
The Marine Corps also found that it could use the Army’s ground control station to pilot the Shadow aircraft as well as other Marine Corps unmanned aircraft. A memorandum of agreement was established in July 2007 to articulate how the Marine Corps and the Army would coordinate to acquire Shadow systems. The agreement details the management structure, lines of accountability, and funding arrangements between the two services, and establishes that the Marine Corps will procure systems directly off the Army contract. While formal decisions to pursue common systems were made at the service executive level, Army officials told us that collaboration initiated at the program office level was the primary driver in achieving commonality. By forgoing any service-unique modifications in order to achieve a high level of commonality, the Marine Corps was able to avoid the costs of developing the Shadow. Those costs were borne by the Army and totaled over $180 million. The Marine Corps plans to spend almost $9 million from fiscal years 2009 through 2015 to support development of additional capabilities for the Shadow, to include a backup takeoff mechanism for the automatic takeoff and landing system. Additionally, the Marine Corps and Army are likely to realize some benefits in supporting and maintaining the systems because the components are interchangeable. According to an official at the Navy, the Marine Corps has been able to realize savings or cost avoidance in other areas such as administration, contracting, and testing, although quantitative data on these savings were not available. The Army’s Shadow program office agreed that commonality has allowed the two services to realize economies of scale while meeting each service’s needs. Maintaining a high level of commonality in the Shadow program will require continued commitment from the services and careful management. 
Specifically, as the Army and Marine Corps explore ways to add additional capabilities to Shadow, the services will need to continue to collaborate to maximize efficiencies. For example, in order to maintain commonality with the Army, the Marine Corps is spending money to add capabilities that meet Shadow requirements established by the Army. Likewise, the Army is interested in adopting capabilities that the Marine Corps developed for Shadow. Army officials told us that the Marine Corps is exploring ways to retrofit the Shadow so that it can carry a weapons payload. They stated that although the Army does not have a requirement for a weapons payload and has no plans to spend money on its development, the Army would be interested in acquiring this capability. The Navy BAMS and Air Force Global Hawk programs have achieved some commonality between their unmanned aircraft systems— specifically, the airframes for the two systems are common. However, the payload and subsystem requirements differ; and while some BAMS ground station requirements are common with those of the Global Hawk, the BAMS contractor noted that the Navy also has some unique requirements. To meet its requirement for persistent maritime intelligence, surveillance, and reconnaissance capability, the Navy awarded the development contract to the Global Hawk contractor, which had proposed a variant of the Global Hawk airframe. Given the commonality between the two airframes, the Navy expects to avoid some development costs and gain efficiencies in production. Since the development contract award, the Navy and Air Force have worked together to identify commonalities to gain additional efficiencies where possible. According to a Navy official, one of the goals of this partnership is for the BAMS program to benefit from lessons learned by the Global Hawk program and thereby avoid the types of problems Global Hawk experienced during development. 
Officials from the Defense Contract Management Agency emphasized the importance of using these lessons. The BAMS payload and subsystem requirements differ from those of Global Hawk. However, the Navy has identified opportunities to achieve commonality with other aircraft programs rather than developing a service-unique solution. For example, the Navy plans to equip BAMS with the same electro-optical and infrared sensor used on the Air Force’s Reaper unmanned aircraft. In addition, the Navy plans to equip BAMS with a maritime search radar based on radars used on the Air Force’s F-16 and F-22 aircraft. BAMS will also rely on communications equipment that has been fielded on multiple weapon systems. Furthermore, BAMS will use an open systems approach in developing its payloads. A BAMS program official told us that the Navy expects to gain efficiencies in development, operations, training, and manpower. Even with these areas of commonality, the Navy anticipates spending more than $3 billion on development, a substantial portion of which will be used to modify the airframe and ground stations and to integrate payloads, including the radar, to meet Navy-specific needs. According to a program official, the radar technology is mature because it has proven to be functional on fighter aircraft. However, the radar will require modifications to both size and shape before it can be integrated onto the airframe; these modifications are expected to constitute a large portion of the BAMS development cost. Although OSD certified that all BAMS critical technologies were mature at the start of development, OSD officials recently told us that they have some concerns about the radar’s level of maturity. The Navy also plans to upgrade existing Global Hawk ground stations, in large part to allow analysts to view and assess information more quickly. This ground station upgrade will require both hardware and software development. Greater efficiencies may also be possible in production. 
While production of the first two BAMS aircraft will occur at the same California facility where Global Hawk is currently produced, the remaining BAMS aircraft are expected to be produced at another facility in Florida. We believe that this approach has the potential to create duplication in production by having two facilities staffed and equipped to conduct essentially the same work. However, contractor officials point out that while the California facility has the capacity to accommodate BAMS production, having two separate facilities would minimize the impact of potential work surges. They also note that using the California facility for initial BAMS production will give them time to gain knowledge that could help get the Florida facility up and running. Yet neither contractor nor Navy officials provided an analysis to justify using the Florida facility. According to an official with the BAMS program office, the Navy considers this a contractor business decision, and according to contractor officials, the official analysis will not be done for several years. In the meantime, it is unclear whether the benefits of a second production facility outweigh the costs—such as additional tooling and personnel. OSD’s efforts to consolidate and achieve greater commonality between the Army Sky Warrior and the Air Force Predator have generally not been successful. In 2001 the Army began defining requirements for a replacement to the aging Hunter unmanned aircraft system. According to the Army, the limited number of unmanned aircraft in DOD’s inventory and its lack of direct control over these assets drove its decision to pursue the development of Sky Warrior. The aircraft was originally intended to satisfy the Army’s requirement for wide-area, near real-time reconnaissance, surveillance, and target acquisition capability. 
However, both the Air Force and the Joint Staff offices responsible for reviewing Sky Warrior's requirements and acquisition documentation raised concerns about duplicating existing capability—specifically, capability provided by Predator. Nevertheless, the program received approval to forgo an analysis of alternatives that could have determined whether existing capabilities would meet its requirements. The Army noted that such an analysis was not needed and not worth the cost and effort. Instead, it conducted a source selection competition and began the Sky Warrior development program, citing battlefield commanders' urgent need for this capability. In 2005, the Army awarded the Sky Warrior development contract to the same contractor working with the Air Force to develop and produce Predators and Reapers. As a variant of Predator, Sky Warrior is now being assembled in the same facility. In 2006, the Army and Air Force signed a memorandum of understanding to work together to identify complementary requirements for the Sky Warrior and Predator programs. Despite this memorandum, limited progress was made, and in 2007, the Deputy Secretary of Defense directed the two services to combine their respective programs into a single acquisition program. The services subsequently signed a formal memorandum of agreement. However, the services have maintained separate program offices and funding for their respective programs, and the two aircraft still have little in common. Sky Warrior is larger, longer, and heavier; has a wider wingspan; and has significantly more payload capacity than Predator. The Air Force is also acquiring the Reaper—formerly Predator B—which is even larger and more capable than both the Sky Warrior and Predator. However, all three systems have similar missions to seek, target, and attack enemy forces.
Although the ground control station the Army is developing for Sky Warrior is expected to be used to control other Army unmanned aircraft, it will not be common with the Predator and Reaper ground control station used by the Air Force. According to Army officials, however, they are currently using legacy ground control stations that are essentially the same as the Air Force’s. The Army officials further noted that the Sky Warrior systems that the Army plans to deploy this summer will each be deployed with both an Army-unique ground control station and a legacy ground control station, to provide backup takeoff and landing capability in case the automatic takeoff and landing technology on Sky Warrior encounters problems. The Army and Air Force are pursuing service-specific payloads and subsystems for these aircraft. For example, the services are pursuing separate solutions to meet similar requirements for a signals intelligence capability. Specifically, the Army is developing a unique signals intelligence payload for Sky Warrior, while the Air Force is developing the Airborne Signals Intelligence Payload for Predator, Reaper, and Global Hawk. Further, the Army is developing its own electro-optical and infrared sensor for Sky Warrior—and potentially other Army aviation platforms—and awarded an $11 million sensor development contract to the same contractor producing the Predator’s electro-optical and infrared sensor. While several of the unmanned aircraft programs we examined have achieved commonality at the airframe level, factors such as service-driven acquisition processes and ineffective collaboration have resulted in service-unique subsystems, payloads, and ground control stations. Despite DOD’s efforts to emphasize a more joint approach to identifying and prioritizing warfighting needs and to encourage commonality among the programs, the services continue to drive requirements and make independent resource allocation decisions on their respective platforms. 
DOD officials have not quantified the potential costs or benefits of pursuing various alternatives, including systems with commonalities. With some notable exceptions, the services have been reluctant to collaborate and efforts to do so have produced mixed results. However, to maximize acquisition resources and meet increased demand, Congress and DOD have increasingly pushed for more commonality among unmanned aircraft systems. In 2003, DOD implemented a new requirements generation system intended to identify warfighter needs from a joint, departmentwide perspective—not from an individual service or program perspective. This process, referred to as the Joint Capabilities and Integration Development System (JCIDS), provides a framework for reviewing and validating capability needs. However, as we reported in 2008, requirements continue to be driven primarily by the individual services with little involvement from the combatant commands, which are largely responsible for planning and carrying out military operations. In reviewing JCIDS documentation related to new capability proposals, we found that most—nearly 70 percent—were sponsored by the military services with little involvement from the joint community. By continuing to rely on capability needs defined primarily by the services individually, DOD may be losing opportunities to improve joint warfighting capabilities and to reduce duplication of capabilities. In a separate report issued that same year, we also noted that DOD did not have key management tools needed to ensure that its intelligence, surveillance, and reconnaissance investments reflected enterprisewide priorities and strategic goals. We further noted that DOD lacked assurance that its investments in intelligence, surveillance, and reconnaissance capabilities—including those in unmanned aircraft—were providing solutions that best minimize inefficiency and redundancy. 
For the unmanned aircraft systems we reviewed, the services established requirements that were often so specific that they demanded service-unique solutions—thereby precluding opportunities for commonality. Yet none of the programs were able to provide us quantitative analyses to justify pursuing their unique solutions or to show why common solutions would not work. In some cases, service-unique requirements appear to be necessary. For example, the Navy requires BAMS for maritime missions, which are distinct from the land missions of its counterpart, Global Hawk. Specifically, radar functionality depends on the operational environment—that is, water, a moving surface, versus land, a relatively static surface. Distinct radar capabilities are required to create images of sufficient quality to recognize a target in these unique environments. The Navy is also modifying the Global Hawk design to accommodate BAMS's altitude agility requirements. Unlike Global Hawk, which is designed to fly continuously at high altitudes, BAMS is intended to fly at low, medium, and high altitudes during a mission. Consequently, the wings on the airframe need to be structurally reinforced to handle the loads and wind gusts associated with frequent changes in altitude. Altitude changes also make BAMS more susceptible to icing conditions, and therefore require a de-icing capability for the wing, tail, and engine. Such differences in requirements have limited commonality in BAMS and Global Hawk beyond the basic airframe. While some of the differences between Global Hawk and BAMS requirements appear to be necessary, an OSD official we spoke with noted that there is concern that other distinctions in requirements that the services cited for other systems could lead to duplication and inefficiencies.
For example, an Army official cited the need to develop an electro-optical and infrared sensor for the Army's Sky Warrior with capabilities distinct from those of the sensor the Air Force uses on the Predator. The Army noted that it does not need specific sensor capabilities that the Air Force is pursuing, such as high-definition video, which could require costly upgrades to existing Army systems. Currently, however, Predator's sensor does not use high-definition video and thus could be employed by the Sky Warrior system. Concerned that the government is paying a premium to build two separate sensors with essentially the same capability—the two systems are 80 percent common—OSD directed the services to evaluate the feasibility of and potential savings associated with purchasing a common sensor. An Army official, however, pointed out that the Army had negotiated a unit cost for its version of the sensor that is nearly $450,000 lower than the unit cost of the Air Force sensor. Similarly, Army and Air Force officials cited the need for unique flight control requirements for Sky Warrior and Predator—"point and click" versus "stick and rudder"—because the Army uses enlisted operators to fly the aircraft, whereas the Air Force uses trained pilots. These different approaches require the services to develop and acquire unique ground control stations as well as other capabilities, such as automatic takeoff and landing capability, that have not been used before, resulting in additional cost and schedule risk. In some cases, the services collaborated to identify common configuration, performance, and support requirements, but ultimately did not maximize efficiencies. For example, the Army and Navy have different data link requirements for their respective variants of Fire Scout, primarily because of the Army's requirement for its Fire Scout to operate within the Future Combat Systems network.
However, the Future Combat Systems has been beset with problems and delays—which may not be resolved until 2015—and as a result, there are eight manufactured Fire Scouts sitting in storage that, according to the Fire Scout contractor, could be equipped with the same data link as the Navy Fire Scout and the Army's Shadow and Sky Warrior systems. Though the services could not agree on a common data link, the Army and Navy settled on common Fire Scout requirements for the air vehicle, engine, radar, navigation, and some core avionics subsystems. The services also agreed to use one contract to procure the airframe. The majority of needs that the military services identify are validated and approved without accounting for the necessary resources to achieve desired capabilities. The funding of proposed programs takes place through a process called the Planning, Programming, Budgeting, and Execution system, which is not synchronized with JCIDS but is similarly service-driven. Within the funding system, each service has the responsibility and authority to prioritize its own budget, which allows it to make independent funding decisions supporting unique requirements. Therefore, once a service concludes that a unique solution is warranted, the service has the authority to budget for that unique solution, to the exclusion of other possible solutions that could achieve greater commonality and efficiency among the services. While DOD collectively reviews the individual service budgets, this review does not occur until the end of the funding process, at which point it is difficult and disruptive to make changes, such as terminating programs. For example, OSD has directed the Army and the Air Force to merge their respective Sky Warrior and Predator programs. However, the services have concluded that continuing separate programs is warranted to meet their individual service needs.
According to Air Force officials, the Air Force does not have a requirement for Sky Warrior, and it is not clear if the system would meet the service's current operational needs. Therefore, the Air Force has moved forward with its plan to end Predator procurement entirely and transition to an all-Reaper fleet. DOD officials noted that the Air Force's future-year budget plans have accordingly eliminated funding for Predator and increased the Reaper budget. OSD was concerned about the implications of this plan from a requirements and acquisition standpoint. Nevertheless, the Air Force will continue to procure its unique Reaper system and the Army will proceed with the development and production of its unique Sky Warrior system. Seven of the 10 programs we reviewed have established memorandums of agreement to foster collaboration and drive the programs toward commonality. However, these agreements generally lacked rigor and did not specify areas of commonality to be pursued. Therefore, it is unclear to what extent these agreements have helped programs leverage resources, particularly considering that little commonality has been achieved. The agreements often included caveats that allowed the services to deviate from the agreement if they determined that service-unique requirements had to be met. In some cases, the agreement was so explicit about service-unique needs and requirements that there was little incentive to pursue common solutions. In contrast, the memorandum of agreement between the Army and the Marine Corps for the Shadow program has specific statements that highlight their intention to meet both services' requirements. For example, the memorandum states that the two services would procure a fully common aircraft off the same contract, assume the same requirements, and use the same documentation.
At the department level, OSD established the Unmanned Aircraft Systems Task Force and the Office of Unmanned Warfare primarily to facilitate collaboration and encourage greater commonality among unmanned aircraft programs. While the two groups act as advisors and have implemented OSD's recommendations regarding areas where further commonality might be achieved—most prominently, for the Sky Warrior and Predator programs—key officials from these groups emphasized to us that they do not have direct decision-making or resource allocation authority. OSD has repeatedly directed the services to collaborate on these two programs, and in recent memos has clearly expressed disapproval of the services' amount and pace of progress in doing so. Despite this direction, the services have continued to pursue unique systems. In response to OSD's most recent direction to merge their service-unique signals intelligence payload efforts into a single acquisition program, the Army and Air Force concluded that continuing their separate programs was warranted, and recommended that OSD direct an objective, independent organization—such as a federally funded research and development center—to conduct a business case analysis to assess the impact of merging the two programs. Table 5 summarizes OSD's directions and the services' responses over the past few years. In section 144 of the National Defense Authorization Act for Fiscal Year 2009, Congress directed that "[t]he Secretary of Defense, in consultation with the Chairman of the Joint Chiefs of Staff, establish a policy and an acquisition strategy for intelligence, surveillance, and reconnaissance payloads and ground stations for manned and unmanned aerial vehicle systems.
The policy and acquisition strategy shall be applicable throughout the Department of Defense and shall achieve integrated research, development, test, and evaluation, and procurement commonality.” The Act further identifies the objectives that Congress expects the policy and acquisition strategy to achieve. Those objectives include, among others, the procurement of common payloads by vehicle class, achieving commonality of ground system architecture by vehicle class, common management of vehicle and payload procurements, ground station interoperability standardization, and common standards for exchanging data and metadata. Finally, DOD was directed to deliver a report containing the policy and acquisition strategy to Congress no later than 120 days after the enactment of the authorization act, which occurred on October 14, 2008. However, as of May 15, 2009, OSD had not issued the report. An OSD official within the Office of Unmanned Warfare told us that the department had requested an extension on the report. In an acquisition decision memorandum issued on February 11, 2009, the Under Secretary of Defense for Acquisition, Technology and Logistics (AT&L) identified the opportunity to adopt a common unmanned aircraft ground control station architecture that supports future capability upgrades through an open system and modular design. The memo notes that adopting a common DOD architecture using a core open architecture model would provide a forum for competition among companies to provide new capabilities. It also states that the military services can be given flexibility to adjust the man-to-machine interfaces for their respective ground control stations while still maintaining commonality on the underlying architecture and computing hardware. In addition, the memo identifies an opportunity to implement common technologies, such as autonomous takeoff and landing, across the military services. 
The military services are directed to work together—and with OSD in one instance—to assess various aspects of ground control station and technology commonality, and to report their findings to OSD. As of May 15, 2009, the services had not yet reported their findings. Similar to OSD's approach to ground control stations, the Air Force Unmanned Aircraft Systems Task Force—which is currently developing a long-term unmanned aircraft plan—expects future unmanned aircraft to be developed as open, modular systems to which new capabilities can be added instead of developing entirely new systems each time a new capability is needed. It anticipates that this open systems approach will allow the Air Force to hold competitions for new payloads that can simply be plugged into the aircraft—or "plug-and-play" payloads. In addition, the Air Force recognizes the need for more joint unmanned aircraft solutions and increased teaming among programs and services. A senior task force official told us that given the limited resources DOD has to work with, it is imperative that the services explore more joint solutions and work together to find commonality—which the official noted must begin in the requirements process. He also noted that DOD should be focused on providing incremental capabilities to the warfighter and upgrading them later as the need arises and the technology matures. He pointed out that for most missions the warfighters do not need an optimal system—a 100 percent solution—they usually only need one or two of the functions the system can provide. DOD is challenged to meet the warfighter's ever-increasing demand for unmanned aircraft systems within available resources. Many of DOD's tactical and theater-level unmanned aircraft acquisition programs have experienced significant cost growth, schedule delays, and performance shortfalls.
DOD recognizes that to more effectively leverage its acquisition resources it must achieve greater commonality among the military services' various unmanned system programs. While the Army and the Marine Corps achieved a high level of commonality in the Shadow program, other programs had less success. In general, the military services continue to establish unique requirements and prioritize resources without fully considering opportunities to achieve greater efficiencies. As a result, commonality has largely been limited to system airframes and in most cases has not been achieved among payloads, subsystems, or ground control stations. An objective, independent examination of DOD's current unmanned aircraft portfolio and the methods for acquiring future unmanned aircraft could go a long way toward ensuring that DOD gets a better return on every dollar it invests in unmanned aircraft. To more effectively leverage resources and increase the efficiency in unmanned aircraft system acquisition programs, we recommend that the Secretary of Defense take the following two actions: Direct a rigorous and comprehensive analysis of the requirements for current unmanned aircraft programs, develop a strategy for making systems and subsystems among those programs more common, and report the findings of this analysis to Congress. At a minimum, this analysis should quantify the costs and benefits of alternative approaches, identify specific actions that need to be taken, and summarize the status of DOD's various ongoing unmanned aircraft-related studies.
Prior to initiating any new unmanned aircraft program, require the military services to identify and document in their acquisition plans and strategies specific areas where commonality can be achieved, take an open systems approach to product development, conduct a quantitative analysis that examines the costs and benefits of various levels of commonality, and establish a collaborative approach and management framework to periodically assess and effectively manage commonality. In written comments on a draft of this report, DOD partially agreed with the first recommendation and agreed with all elements of the second. DOD's comments are reprinted in appendix III. Regarding our first recommendation to conduct a comprehensive analysis of requirements and opportunities for commonality among current unmanned aircraft systems, the department agreed that there is significant cost benefit to leveraging commonality, but noted that the Unmanned Aircraft Systems Task Force had conducted such analyses. Therefore, the department did not agree that a separate comprehensive analysis across all unmanned systems with the specific purpose of identifying opportunities for commonality was needed. Going forward, we believe that the department could benefit from a more comprehensive, quantitative analysis that looks across unmanned aircraft systems and focuses on subsystems, payloads, and ground control stations as well as airframes. The analyses DOD has done to date have been done on a case-by-case basis, and have primarily resulted in airframe commonality. DOD agreed with each element of our second recommendation related to specific actions that the military services should be required to take before initiating new unmanned aircraft programs. The department believes that current requirements and acquisition policies and processes—some of which were recently revised—satisfy the intent of our recommendation.
To ensure that resources are effectively leveraged to gain efficiencies, DOD must ensure the consistent and disciplined implementation of these policies and processes. We are sending copies of this report to the Secretary of Defense, the Secretary of the Army, the Secretary of the Air Force, the Secretary of the Navy, the Commandant of the Marine Corps, and the Director of the Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. A list of key contributors to this report can be found in appendix IV. This report examines the Department of Defense’s (DOD) development and acquisition of unmanned aircraft systems. The primary focus of this work is to identify practices and policies that lead to successful collaborative efforts to field unmanned aircraft systems to the warfighter at the right time and for the right price. Specifically, our objectives were to (1) assess the cost, schedule, and performance progress of selected tactical and theater-level unmanned aircraft acquisition programs; (2) examine the extent to which the military services are collaborating and identifying commonality among those programs; and (3) identify the key factors influencing the effectiveness of their collaboration. We selected 10 programs to include in our review: eight unmanned aircraft programs and two payload development programs. The eight unmanned aircraft programs included in our review—Global Hawk, Reaper, Shadow, Predator, Sky Warrior, Fire Scout, Broad Area Maritime Surveillance (BAMS), and Unmanned Combat Aircraft System (UCAS)—make up more than 80 percent of DOD’s planned investment in unmanned aircraft systems from 2008 through 2013. 
The two payloads—Multi-Platform Radar Technology Insertion Program (MP-RTIP) and Airborne Signals Intelligence Payload (ASIP)—are being developed for use on unmanned aircraft. To assess the extent to which selected tactical and theater-level unmanned aircraft systems are meeting their cost, schedule, and performance targets, we compared current data to baseline cost, schedule, and performance data for the 10 programs in our review. We collected and reviewed data from acquisition program baselines, acquisition decision memorandums, selected acquisition reports, presidential budget documents, and technology and operational assessments. We worked with knowledgeable GAO staff to ensure the use of current, accurate data and incorporated information, where applicable, from our recent assessment of major weapon programs. To examine the extent to which the military services are collaborating and identifying commonality among those programs, we reviewed key documents such as acquisition decision memorandums and policy directives, as well as program acquisition strategies and program briefings. We examined the acquisition approaches of the 10 programs included in our review to identify any collaborative efforts taken among programs. We also reviewed relevant DOD and Joint Staff policies and guidance to identify established criteria for effective collaboration. As part of our analysis, we compared and contrasted requirements for the systems in our review in order to assess areas of potential or apparent similarity as possible opportunities for collaboration. We did not assess the validity of the military services’ requirements for the selected unmanned aircraft programs in our review. To identify and assess which factors influenced the effectiveness of collaboration among the selected programs in our review, we examined the roles and responsibilities of DOD and military service acquisition and requirements organizations in fostering collaboration among programs. 
We examined the impact that officials and organizations within the acquisition and requirements communities have on collaboration. We also reviewed recent DOD acquisition initiatives, such as portfolio management and configuration steering boards, as well as service-level plans and activities related to collaboration and commonality among unmanned aircraft programs. In performing our work, we obtained information and interviewed unmanned aircraft systems program officials from Wright-Patterson Air Force Base, Ohio; Hanscom Air Force Base, Massachusetts; Redstone Arsenal, Alabama; and Patuxent River, Maryland, and officials from the Air Force, Army, and Navy acquisition and requirements organizations, the Office of the Secretary of Defense, and Joint Chiefs of Staff offices, Washington, D.C. Further, we interviewed officials from the UAS Joint Center of Excellence, Nellis Air Force Base, Nevada; the Air Force UAS Task Force, Washington, D.C.; and U.S. Central Command, MacDill Air Force Base, Florida. We also met with officials from defense contractors General Atomics in Rancho Bernardo, California, and Northrop Grumman, in San Diego and Palmdale, California, to obtain information on the development and production efforts of seven of the eight unmanned aircraft system programs in our review. We conducted this performance audit from August 2008 to July 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides additional information about the eight unmanned aircraft and two payload programs assessed in the body of this report. 
Each program summary in this appendix includes an aircraft photo, a brief description of the system's mission and program status, and our observations on program execution and outcomes; where applicable, the summaries also highlight recent GAO work. To provide additional insights into the magnitude of recent and expected future investments in these programs, the summaries include details on DOD's planned investment from 2008 through 2013 as contained in the department's fiscal year 2009 budget. The budget information in tables 8 through 17 is expressed in then-year dollars, and due to rounding, the numbers may not add exactly. The fiscal year 2008 funding shown in the tables has been appropriated by Congress. The funding requested in DOD's fiscal year 2010 budget submission for each program is captured in notes to the tables—DOD's fiscal year 2010 budget did not contain any funding projections beyond 2010. Tables 6 and 7 detail key characteristics and compare the capabilities of the systems discussed in this appendix. The Global Hawk system is a high-altitude, long-endurance unmanned aircraft with an integrated sensor suite and ground segment that provides intelligence, surveillance, and reconnaissance (ISR) capabilities. The system is intended to provide high-resolution, high-quality digital synthetic aperture radar imagery, to include ground moving target indicator data, plus electro-optical and infrared imagery of targets and other critical areas of interest. A signals intelligence payload and advanced radar are also being developed. Global Hawk is being developed and procured in four configurations. Block 10 aircraft, designated RQ-4A, are based on an airframe similar to the original demonstrators and employ imaging intelligence sensors (synthetic aperture radar, electro-optical, and infrared). The other three configurations are larger and more capable systems, designated RQ-4B. Block 20 aircraft employ enhanced imaging intelligence sensors.
Block 30 aircraft provide multiple intelligence capabilities—signals intelligence as well as the enhanced imaging intelligence sensors. Block 40 aircraft will employ an advanced radar being developed by the Multi-Platform Radar Technology Insertion Program. According to the original contract, the contractor was expected to deliver 19 Global Hawk systems by the end of December 2008. However, as of January 2009, the contractor had only delivered 14 systems, 9 of which were more than 8 months late. All seven Block 10 aircraft have been delivered to the Air Force and have supported ongoing military operations. All six Block 20 aircraft have also been delivered. The Block 20 aircraft are currently in testing and recently underwent an operational assessment. DOD's top acquisition official noted that the assessment provided useful insight into the performance of the enhanced integrated sensor suite. Block 20 initial operational test and evaluation (IOT&E) was expected to be completed in October 2009. However, the Air Force reported in February 2009 that operational testing had slipped beyond the acquisition program baseline threshold date, but did not specify the expected length of the delay. According to program officials, a high level of concurrency in the program—concurrent development, production, and testing—coupled with developmental testing delays, unforeseen system failures, and excessive weather-related flight test cancellations was to blame for the schedule slip. The Air Force received the first of a planned purchase of 26 Block 30 aircraft from the contractor in November 2007—10 months later than the original contract delivery date. The aircraft was subsequently equipped with an Airborne Signals Intelligence Payload (ASIP) sensor, and began developmental flight testing in September 2008. While ASIP developmental testing on the Global Hawk has gone relatively well, Block 30 IOT&E has been delayed in concert with Block 20.
According to the contractor, the critical design review for Block 40 has been completed, and the first Block 40 aircraft is in the final stages of assembly. Contractor officials also noted that the MP-RTIP sensor will be integrated onto Global Hawk and begin testing in May 2009. Block 40 operational testing—which was originally expected to begin no later than November 2010—has been delayed, and no new date has been established. The Air Force currently plans to purchase a total of 15 Block 40 Global Hawks. Global Hawk concurrently entered development and limited production of the RQ-4A in March 2001, after completing a successful demonstration program. One year later, the Air Force chose to pursue the larger, more capable RQ-4B airframe. Although the two airframes were expected to have substantial commonality, differences were much more extensive than anticipated. The final design of the RQ-4B required more substantial changes than expected to the fuselage, tail, and landing gear. Frequent and substantive engineering changes during the first year of production increased development and airframe costs and delayed delivery and testing schedules. The system unit cost has more than doubled since development began, and the program has been restructured three times. Completion of Block 20 operational testing has been delayed more than 3 years from initial estimates. Developmental test results indicate that the Block 20 aircraft’s enhanced sensors did not achieve the desired level of clarity. However, DOD’s top acquisition official in an October 2008 acquisition decision memorandum directed the Air Force to go ahead with the procurement of the Block 20 sensors—noting that the sensor performance requirement was a subjective measure and current performance was satisfactory. The Air Force expects to have purchased more than 60 percent of total Global Hawk quantities before Block 20 testing is complete. 
In October 2008, 1 month after beginning Block 30 ASIP testing, the Office of the Secretary of Defense (OSD) issued an acquisition decision memorandum stating that the ASIP development appeared to be on track to meet user requirements and approving the purchase of a limited number of sensors—pending successful completion of the sensor calibration. According to ASIP program officials, sensor calibration and developmental testing are finished. They also noted that they were planning to conduct dedicated ASIP operational testing on the U-2, which they believe will further reduce risk in the program before beginning Global Hawk Block 30 operational testing, which has been delayed indefinitely in concert with Block 20 operational testing. According to a recent Director of Operational Test and Evaluation (DOT&E) report, the Air Force's plan to complete Block 40 development in 2010 is in jeopardy because development of the advanced MP-RTIP radar has experienced delays. The report cites a failure to design useful sensor calibration and poor system software stability as the primary culprits. In addition, the DOT&E notes that the potential exists for the contractor to deliver up to 6 of the 15 planned Block 40 systems before MP-RTIP will be able to deliver any operational capability. The Air Force's 2009 budget request contained over $5 billion for Global Hawk development and procurement. The Global Hawk procurement budget includes funding to purchase and integrate the ASIP and MP-RTIP payloads. ASIP and MP-RTIP development are funded separately. The Air Force's MQ-9 Reaper is a multirole, medium-to-high-altitude, long-endurance unmanned aerial vehicle system capable of flying at faster speeds and higher altitudes than its smaller predecessor, the MQ-1 Predator. While Predator is primarily a surveillance and reconnaissance asset, Reaper is designed for armed reconnaissance missions.
It is expected to provide around-the-clock capability to detect, attack, and destroy mobile, high-value, time-sensitive targets. Reaper will carry missiles, laser-guided bombs, and the Joint Direct Attack Munition. Reaper also will support net-centric military operations. Each system consists of four aircraft, a ground control station, and a satellite communications suite. Because of recent budget increases, the Reaper program may soon be designated a major defense acquisition program. Based on current projections, Reaper will achieve initial operational capability in August 2009. Its full-rate production decision was recently postponed over a year, pending the decision about its acquisition category. It recently completed initial operational testing, receiving a rating of partially mission capable. The Air Force has taken delivery of 27 aircraft to date. Total aircraft quantity requirements have increased from 63 to 118, and may increase even further since the Air Force plans to increase procurement in its upcoming budget submission. Reaper’s second increment, comprising the small diameter bomb and automatic takeoff and landing capability, is scheduled to begin development in late fiscal year 2010. The Reaper program began in January 2002 in the aftermath of the September 11, 2001, terrorist attacks. Since inception, Reaper—designated an urgent operational need—has followed a nontraditional acquisition path, resulting in concurrent development and production and increased risk. Shortly after development began, the user required accelerated aircraft deliveries to achieve an interim combat capability. Two years later, the user required additional aircraft for an even more robust early fielding capability. In response to user demands, the Air Force has contracted for over 30 percent of the total quantity before completing initial operational testing. 
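The growth in Reaper quantity requirements can be verified with simple arithmetic. The sketch below is illustrative only and is not part of the report's analysis; the aircraft counts (63 at development start, 118 currently) are taken directly from this summary.

```python
# Illustrative check of Reaper aircraft quantity growth.
# Counts are from the report: 63 at development start, 118 currently planned.
initial_aircraft = 63
current_aircraft = 118

growth_pct = (current_aircraft - initial_aircraft) / initial_aircraft * 100
print(f"Aircraft quantity growth: {growth_pct:.0f}%")
```

Run as written, the growth works out to roughly 87 percent, consistent with the figure cited in this summary.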
Performance enhancements, such as adding missiles and a digital electronic engine control, increased the weight of the aircraft, requiring stronger landing gear, fuselage, and flight control surfaces. In addition to requirements changes, the aircraft quantity increased 87 percent since the start of development. The increase—from 63 to 118 aircraft—was due in part to demands of the war on terror. The quantity may increase even further because the Air Force plans to curtail future Predator procurement and buy only Reapers. Despite the significant quantity increase, procurement unit costs have not decreased; they have increased about 32 percent since development began. This cost growth is due to inefficiencies associated with the early fielding process and requirements changes. Although initial operational testing was completed in August 2008, two of three key capabilities were not fully assessed. Reaper was effective in destroying targets, but radar problems prevented the test team from completing an assessment of its ability to detect and identify targets. The net-centric operations support capability was not assessed at all. Other areas of concern included operator workload, off-board communications, and system reliability. Because tests were limited by weather, climate, radar reliability, and training, additional testing will be required to assess these capabilities. The Air Force testers gave Reaper a rating of partially mission capable; DOD's independent test organization has not yet completed its assessment of the test results. Reaper has been funded under the Predator program element since its inception. In its fiscal year 2008 budget, the Air Force began reporting Reaper as a separate program element, thereby isolating program costs. The Shadow 200 unmanned aircraft system provides reconnaissance, surveillance, and target acquisition, as well as force protection, at the Army brigade level.
One Shadow system consists of four air vehicles and associated ground control equipment, including two ground control stations and an air vehicle launcher. Shadow is equipped with automatic takeoff and landing capability and operates at up to 15,000 feet in various weather conditions. The air vehicle has electro-optical/infrared capabilities. Planned system upgrades include integration of the Tactical Common Data Link (TCDL) and the Army’s heavy fuel engine. As a brigade-level asset, the Shadow aircraft is intended to allow for mission payloads to be changed on the aircraft within 30 minutes. The Shadow program is an acquisition category II program and grew out of an advanced concept technology demonstration program. The Shadow program entered full-rate production in 2002 without the TCDL or the heavy fuel engine. Program funding after 2002 has been used for Shadow fleet upgrades, such as integrating the heavy fuel engine. TCDL development is ongoing; retrofitting is scheduled to begin in 2009. According to program officials, 252 Shadow aircraft have been fielded to the Army, with an additional 104 aircraft procured but not yet delivered to the warfighter. The Army plans to procure a total of 460 aircraft. In addition, the Marine Corps signed a memorandum of agreement with the Army in 2007 to acquire 52 Shadow aircraft. The Marine Corps systems are identical to the Army’s, and are being procured through the existing Army contract. Shadow systems were intended to be fielded as quickly as possible with “no bells and whistles,” eventually evolving into more capable systems with the TCDL and heavy fuel engines. According to program officials, initial research and development funding was designated for the basic Shadow system, which program officials estimated to cost $198.1 million. 
Program officials told us that when Shadow achieved initial operational capability in 2002—effectively ending development—the Army had only spent $181.2 million, or 9 percent less than the initial estimate. Officials stated that development funding since 2002 has been used to upgrade the basic Shadow system. The cost of these upgrades, officials told us, was initially estimated at $99.1 million, but the current estimate has risen to $175.4 million. According to the program office, total research and development costs for the Shadow system have increased 80 percent since program start in 1999, while total procurement costs have increased 267 percent. Program officials stated that increases in the number of aircraft being procured, which has nearly tripled from 164 to 460, along with upgrades and retrofits have contributed to cost growth in the Shadow program. By following an incremental approach for the Shadow program, the Army was able to minimize program risk by delivering basic capability to the warfighter within the initial development cost estimate. To field a more capable, robust system, the program has continued to pursue development of additional capabilities that were not available when the system was initially fielded, such as the TCDL and heavy fuel engine. However, risk remains as the costs for retrofit and upgrade activities have increased. The Marine Corps has benefited from the Army’s development of the Shadow system by avoiding the costs of initial development and purchasing a mature system. However, as the Army upgrades and retrofits Shadow, the Marine Corps will also have to fund these efforts if it wants to maintain the same level of commonality with the Army. Program officials told us that the Marine Corps is exploring ways to add additional capabilities to Shadow aircraft to allow it to carry a weapons payload. 
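The Shadow research and development figures cited above can be reconciled with simple arithmetic. This sketch is illustrative only; the dollar amounts are those program officials provided, in millions.

```python
# Reconciling the Shadow R&D figures cited in this summary
# (all amounts in millions of dollars, as provided by program officials).
initial_estimate = 198.1   # estimated cost of the basic Shadow system
actual_basic = 181.2       # spent through initial operational capability in 2002
upgrade_current = 175.4    # current estimate for post-2002 upgrade development

under_run_pct = (initial_estimate - actual_basic) / initial_estimate * 100
total_rd = actual_basic + upgrade_current
growth_pct = (total_rd - initial_estimate) / initial_estimate * 100
print(f"Basic system came in about {under_run_pct:.0f}% under the estimate")
print(f"Total R&D growth versus the initial estimate: about {growth_pct:.0f}%")
```

Run as written, the figures work out to roughly 9 percent under the initial estimate and roughly 80 percent total R&D growth, matching the numbers stated in this summary.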
Although the Army has no requirement for this capability, the service would be interested in retrofitting Shadow systems with a weapons payload if the capability were developed. Consequently, we believe that the Army and Marine Corps need to carefully manage how they maintain commonality. The Extended Range Multi-Purpose unmanned aircraft system (Sky Warrior) is intended to perform reconnaissance, surveillance, and target acquisition missions at the Army division level. Additionally, Sky Warrior is equipped with four missiles. One Sky Warrior system consists of 12 MQ-1C air vehicles along with associated ground equipment, including five ground control stations. Operating at 25,000 feet in a near all-weather environment, Sky Warrior will be equipped with automatic takeoff and landing capability. Communications with the Sky Warrior system will be via the TCDL. The air vehicle will be equipped with electro-optical/infrared and synthetic aperture radar capabilities, as well as a signals intelligence payload. The Under Secretary of Defense for Acquisition, Technology and Logistics elevated the Sky Warrior program to an acquisition category I program in a May 2008 memorandum. This directive supported the OSD decision that the Army Sky Warrior and Air Force Predator unmanned aircraft system programs migrate to a single contract for airframe procurement. Currently, Predator is built on an airframe designated the MQ-1B, but OSD is pushing the Air Force to transition its Predators to the same airframe the Army is using for the Sky Warrior, designated the MQ-1C. While the Air Force is planning to procure 5 MQ-1C airframes—in response to recent OSD direction—it is in the process of assessing how many additional airframes, if any, it needs to purchase. According to program officials, the demand for intelligence, surveillance, and reconnaissance capabilities to meet current operational needs has resulted in concurrent development and production of the Sky Warrior system.
The Army has purchased 40 interim Sky Warrior air vehicles, 21 of which are built on the existing Predator airframes. Although the remaining 19 air vehicles are built on the new MQ-1C airframe, they do not provide full Sky Warrior capability. According to the Army, these interim air vehicles are intended to provide some capability to the warfighter until the full Sky Warrior system is fielded. A February 2009 memorandum from OSD authorizes the Army to procure four production-ready MQ-1C air vehicles to begin initial testing for the full Sky Warrior system. The Sky Warrior program has experienced both cost growth and schedule delays, which, according to program officials, can be explained by the need to deliver systems to the warfighter as quickly as possible. Production quantities have increased from 4 systems at program start to 11 systems; total program costs have increased over 138 percent. Milestone C—the point at which a system is approved to begin production—has been delayed by 2 years; therefore, the full Sky Warrior system will not enter low-rate production until November 2009. Furthermore, because the systems being fielded early do not possess all of the intended capabilities of a full system, costs will likely increase as other capabilities are integrated into the existing systems. Additionally, Sky Warrior has been designated an acquisition category I program and is currently undergoing a program rebaseline. This new baseline, once completed, may incorporate further schedule delays and cost increases. OSD approved the Sky Warrior program's acquisition strategy in January 2009, despite the fact that the synthetic aperture radar the Army planned to use on the system had proven to be unreliable. The radar's poor performance forced the Army to select an entirely new radar.
However, according to the program office, given the Army's acquisition strategy, the new radar will not be ready until after Sky Warrior finishes initial operational testing in 2011 and a full-rate production decision has been made. This approach greatly increases the risk in the program. During the most recent GAO annual assessment of DOD major weapon programs, the Sky Warrior program office indicated that all four of the system's critical technologies were mature. However, a recent independent Army test concluded that three of the four technologies are not yet mature, including the automatic takeoff and landing system and the TCDL. As of May 2009, Army officials recognized that the automatic takeoff and landing system was still an immature technology. As a result, the Army will deploy each of its early Sky Warrior systems this summer with two ground control stations, an Army One Ground Control Station and a legacy ground control station—with stick and rudder controls—as a backup system in case the auto takeoff and land capability fails. Much of our prior work in DOD weapon systems acquisition and commercial best practices has shown that conducting technology development concurrent with product development greatly increases cost, schedule, and performance risks. The Air Force's MQ-1 Predator is a single-engine, propeller-driven, remotely piloted aircraft designed to operate at medium altitudes for long-endurance missions. The program began in 1994 as an advanced concept technology demonstration, and the aircraft proved its military utility with a successful operational deployment in Bosnia. Its original mission was to provide continuous ISR coverage to the theater commander/joint warfighter. In 2001, the Air Force added weapons to Predator, thus expanding its role to include a limited strike capability. Predator provides full-motion video of the battlefield with high-resolution sensors in near real time.
Each Predator system includes four aircraft, a ground control station, and a satellite communications suite. Future procurement of the Predator is uncertain. After more than a decade of operational use, it is now considered a legacy program. The Army is procuring a more modern, capable variant of Predator—Sky Warrior. In 2007, OSD directed the services to merge these two programs, using the newer Army platform as the baseline configuration. However, because of differences in service requirements and operations, the Army and Air Force have made limited progress. For example, the Army needs a tactical capability that operates with existing Army platforms like the Apache helicopter. In contrast, the Air Force needs a strategic capability that satisfies the needs of the joint warfighter. The Air Force has not completed testing of the newer Sky Warrior aircraft, and in fact is planning to purchase another variant—Reaper—as a replacement for Predator. Because Predator transitioned directly from a technology demonstrator into production, it did not follow a typical acquisition process. Its performance and quantity requirements have changed significantly since inception. For example, Predator was initially designed to provide the warfighter continuous ISR and targeting information. The user subsequently required that it carry missiles, giving it a limited strike capability. In addition, Predator’s quantity requirements have more than doubled since it began. The Air Force originally planned to procure 12 systems (48 aircraft), but because of the increasing demand for its capability, the total quantity has been increased to 26 systems. With the addition of MQ-9 Reaper and the Army’s Sky Warrior, the contractor’s business base has significantly expanded. This expansion raised concerns about the contractor’s capacity, particularly given its history of late aircraft deliveries. Last year, however, the contractor delivered 10 Predator aircraft ahead of schedule. 
According to program officials, these aircraft were completed early to provide time for the contractor to move its equipment into newly expanded facilities. Despite these deliveries, the contractor's more recent aircraft deliveries have once again been late. Although the Air Force was directed to begin purchasing the newer MQ-1C aircraft—the Sky Warrior configuration—it plans to buy Reapers in lieu of Predators. Given the lingering uncertainty about how many of which configurations will be purchased, program officials are concerned that future aircraft deliveries will be affected. Early Predator cost data are limited. Once Predator became an acquisition program, the Air Force projected an acquisition cost of $910 million (base year 2009 dollars) for 12 systems. Since that time, the number of operational systems has more than doubled, the performance and payload requirements have changed, and the flying hours and attrition rates have increased. This hampers a direct comparison. The total program cost is about $3.61 billion (base year 2009 dollars). The U.S. Navy Vertical Take-off and Landing Tactical Unmanned Air Vehicle (VTUAV), known as Fire Scout, will provide local commanders real-time imagery and data to support ISR requirements. A VTUAV system is composed of up to three air vehicles with associated electro-optical/infrared/laser designator-rangefinder sensors, two ground control stations, one recovery system, and associated spares and support equipment. The air vehicle launches and recovers vertically and operates from ships and land. Interoperability is achieved through use of a common data link and standard communications. VTUAV is being designed as a modular, reconfigurable system to support various operations, including potential surface, antisubmarine, and mine warfare missions. Future capabilities currently under consideration include surface search radar, signal intelligence, enhanced data and communications relay, and integration of weapons.
The Navy expects the VTUAV to achieve initial operational capability in late fiscal year 2009. The program began in fiscal year 2000, after market research and a competitive ship-based vertical takeoff and landing demonstration were conducted. A competitive contract was awarded to Northrop Grumman for delivery of system development air vehicles and the first lot of low-rate initial production (LRIP) systems. During fiscal year 2002, the program was de-scoped to a technology demonstration effort, and two LRIP options were not exercised. In fiscal year 2003, the VTUAV program was restructured to support the Littoral Combat Ship (LCS), and received increased funding from Congress in fiscal year 2004 toward that goal. The restructured program was expected to cost about $2.3 billion, and as a result, in August 2006, it was designated as an acquisition category IC program. The program received Milestone C approval in May 2007 to procure up to 4 air vehicles in the first lot of LRIP. The Navy plans to procure a total of 168 air vehicles, plus 9 developmental LRIP vehicles. VTUAV is currently undergoing test and evaluation. The VTUAV program was restructured in 2004 to support the LCS. At the time of the restructuring, Congress authorized funding for an upgraded VTUAV variant, the MQ-8B, which addressed requirement shortfalls—including time on station—of an earlier version, the RQ-8A. In February 2008, after being advised of at least a 2-year delay in the LCS program, the Navy decided to continue VTUAV development using an alternate ship—a frigate. Navy officials estimated that the move to the alternate ship would require $42.6 million of additional funding and result in a 9-month schedule delay. VTUAV efforts are funded under cost-type contracts for system development and firm-fixed-price contracts for production. The program uses common, mature technologies as much as possible. The air vehicles are based on a commercial manned helicopter that has been in service for over 20 years.
The MQ-8B is undergoing developmental and operational testing and has landed successfully aboard ship. The Army, in September 2003, chose VTUAV to meet Future Combat System (FCS) unmanned aerial requirements. According to contractor officials, the two services were able to achieve about 97 percent commonality for the airframe. However, service-specific payloads will hinder further collaboration. Furthermore, FCS delays could affect Fire Scout production efficiency. According to Northrop Grumman officials, they need to produce a minimum number of airframes per year to break even; they are currently producing three airframes per year for the Navy to support system development. If FCS continues to delay the Army portion of the Fire Scout program (or other potential buyers do not make a purchase), airframe production will be difficult to sustain. The XM157 Class IV unmanned aircraft system (UAS) will provide reconnaissance, surveillance, targeting, mine detection, communications relay, wide area surveillance, chemical detection, and meteorological survey capabilities for the FCS Brigade Combat Team (BCT). The Class IV UAS will operate in conjunction with manned aircraft. The air vehicle will vertically take off and land from unprepared surfaces, and will be controlled by light tactical vehicles equipped with launch control units and by command and control manned ground vehicles within the FCS BCT over the FCS network. The Class IV UAS is part of the FCS family of systems made up of integrated, advanced, networked combat and sustainment systems; unmanned ground and air vehicles; and unattended sensors and munitions. Complementary programs external to FCS development provide many of the major Class IV UAS subsystems and payloads, including communications equipment such as the Warfighter Information Network-Tactical (WIN-T) and the Joint Tactical Radio System (JTRS). The Army is pursuing a joint acquisition strategy with the Navy.
The Army, in September 2003, chose the Navy Fire Scout for its Class IV UAS. Program officials indicated that the Navy is the lead service for system development. The Army purchases common airframes under a separate line item in the Navy contract, and then provides the airframes to the FCS lead system integrator as government-furnished equipment. The Army is leveraging Navy testing to mitigate risk and provide early test data; its own first developmental flight testing is not scheduled to begin until the second quarter of fiscal year 2011. The Army plans to support 15 BCTs, each equipped with 32 Class IV air vehicles, and will procure 500 air vehicles overall, including 20 for development and low-rate initial production. The Army has taken delivery of eight airframes. The Class IV UAS schedule depends on complementary programs—specifically WIN-T and JTRS—and the overall FCS schedule, which has slipped. The fiscal year 2011 first flight date represents a 42-month delay from the Army's original baseline estimate. According to contractor officials, the Army and Navy achieved about 97 percent commonality for the airframe. A Defense Contract Management Agency official estimated about $125 million in development cost savings attributable to commonality. Contractor representatives maintain that operations and maintenance would provide greater opportunity for cost savings from commonality. However, the Army's requirements for FCS-based, mission-specific subsystems and payloads are hindering further collaboration. According to both program and contractor officials, the delivered airframes were intended for testing, but they cannot be tested without WIN-T and JTRS, which are not currently available. WIN-T will be the data link that allows control of the Class IV UAS from mobile ground stations, and JTRS will provide a communication relay capability. Neither program nor contractor officials seemed confident that these subsystems would be available soon.
Furthermore, DOD’s recent proposal to terminate the FCS ground segment raises additional uncertainty over the Army’s plans. While the Navy identified an alternate ship to continue Fire Scout development when it learned that the projected host platform was delayed, the Army seems to be holding to FCS standards. Contractor representatives believe the Army is forgoing providing capability to the warfighter as a result. They envision the Class IV UAS not being available to the warfighter until 2015. In their opinion, however, were the Army to install an existing data link and payload into the aircraft, they would be useful, for example, in detecting improvised explosive devices in Iraq or Afghanistan. Because it is a part of the FCS program, the Class IV UAS is funded through the FCS reconnaissance platforms budget, which also includes the Class I UAS. Therefore, we are unable to provide details on the Class IV UAS budget projections using the fiscal year 2009 budget. However, DOD’s fiscal year 2010 budget, released in May 2009, contains an RDT&E budget request of $44.005 million in fiscal year 2010 specifically for the Class IV UAS. The Broad Area Maritime Surveillance (BAMS) unmanned aircraft system will give DOD a unique persistent capability to detect, classify, and identify targets over a wide area of maritime battlespace. Operating both independently and cooperatively with other assets, it will provide a more effective and supportable ISR capability than currently exists. Along with future systems—the P-8A Multi-mission Maritime Aircraft and the EP-X electronic surveillance aircraft—BAMS will be part of a maritime patrol and reconnaissance force family of systems integral to the Navy’s recapitalization of its airborne ISR. The Navy intends to position BAMS mission crews with maritime patrol and reconnaissance personnel to closely coordinate missions and use a common support infrastructure. 
To meet its objectives, the BAMS program is modifying a version of the Air Force Global Hawk air vehicle. DOD approved the start of system development for BAMS in April 2008, but the source selection was subject to a bid protest that delayed system development to August 2008. The program briefed the Joint Requirements Oversight Council on the source selection results, joint efficiencies being pursued, and potential future synergies in December 2008 and conducted the System Requirements Review in January 2009. The LRIP contract award is planned for fiscal year 2013, and the Navy expects to purchase 70 total aircraft—2 in development, 3 in low-rate production, and 65 in production. BAMS is being developed using the Global Hawk airframe; however, the Navy plans to make upgrades, such as a wing de-icing technology, to accommodate maritime operations. It also plans to use different subsystems, such as sensors and communications equipment. Program officials explained that the BAMS air vehicle is about 78 percent common with Global Hawk and uses sensor components or entire subsystems from other existing platforms. The BAMS program is leveraging lessons learned from the Global Hawk program to avoid similar cost, testing, and technology problems, and the two programs have established a memorandum of agreement. Northrop Grumman is currently considering whether to assemble BAMS in two locations: Palmdale, California, where the Global Hawk is being assembled, and a new facility in St. Augustine, Florida. Though the Palmdale facility has the capacity to assemble BAMS, contractor officials told us that the decision will be based on both economic and program risk-level assessments. They were not able to provide quantitative analysis associated with their pending decision to assemble BAMS in two locations and told us that the calculations will not be made until the 2011-2012 time frame.
In February 2008, before initiating development, DOD and the Navy concluded that all BAMS technologies were approaching maturity—that is, they had been demonstrated in a relevant environment. Therefore, the Navy insists that the BAMS program has no critical technologies. Despite repeated requests, Navy officials did not provide us with the list of technologies that were assessed for maturity. Nevertheless, the program office has identified six subsystems, such as radar software, that could cause cost, schedule, or performance issues during development. Program officials indicated that they are monitoring development risks for these subsystems. The Under Secretary of Defense for Acquisition, Technology and Logistics’ decision allowing the program to begin development also required that an independent technology readiness assessment be conducted at the completion of preliminary design review and that the results be submitted for DOD review. While there is a benefit to using an existing airframe, the Navy plans to make changes to Global Hawk that introduce additional risk to the program. Already the initial operational capability has been delayed from August 2014 to December 2015, but program officials are planning to achieve full operational capability by 2019—in time to avoid a capability gap that otherwise would be created by the retirement of the P-3C Orion aircraft. OSD’s fiscal year 2010 budget, released in May 2009, reflects a fiscal year 2009 RDT&E funding amount of $432.5 million and a fiscal year 2010 RDT&E budget request of $465.8 million for BAMS. The Navy Unmanned Combat Air System Demonstration (UCAS-D) program will demonstrate critical technologies for operating a low observable unmanned aerial system from aircraft carriers. The first capabilities to be proven are launch and recovery and deck surface operations. The demonstration will inform a follow-on acquisition decision at Milestone A or B. 
In the 2020-2025 time frame, the Navy plans to change program focus to a strike-fighter aircraft, possibly to replace F/A-18 aircraft in a future carrier air wing mix with the Joint Strike Fighter. The Navy wants a carrier-based, air-refueled, very long-endurance aircraft capable of operating at greater distances from the carrier battle group, defeating heavily defended targets, expanding payload options, and providing continuous maneuvers. The Navy conducted a limited source selection between two contractors that had been involved in prior UCAS-related efforts, and in August 2007 awarded Northrop Grumman a $635.9 million contract to design, develop, integrate, test, and demonstrate two unmanned combat air systems. The contract includes cost, technical, and schedule incentives. The first flight is planned for November 2009 at Edwards Air Force Base, and the first landing on an aircraft carrier is expected to occur at the end of 2011. Navy UCAS-D is only a demonstration effort; no acquisition program has been approved, and no milestone events have been scheduled. However, the program is trying to mitigate risks through modeling and simulation, surrogate flight testing, and shore-based testing before conducting sea trials. Additionally, the demonstration aircraft will use various systems already in use on other aircraft, such as F-18 landing systems and F-16 engines, according to officials. The program appears generally to be on schedule and within budget. According to program officials, the program has the funding needed to complete the demonstration by fiscal year 2013 as planned, despite a funding reduction of almost $400 million in the 2009 President’s budget. Navy UCAS-D can trace its origin to Defense Advanced Research Projects Agency (DARPA) unmanned combat air vehicle advanced technology demonstration programs started in the late 1990s.
In 2003, OSD established a joint Navy and Air Force program, designated the Joint Unmanned Combat Air System (J-UCAS), to be managed by DARPA. In 2005, the joint program transitioned from DARPA to the Air Force. However, a late 2005 program decision memorandum recommended terminating the J-UCAS program and funding separate Navy and Air Force programs. As a result, the Navy initiated the Navy UCAS-D program in 2006. Before a Milestone B decision, the program is leveraging DARPA’s J-UCAS efforts, in conjunction with current risk mitigation efforts, to evolve required technologies to the level at which DOD considers technology to be mature. While risk mitigation is a positive step, and the program seems to be on schedule, significant challenges remain. For example, development of an airborne data network radio that is critical to carrier landing and aerial refueling operations has been suspended indefinitely, according to program officials. While the program is proceeding with an earlier version of the radio, program officials note that the future is uncertain. Furthermore, the Defense Contract Management Agency expressed concern about the UCAS-D program entering system development because the program may use different technologies than those currently being demonstrated, likely resulting in significant additional development costs. In addition to the $1.4 billion in funding detailed in table 1, we noted fiscal year 2007 funding of $97.1 million, and program officials identified $1.3 billion of known funding for either the Navy UCAS-D or DARPA J-UCAS programs before fiscal year 2007—yet acknowledged the amount may not represent total previous funding. Assuming no future cost increases, DOD will have spent at least $2.8 billion for the two demonstration aircraft. The Air Force’s Airborne Signals Intelligence Payload (ASIP) is a common, scalable family of sensors designed for medium- and high-altitude aircraft.
ASIP is expected to provide the warfighter with automatic, real-time battlefield surveillance, situational awareness, and intelligence information that may be composed of communications and electronic signals—commonly referred to as signals intelligence (SIGINT). Within the ASIP program, the Air Force is developing three different sensor variants: (1) a baseline variant to be integrated onto the U-2 and unmanned Global Hawk aircraft; (2) a scaled-down variant, designated the ASIP 1C, to be integrated onto the unmanned Predator; and (3) a midsized variant, the ASIP 2C, to be integrated onto Reaper and potentially the Army’s Sky Warrior, which are also unmanned aircraft. The ASIP program office is responsible for developing and testing the sensors, while the individual aircraft program offices will be responsible for sensor production and integration. The ASIP baseline sensor underwent an operational assessment in February 2008. The results of that assessment indicated that the program was on track to meet its effectiveness and suitability requirements. The program completed developmental testing in February 2009 and plans to begin operational testing using a U-2 aircraft in March 2009. Officials noted that the Air Force intends to use the operational testing on the U-2 to assess the baseline sensor’s readiness for initial operational testing on Global Hawk. Depending on the results of the U-2 tests, the Air Force may leave the developmental unit on the U-2 for continuing operational use. Flight testing on Global Hawk began in September 2008, and initial operational testing is scheduled to begin in late fiscal year 2009. However, Global Hawk program officials recently indicated that the program will not meet its planned starting date for operational testing, and according to the ASIP Program Manager, it will most likely not begin until early 2010.
The Global Hawk program office plans to purchase a total of 25 ASIP sensors for its Block 30 aircraft beginning in fiscal year 2009. According to the program office, those ASIP-equipped aircraft will not be fielded, however, until 3 years later, because of sensor production and integration. In October 2008, DOD approved the purchase of 2 sensors and the program will seek approval for an additional 3 sensors in spring 2009, depending upon successful completion of developmental testing. Integration and developmental testing of the ASIP 1C sensor will begin in summer 2009. According to the program office, the total number of ASIP 1C sensors to be produced is critically linked to the Air Force’s Predator purchases and has not yet been finalized. Regardless, ASIP program officials are operating under the assumption that ASIP 1C production will begin in 2010. Air Force officials noted that uncertainties about 1C and 2C production quantities are in large part the result of uncertainties about the number of Predators and Reapers the Air Force will ultimately purchase. In addition, officials stated that if the Air Force purchases the Army’s Sky Warrior airframe to upgrade its Predators, it will have to purchase more 2C sensors and fewer 1Cs. However, according to DOD officials, the Air Force is planning to end Predator procurement and pursue an all-Reaper fleet. ASIP program officials noted that developmental efforts on the 1C sensor will continue regardless of final production decisions because knowledge gained from the 1C sensor is an integral part of the 2C sensor’s development. Because of the modular design of ASIP and the high level of commonality between the three ASIP variants, the program office plans to seek approval to bypass a formal 2C development program and enter directly into production. Under DOD’s direction, all three ASIP sensor efforts have been combined under one major defense acquisition program—recently designated Acquisition Category ID. 
However, officials stated that the Air Force will continue to manage the program as though it were three separate programs. According to the program office, the ASIP baseline sensor has experienced 110 percent cost growth from its original estimate, primarily because of capability enhancements, schedule impacts, and increased hardware deliveries. Program officials stated that although the baseline sensor’s development is on schedule, the program is affected by fluctuations within Global Hawk. Since Global Hawk’s schedule has continued to slip, ASIP program officials recently sought and received approval to begin ASIP operational testing on the U-2. The program office noted that the Air Force had not originally planned to conduct ASIP operational tests, but given the disconnect between ASIP developmental test completion and the beginning of Global Hawk initial operational testing, officials believe that additional operational testing on the U-2 would allow them to gain knowledge and further reduce risk before beginning Global Hawk testing. In January 2009, DOD directed the Army and the Air Force to analyze ASIP and the Army’s Tactical SIGINT Payload in an effort to move to a common SIGINT sensor. However, the services responded that, after 15 months of collaboration, they had concluded that a joint program did not make sense, and they recommended that an independent organization conduct the analysis and provide further direction. The Air Force’s Multi-Platform Radar Technology Insertion Program (MP-RTIP) is being designed as a modular, scalable, two-dimensional active electronically scanned array radar. The Global Hawk MP-RTIP variant will provide persistent imaging on a long-endurance platform, with improved ground moving target indicator, limited air moving target indicator, and synthetic aperture radar imaging capabilities beyond those currently available.
MP-RTIP was originally intended for multiple platforms, including the E-10A multisensor command and control aircraft, a large variant of the Boeing 767 aircraft. However, the E-10A program was canceled in 2007 and all current development efforts are directed to integrating the radar into the Block 40 configuration of the Air Force Global Hawk unmanned aerial vehicle. The weight and power restrictions of the Global Hawk platform require a smaller radar than the variant designed for the E-10A aircraft. In September 2006, flight testing began after installation of a Global Hawk MP-RTIP development unit on a Proteus, a surrogate test bed aircraft. Proteus flight testing is planned to be complete in March 2009, a delay from the originally planned September 2007 completion. According to program officials, radar antenna calibration issues caused significant delays in maturing software. By June 2009, the MP-RTIP program plans to deliver one MP-RTIP development unit to the Global Hawk program for developmental testing, though officials told us that delivery could be delayed further if Global Hawk is not ready to receive the radar at that time. Thereafter, the MP-RTIP program office will support the Global Hawk program through completion of initial operational testing, which is planned to start no later than November 2010. The Air Force currently funds development of the radar through the MP-RTIP program, while production will be funded through the Global Hawk program. Furthermore, officials told us that the Air Force continues to investigate other platforms for the radar. According to program officials, the MP-RTIP program office is coordinating with the Global Hawk program office to prepare to integrate the radar on the Global Hawk Block 40 configuration in June 2009. Officials also told us that the two offices coordinated the fit tests for the radar in fall 2008 and continued to coordinate as they conducted radar system performance-level verification through March 2009.
Development costs for MP-RTIP have decreased, largely because of the E-10A program cancellation, according to officials. In total, these costs have decreased by 23 percent, from $1.7 billion at the program’s December 2003 start to $1.3 billion as of December 2007.

In addition to the contact named above, principal contributors to this report were Bruce Fairbairn, Assistant Director; Travis Masters; Rae Ann Sapp; Karen Sloan; Leigh Ann Nally; Raffaele Roffo; Brian Smith; and Laura Jezewski.

Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009.
Unmanned Aircraft Systems: Additional Actions Needed to Improve Management and Integration of DOD Efforts to Support Warfighter Needs. GAO-09-175. Washington, D.C.: November 14, 2008.
Defense Acquisitions: DOD’s Requirements Determination Process Has Not Been Effective in Prioritizing Joint Capabilities. GAO-08-1060. Washington, D.C.: September 25, 2008.
Defense Acquisitions: A Knowledge-Based Funding Approach Could Improve Major Weapon System Program Outcomes. GAO-08-619. Washington, D.C.: July 2, 2008.
Intelligence, Surveillance, and Reconnaissance: DOD Can Better Assess and Integrate ISR Capabilities and Oversee Development of Future ISR Requirements. GAO-08-374. Washington, D.C.: March 24, 2008.
Defense Acquisitions: Greater Synergies Possible for DOD’s Intelligence, Surveillance, and Reconnaissance Systems. GAO-07-578. Washington, D.C.: May 17, 2007.
Intelligence, Surveillance, and Reconnaissance: Preliminary Observations on DOD’s Approach to Managing Requirements for New Systems, Existing Assets, and Systems Development. GAO-07-596T. Washington, D.C.: April 19, 2007.
Defense Acquisitions: Better Acquisition Strategy Needed for Successful Development of the Army’s Warrior Unmanned Aircraft System. GAO-06-593. Washington, D.C.: May 19, 2006.
Unmanned Aircraft Systems: Improved Planning and Acquisition Strategies Can Help Address Operational Challenges. GAO-06-610T. Washington, D.C.: April 6, 2006.
Unmanned Aircraft Systems: New DOD Programs Can Learn from Past Efforts to Craft Better and Less Risky Acquisition Strategies. GAO-06-447. Washington, D.C.: March 15, 2006.
Unmanned Aircraft Systems: Global Hawk Cost Increase Understated in Nunn-McCurdy Report. GAO-06-222R. Washington, D.C.: December 15, 2005.
Unmanned Aircraft Systems: DOD Needs to More Effectively Promote Interoperability and Improve Performance Assessments. GAO-06-49. Washington, D.C.: December 13, 2005.
Unmanned Aerial Vehicles: Improved Strategic and Acquisition Planning Can Help Address Emerging Challenges. GAO-05-395T. Washington, D.C.: March 9, 2005.
Unmanned Aerial Vehicles: Changes in Global Hawk’s Acquisition Strategy Are Needed to Reduce Program Risks. GAO-05-6. Washington, D.C.: November 5, 2004.
From 2008 through 2013, the Department of Defense (DOD) plans to invest over $16 billion to develop and procure additional unmanned aircraft systems. To more effectively leverage its acquisition resources, DOD recognizes that it must achieve greater commonality among the military services' unmanned aircraft programs. Doing so, however, requires certain trade-offs and complex budget, cost, and schedule interactions. GAO was asked to assess the progress of selected unmanned aircraft acquisition programs, examine the extent to which the services are collaborating and identifying commonality among those programs, and identify key factors affecting the effectiveness of their collaboration. GAO analyzed cost, schedule, and performance data for eight unmanned aircraft systems—accounting for over 80 percent of DOD's total planned investment in unmanned aircraft systems from 2008 through 2013—and two payload programs. While proving successful on the battlefield, DOD's unmanned aircraft acquisitions continue to incur cost and schedule growth. The cumulative development costs for the 10 programs GAO reviewed increased by over $3.3 billion (37 percent in 2009 dollars) from initial estimates—with nearly $2.7 billion attributed to the Air Force's Global Hawk program. While 3 of the 10 programs had little or no development cost growth and 1 had a cost reduction, 6 programs experienced significant growth ranging from 60 percent to 264 percent. These outcomes are largely the result of changes in program requirements and system designs. Procurement funding requirements have also increased for most programs, primarily because of increases in the number of aircraft being procured, changes in system requirements, and upgrades and retrofits to equip fielded systems with capabilities that had been deferred. Overall, procurement unit costs increased by 12 percent, with unit cost increases of 25 percent or more for 3 aircraft programs.
Finally, several programs have experienced significant delays in achieving initial operating capability, ranging from 1 to nearly 4 years. Several of the tactical and theater-level unmanned aircraft acquisition programs GAO reviewed have identified areas of commonality to leverage resources and gain efficiencies. For example, the Marine Corps chose to procure the Army's Shadow system after it determined Shadow could meet its requirements, and was able to avoid the cost of initial system development and quickly deliver capability to the warfighter. Also, the Navy's Broad Area Maritime Surveillance system will use a modified Global Hawk airframe. However, other programs have missed opportunities to achieve commonality and efficiencies. The Army's Sky Warrior—which is a variant of the Air Force's Predator, is being developed by the same contractor, and will provide similar capabilities—was initiated as a separate development program in 2005. Sky Warrior development is now estimated to cost nearly $570 million. DOD officials continue to press for more commonality in the two programs, but the aircraft still have little in common. Although several unmanned aircraft programs have achieved airframe commonality, service-driven acquisition processes and ineffective collaboration are key factors that have inhibited commonality among subsystems, payloads, and ground control stations. For example, the Army chose to develop a new sensor payload for its Sky Warrior, despite the fact that the sensor currently used on the Air Force's Predator is comparable and manufactured by the same contractor. To support their respective requirements, the services also make resource allocation decisions independently. DOD officials have not quantified the potential costs or benefits of pursuing various alternatives, including common systems.
To maximize acquisition resources and meet increased demand, Congress and DOD have increasingly pushed for more commonality among unmanned aircraft systems.
By 1986, recruit quality was at historically high levels. All services had met or exceeded their overall enlistment objectives for percentages of recruits who held high school diplomas and scored in the top categories on the test taken to qualify for military service. Specifically, the percentage of recruits with high school diplomas increased from 72 percent during the 1964-73 draft period to 92 percent in 1986. Also, 64 percent of new recruits in 1986 scored in the upper 50th percentile of the Armed Forces Qualification Test, up from 38 percent in 1980. The services’ success in recruiting high quality enlistees continued through the 1980s and into the 1990s, with the percentage of high school graduates reaching a high of 99 percent in 1992 and the percentage of those scoring in the upper half of the Armed Forces Qualification Test peaking in 1991 at 75 percent. Studies of attrition have consistently shown that persons with high school diplomas and Armed Forces Qualification Test scores in the upper 50th percentile have lower first-term attrition rates. For example, for those who entered the services in fiscal year 1992 and had high school diplomas, the attrition rate was 33.1 percent. For persons with 3 or 4 years of high school and no diploma, the rate was 38.9 percent; and for those with General Education Development certificates, the attrition rate was 46.3 percent. Similarly, those who scored in the highest category, category I, of the Armed Forces Qualification Test had an attrition rate of 24.7 percent, and those in category IVA had a rate of 40.7 percent. Increases in the quality of DOD’s recruits since the 1970s, coupled with the lower attrition rates of those considered “high quality” recruits, logically should have resulted in lower first-term attrition rates throughout the services. However, first-term enlisted attrition has remained at 29 to 39 percent since 1974. 
For enlistees who entered the services in fiscal year 1992, first-term attrition was 33.2 percent. The Army’s attrition was the highest of all the services, at 35.9 percent, followed by the Marine Corps at 32.2 percent, the Navy at 32 percent, and the Air Force at 30 percent. The highest portion of attrition occurs during the early months of enlistees’ first terms. Of enlistees who entered the services in fiscal year 1992, 11.4 percent were separated in their first 6 months of service. Attrition was fairly evenly distributed over the remaining period of enlistees’ first terms. The rate was 3.4 percent for those with 7 to 12 months of service, 7.3 percent for those with 13 to 24 months of service, 6 percent for those with 25 to 36 months of service, and 5 percent for those with 37 to 48 months of service. On the basis of DOD-provided cost data, we estimated that in fiscal year 1996, DOD and the services spent about $390 million to enlist personnel who never made it to their first duty stations. Of this total cost, which includes the cost of DOD’s training and recruiting infrastructure, about $4,700 was spent to transport each recruit to basic training; to pay, feed, house, and provide medical care for the recruit while at basic training; and to transport the separated recruit home. We estimated that if the services could reduce their 6-month enlisted attrition by 10 percent, their short-term savings would be $12 million, and their long-term savings could be as high as $39 million. DOD and the services need a better understanding of the reasons for early attrition to identify opportunities for reducing it. Currently, available data on attrition does not permit DOD to pinpoint the precise reasons that enlistees are departing before completing their training. 
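The savings estimates above follow directly from the figures cited in this section. The short sketch below (Python, for illustration only, not part of the original analysis) cross-checks them using the report's per-recruit direct cost of about $4,700, the roughly $390 million annual total, the fiscal year 1992 attrition rates by period of service, and a six-month separation count of 25,430 (the fiscal year 1994 figure cited later in this statement).

```python
# Cross-checking the report's early-attrition figures.
# All inputs are taken from the report; no new data is introduced.

# First-term attrition by period of service, fiscal year 1992 cohort (percent)
attrition_by_period = {
    "0-6 months": 11.4,
    "7-12 months": 3.4,
    "13-24 months": 7.3,
    "25-36 months": 6.0,
    "37-48 months": 5.0,
}
total = sum(attrition_by_period.values())
print(f"Sum of period rates: {total:.1f}%")  # ~33.1%, vs. the 33.2% reported

# Savings from a 10 percent cut in 6-month attrition
six_month_separations = 25_430      # fiscal year 1994 count from the report
direct_cost_per_recruit = 4_700     # transport, pay, food, housing, medical care
total_annual_cost = 390_000_000     # fiscal year 1996, including infrastructure
reduction = 0.10

short_term = reduction * six_month_separations * direct_cost_per_recruit
long_term = reduction * total_annual_cost
print(f"Short-term savings: ${short_term / 1e6:.0f} million")  # ~$12 million
print(f"Long-term savings:  ${long_term / 1e6:.0f} million")   # $39 million
```

As the sketch suggests, the short-term figure reflects only the direct per-recruit costs, while the long-term figure assumes savings scale with the full $390 million, which includes DOD's training and recruiting infrastructure.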
While the data indicates general categories of enlisted separations based on the official reasons for discharge, it does not provide DOD and the services with a full understanding of the factors contributing to the attrition. For example, of the 25,430 enlistees who entered the services in fiscal year 1994 and were discharged in their first 6 months, the data showed 7,248 (or 29 percent) had failed to meet minimum performance criteria, 6,819 (or 27 percent) were found medically unqualified for military service, 3,643 (or 14 percent) had character or behavior disorders, and 3,519 (or 14 percent) had fraudulently entered the military. These figures were based on data maintained by the Defense Manpower Data Center and collected from servicemembers’ DD-214 forms, which are their official certificates of release or discharge from active duty. Because the services interpret the separation codes that appear on the forms differently and because only the official reason for the discharge is listed, the Data Center’s statistics can be used only to indicate general categories of separation. Therefore, DOD does not have enough specific information to fully assess trends in attrition. In an attempt to standardize the services’ use of these codes, DOD issued a list of the codes with their definitions. However, it has not issued implementing guidance for interpreting these definitions, and the services’ own implementing guidance differs on several points. For example, if an enlistee intentionally withholds medical information that would disqualify him or her and is then separated for the same medical condition, the enlistee is discharged from the Air Force and the Marine Corps for a fraudulent enlistment. The Army categorizes this separation as a failure to meet medical/physical standards unless it can prove that the enlistee withheld medical information with the intent of gaining benefits. The Air Force and the Marine Corps do not require this proof of intent. 
The Navy categorizes this separation as an erroneous enlistment, which indicates no fault on the part of the enlistee. To enable DOD and the services to more completely analyze the reasons for attrition and to set appropriate targets for reducing it, we recommended that DOD issue implementing guidance for how the services should apply separation codes to provide a reliable database on reasons for attrition. In the absence of complete data on why first-term attrition is occurring, we examined the various preenlistment screening processes that correspond to the types of separations that were occurring frequently. For example, because a significant number of enlistees were being separated for medical problems and for fraudulent entry, we focused our work on recruiting and medical examining processes that were intended to detect problems before applicants are enlisted. These processes involve many different military personnel. Recruiters, staff members at the Military Entrance Processing Stations, drill instructors at basic training, instructors at follow-on technical training schools, and duty-station supervisors are all involved in transforming civilians into productive servicemembers. The process begins when the services first identify and select personnel to serve as recruiters. It continues when recruiters send applicants to receive their mental and physical examinations at the Military Entrance Processing Stations, through the period of up to 1 year while recruits remain in the Delayed Entry Program, and through the time recruits receive their basic and follow-on training and begin work in their first assignments. Reexamining the roles of all persons involved in this continuous process is in keeping with the intent of the Government Performance and Results Act of 1993, which requires agencies to clearly define their missions, to set goals, and to link activities and resources to those goals. 
Recruiting and retaining well-qualified military personnel are among the goals included in DOD’s strategic plan required under this act. As a part of this reexamination, we found that recruiters did not have adequate incentives to ensure that their recruits were qualified and that the medical screening processes did not always identify persons with preexisting medical conditions. We believe that the services should not measure recruiting success simply by the number of recruits who sign enlistment papers stating their intention to join a military service but also by the number of new recruits who go on to complete basic training. We also believe that the services’ mechanisms for medically screening military applicants could be improved. Accordingly, we have identified practices in each service that we believe would enhance recruiters’ performance and could be expanded to other services. Specifically, in our 1998 report on military recruiting, we reported that the services were not optimizing the performance of their recruiters for the following reasons: The Air Force was the only service that required personnel experienced in recruiting to interview candidates for recruiter positions. In contrast, many Army and some Marine recruiting candidates were interviewed by personnel in their chain of command who did not necessarily have recruiting experience. The Navy was just beginning to change its recruiter selection procedures to resemble those of the Air Force. The Air Force was the only service that critically evaluated the potential of candidates to be successful recruiters by judging their ability to communicate effectively and by using a screening test. The Army, the Marine Corps, and the Navy tended to focus more on candidates’ past performance in nonrecruiting positions.
Only the Marine Corps provided recruiter trainees with opportunities to interact with drill instructors and separating recruits to gain insight into ways to motivate recruits in the Delayed Entry Program. This interaction was facilitated by the Marine Corps’ collocation of the recruiter school with one of its basic training locations. Only the Marine Corps conducted regular physical fitness training for recruits who were waiting to go to basic training, though all of the services gave recruits in the Delayed Entry Program access to their physical fitness facilities and encouraged recruits to become or stay physically fit. Only the Marine Corps required all recruits to take a physical fitness test before reporting to basic training, though it is well known that recruits who are not physically fit are less likely to complete basic training. Only the Marine Corps’ and the Navy’s incentive systems rewarded recruiters when their recruits successfully completed basic training. The Army and the Air Force focused primarily on the number of recruits enlisted or the number who reported to basic training. Recruiters in all of the services generally worked long hours, were able to take very little leave, and were under almost constant pressure to achieve their assigned monthly goals. A 1996 DOD recruiter satisfaction survey indicated that recruiter success was at an all-time low, even though the number of working hours had increased to the highest point since 1989. For example, only 42 percent of the services’ recruiters who responded to the survey said that they had met assigned goals for 9 or more months in the previous 12-month period. 
To improve the selection of recruiters and enhance the retention of recruits, we recommended that the services (1) use experienced field recruiters to personally interview all potential recruiters, use communication skills as a key recruiter selection criterion, and develop or procure personality screening tests that can aid in the selection of recruiters; (2) emphasize the recruiter’s role in reducing attrition by providing opportunities for recruiter trainees to interact with drill instructors and separating recruits; (3) incorporate more structured physical fitness training for recruits into their Delayed Entry Programs; (4) conduct physical fitness tests before recruits report to basic training; (5) link recruiter rewards more closely to recruits’ successful completion of basic training; and (6) use quarterly floating recruitment goals as an alternative to their current systems of monthly goals. We have also found areas in which the medical screening of enlistees could be improved. Specifically, DOD’s medical screening processes did not always identify persons with preexisting medical conditions, and DOD and the services did not have empirical data on the cost-effectiveness of waivers or medical screening tests.
In summary, the services did not have adequate mechanisms in place to increase the likelihood that the past medical histories of prospective recruits would be accurately reported; DOD’s system of capturing information on medical diagnoses did not allow it to track the success of recruits who received medical waivers; the responsibility for reviewing medical separation cases to determine whether medical conditions should have been detected at the Military Entrance Processing Stations resided with the Military Entrance Processing Command, the organization responsible for the medical examinations; and the Navy and the Marine Corps did not test applicants for drugs at the Military Entrance Processing Stations but waited until they arrived at basic training. To improve the medical screening process, we recommended that DOD (1) require all applicants for enlistment to provide the names of their medical insurers and providers and sign a release form allowing the services to obtain past medical information; (2) direct the services to revise their medical screening forms to ensure that medical questions for applicants are specific, unambiguous, and tied directly to the types of medical separations most common for recruits during basic and follow-on training; (3) use a newly proposed DOD database of medical diagnostic codes to determine whether adding medical screening tests to the examinations given at the Military Entrance Processing Stations and/or providing more thorough medical examinations to selected groups of applicants could cost-effectively reduce attrition at basic training; (4) place the responsibility for reviewing medical separation files, which resided with the Military Entrance Processing Command, with an organization completely outside the screening process; and (5) direct all services to test applicants for drugs at the Military Entrance Processing Stations.

In its National Defense Authorization Act for Fiscal Year 1998 (P.L. 105-85), the Congress adopted all recommendations contained in our 1997 report on basic training attrition, except for our recommendation that all the services test applicants for drug use at the Military Entrance Processing Stations, which the services had already begun to do. Specifically, the act directed DOD to, among other things, (1) strengthen recruiter incentive systems to thoroughly prescreen candidates for recruitment, (2) include as a measurement of recruiter performance the percentage of persons enlisted by a recruiter who complete initial combat training or basic training, (3) improve medical prescreening forms, (4) require an outside agency or contractor to annually assess the effectiveness of the Military Entrance Processing Command in identifying medical conditions in recruits, (5) take steps to encourage enlistees to participate in physical fitness activities while they are in the Delayed Entry Program, and (6) develop a database for analyzing attrition. The act also required the Secretary of Defense to (1) improve the system of pre-enlistment waivers and assess trends in the number and use of these waivers between 1991 and 1997; (2) ensure the prompt separation of recruits who are unable to successfully complete basic training; and (3) evaluate whether partnerships between recruiters and reserve components, or other innovative arrangements, could provide a pool of qualified personnel to assist in the conduct of physical training programs for new recruits in the Delayed Entry Program. DOD and the services have taken many actions in response to our recommendations and the requirements in the Fiscal Year 1998 Defense Authorization Act. However, we believe that it will be some time before DOD sees a corresponding drop in enlisted attrition rates, and we may not be able to precisely measure the effect of each particular action.
While we believe that DOD’s and the services’ actions combined will result in better screening of incoming recruits, we also believe that further action is needed. As of January 1998, DOD reported that the following changes have been made in response to the recommendations in our 1997 report: (1) the Military Entrance Processing Command is formulating procedures to comply with the new requirement to obtain from military applicants the names of their medical insurers and health care providers; (2) the Accession Medical Standards Working Group has created a team to evaluate the Applicant Medical Prescreening Form (DD Form 2246); (3) DOD has adopted the policy of using codes from the International Classification of Diseases on all medical waivers and separations and plans to collect this information in a database that will permit a review of medical screening policies; (4) DOD plans to form a team made up of officials from the Office of the Assistant Secretary of Defense (Health Affairs) and the Office of Accession Policy to conduct semiannual reviews of medical separations; and (5) all services are now testing applicants for drugs at the Military Entrance Processing Stations. We believe that these actions should help to improve the medical screening of potential recruits and result in fewer medical separations during basic training. In its response to our 1998 report on recruiting, DOD stated that it concurred with our recommendations and would take action to (1) develop or procure assessment tests to aid in the selection of recruiters and (2) link recruiter rewards more closely to recruits’ successful completion of basic training. The Office of the Assistant Secretary of Defense for Force Management Policy is planning to work with the services to evaluate different assessment screening tests. This office will also ensure that all services incorporate recruits’ success in basic training into recruiter incentive systems. 
We understand that DOD plans to form a joint service working group to address the legislative requirements enacted in the National Defense Authorization Act for Fiscal Year 1998. Specifically, the working group will be tasked with devising a plan to satisfy the legislative requirements for DOD and the services to (1) improve the system of separation codes, (2) develop a reliable database for analyzing reasons for attrition, (3) adopt or strengthen incentives for recruiters to prescreen applicants, (4) assess recruiters’ performance in terms of the percentage of their enlistees who complete initial combat training or basic training, (5) assess trends in the number and use of waivers, and (6) implement policies and procedures to ensure the prompt separation of recruits who are unable to complete basic training. We believe that the steps DOD and the services have taken thus far could do much to reduce attrition. It appears that the soon-to-be-formed joint service working group can do more. As the group begins its work, we believe that it needs to address the following six areas in which further action is needed. First, we believe that DOD’s development of a database on medical separations is a necessary step to understanding the most prevalent reasons for attrition. However, we believe that DOD needs to develop a similar database on other types of separations. Until DOD has uniform and complete information on why recruits are being separated early, it will have no basis for determining how much it can reduce attrition. Also, in the absence of the standardized use of separation codes, cross-service comparisons cannot be made to identify beneficial practices in one service that might be adopted by other services. Second, we believe that all the services need to increase emphasis on the use of experienced recruiters to personally interview all potential recruiters or explore other options that would produce similar results. 
DOD agreed with the general intent of this recommendation but stated that it is not feasible in the Army due to the large number of men and women who are selected annually for recruiting duty and to the geographic diversity in their assignments. While it may be difficult for the Army to use field recruiters to interview 100 percent of its prospective recruiters, we continue to believe that senior, experienced recruiters have a better understanding of what is required for recruiting duty than operational commanders. Third, we believe that an ongoing dialogue between recruiters and drill instructors is critical to enhancing recruiters’ understanding of problems that lead to early attrition. DOD concurred with our recommendation to have recruiter trainees meet with drill instructors and recruits being separated or held back due to poor physical conditioning. However, the Air Force has no plans to change its policy of devoting only 1 hour of its recruiter training curriculum to a tour of its basic training facilities. We believe this limited training falls short of the intent of our recommendation. Fourth, we believe that the services should incorporate more structured physical fitness training into their Delayed Entry Programs. All the services are encouraging their recruits to become physically fit, but there are concerns about the services’ liability should recruits be injured while they are awaiting basic training. DOD is currently investigating the extent to which medical care can be provided for recruits who are injured while in the Delayed Entry Program. Fifth, we believe that, like the Marine Corps, the other services should administer a physical fitness test to recruits before they are sent to basic training. DOD concurred with this recommendation, and the Army is in the process of implementing it. The Navy and the Air Force, however, do not yet have plans to administer a physical fitness test to recruits in the Delayed Entry Program. 
Finally, we continue to believe that the services need to use quarterly floating goals for their recruiters. DOD did not fully concur with our recommendation on quarterly floating goals. DOD believes that floating quarterly goals would reduce the services’ ability to make corrections to recruiting difficulties before they become unmanageable. We believe, however, that using floating quarterly goals would not prevent the services from managing their accessions. The floating quarterly goals we propose would not be static. Each recruiter’s goals would simply be calculated based on a moving 3-month period. This floating goal would continue to provide recruiting commands with the ability to identify recruiting shortfalls in the first month that they occur and to control the flow of new recruits into the system on a monthly basis. At the same time, such a system has the potential of providing recruiters with some relief from the problems that were identified in the most recent recruiter satisfaction survey. Mr. Chairman, this concludes my prepared statement. We would be happy to respond to any questions that you or the other Members of the Subcommittee may have.
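The floating 3-month goal described in this statement can be illustrated with a short calculation. The sketch below is illustrative only and is not part of the GAO recommendation; the monthly target, enlistment counts, and function name are hypothetical.

```python
def quarterly_shortfall(enlistments, monthly_target):
    """Shortfall against a floating 3-month goal ending each month.

    Illustrative sketch: a recruiter's goal for any month is measured
    over the trailing 3-month window rather than a single calendar
    month, so a strong month can offset a weak neighbor, while a
    shortfall is still visible in the first month it occurs.
    """
    shortfalls = []
    for i in range(len(enlistments)):
        window = enlistments[max(0, i - 2):i + 1]
        # Pro-rate the goal for the first two months, when fewer
        # than three months of data exist.
        goal = monthly_target * len(window)
        shortfalls.append(max(0, goal - sum(window)))
    return shortfalls

# A hypothetical recruiter with a target of 2 enlistments per month:
print(quarterly_shortfall([2, 1, 3, 0, 2], 2))  # [0, 1, 0, 2, 1]
```

In this example, the weak second month (1 enlistment) is offset by the strong third month, so the window ending in month 3 meets its goal of 6, yet the shortfall in month 4 still surfaces the month it occurs, which is consistent with the testimony's point that commands retain monthly visibility.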
|
GAO discussed: (1) the historical problem of attrition of enlisted personnel and its costs; (2) the Department of Defense's (DOD) lack of complete data on why enlistees are being separated early; (3) GAO's recommendations on ways to improve the screening of recruiters and recruits; and (4) DOD's actions thus far to respond to GAO recommendations. GAO noted that: (1) despite increases in the quality of DOD's enlistees, about one-third of all new recruits continue to leave military service before they fulfill their first term of enlistment; (2) this attrition rate is costly in that the services must maintain infrastructures to recruit and train around 200,000 persons per year; (3) in fiscal year 1996, the services' recruiting and training investment in enlistees who separated before they had completed 6 months totaled $390 million; (4) solving the problem of attrition will not be simple in large part because DOD does not have complete data on why enlisted personnel are being separated; (5) GAO has concentrated on what it has found to be major categories of separation, such as medical problems and fraudulent enlistments; (6) because these types of separations involve the services' entire screening processes, GAO has reexamined these processes from the time recruiters are selected, through the time that applicants are prescreened by recruiters, through the medical examinations applicants undergo, and through the physical preparation of recruits for basic training; (7) GAO has recommended ways to improve the: (a) data DOD collects to analyze reasons for attrition; (b) services' criteria for selecting recruiters; (c) incentive systems for recruiters to enlist persons who will complete basic training; and (d) services' mechanisms for identifying medical problems before recruits are enlisted; (8) many of these recommendations have been incorporated into the National Defense Authorization Act for Fiscal Year 1998; (9) DOD and the services have already taken some positive 
steps in response to GAO's recommendations and the National Defense Authorization Act; and (10) however, GAO believes that DOD needs to take further action to change the criteria by which recruiters are selected, provide recruiters with more opportunities to interact with drill instructors, and revise recruiters' incentive systems to improve their quality of life.
|
Several Navy organizations share responsibilities for scheduling, planning, budgeting, overseeing, and setting policy for the repair, maintenance, and modernization of non-nuclear surface ships:
- The Secretary of the Navy, as directed by the Secretary of Defense, is responsible for conducting, and has the authority under Title 10 of the United States Code to conduct, all the affairs of the Department of the Navy, including overseeing the repair of naval ships.
- The Chief of Naval Operations is the senior military officer of the Department of the Navy and is responsible to the Secretary of the Navy for the command, utilization of resources, and operating efficiency of the operating forces of the Navy and of the Navy shore activities assigned by the Secretary.
- U.S. Pacific Fleet and U.S. Fleet Forces Command develop budgets for the operations and maintenance of ships, while also setting requirements for overall fleet readiness.
- Commander, Naval Surface Force, U.S. Atlantic Fleet and Commander, Naval Surface Force, U.S. Pacific Fleet—the Navy’s surface type commanders—have specific responsibilities for maintaining, training, and ensuring the readiness of their assigned surface ships. In addition, the type commanders have a significant role in scheduling repair planning activities, funding availability work, and coordinating the management and supervision of that work.
- The Assistant Secretary of the Navy for Research, Development and Acquisition serves as the Navy Acquisition Executive and has authority, responsibility, and accountability for all acquisition functions and programs, including surface ship repair, maintenance, and modernization. The Assistant Secretary also represents the Department of the Navy to the Under Secretary of Defense for Acquisition, Technology and Logistics and to Congress on all matters relating to acquisition policy and programs. 
Naval Sea Systems Command (NAVSEA) is charged with maintaining ships to meet fleet requirements, while doing so within defined cost and schedule parameters. NAVSEA has the further responsibility of establishing and enforcing technical authority in combat system design and operation. These technical standards ensure systems are engineered effectively, and that they operate safely and reliably. Figure 1 shows how these operating forces and shore-based entities are organized within the Navy. Within NAVSEA, several organizations provide headquarters-based and on-site, local support for surface ship availabilities. Functions these offices perform include contract administration, program management, and planning for future availabilities informed by the historical maintenance needs of Navy ships. Figure 2 highlights the various NAVSEA offices that participate in surface ship availabilities and their responsibilities. The level of complexity of ship repair, maintenance, and modernization can affect the length of a maintenance availability—which can range from a few weeks to more than 6 months—and informs whether the work will be competed among contractors only in the ship’s homeport or competed among all ship repair yards on the East or West Coast. The types of availabilities include the following:
- Chief of Naval Operations (CNO) availabilities are scheduled to accomplish industrial maintenance and modernization. Industrial maintenance requires complex industrial processes to perform restorative work on a ship, for example, involving structural, mechanical, and electrical repairs. Modernization requirements include changes that either add new capability or improve the reliability of existing systems. For example, the Navy is currently in the process of modernizing cruisers and destroyers to upgrade their combat systems. CNO availabilities can last 6 months or longer and are normally scheduled every 2 to 3 years throughout a ship’s service life. To inform the work scope for a CNO availability, Navy officials or contractor representatives typically perform one or more “ship checks” to assess the material condition of the ship in advance of the availability.
- Continuous Maintenance availabilities are for routine maintenance work, for example, repainting parts of a ship or repairing the nonskid surfaces on a flight deck. These availabilities are normally 2 to 6 weeks in duration and typically scheduled once per non-deployed quarter during a period when the ship will be in port.
- Emergent Maintenance availabilities are for work of an urgent nature when the risk of prolonged disruption to a ship’s operations makes higher payments for repair acceptable. These availabilities are only completed on an as-needed basis in order to keep a ship operating. For example, in 2015, staff at one regional maintenance center discovered a propeller blade was loose during a contractor’s routine cleaning of an underwater hull of an amphibious ship and immediately arranged for the repairs.
In support of its mission to ensure surface ships are mission-ready and able to achieve their expected service life, NAVSEA’s Surface Maintenance Engineering Planning Program (SURFMEPP) has developed a series of products used to support long-term maintenance for ships, focusing on capturing the technical requirements for a class of ships. For example, maintenance plans for a class of ships could identify a need for equipment overhauls, propulsion shaft replacements, and corrosion protection. To identify requirements for a specific ship, SURFMEPP coordinates the development of a “baseline availability work package” with the relevant type commander. This package represents the NAVSEA-mandated technical requirements to ensure a ship reaches its expected service life and meets its operational commitments and is tailored specifically to each ship. 
Planners then use these requirements as a basis for developing detailed work specifications that direct the ship repair contractor how to perform the work. SURFMEPP also manages the Master Specification Catalog, which is a module within the Navy Maintenance Database that contains information and specifications needed by planners to develop the work specifications for the repair or modernization of a specific surface ship. This catalog is the repository of all work item instructions used to execute contracted depot-level maintenance. Use of the catalog is intended to promote standardization and planning products that reduce costs and increase quality of contracted work. In September 2012, we assessed a Navy readiness strategy, known as the Fleet Response Plan, aimed at improving the readiness of Navy surface combatant and amphibious warfare ships. Our report recognized the Navy had taken steps to alleviate the consequences of deferred maintenance—such as reduced readiness and increased costs once repairs were made—by establishing SURFMEPP and the Commander, Naval Regional Maintenance Center (CNRMC) in 2010 to oversee the operations of the regional maintenance centers. However, we found the Navy had not assessed certain risks to implementation of the strategy, such as staffing shortages at SURFMEPP and CNRMC. We recommended that the Navy develop a comprehensive assessment of the risks the Navy faces in implementing its readiness strategy and develop alternatives to mitigate risks. However, in responding to our recommended actions, the Navy did not agree that a comprehensive assessment of risks was necessary or desirable—stating its view that existing assessment processes were sufficient—and did not take action. The Navy contracts with private shipyards and other firms for the repair, maintenance, and modernization of non-nuclear surface ships. These contractors comprise what is referred to as the ship repair industrial base. 
The extent of facilities required by a contractor to perform a maintenance availability varies by the complexity of the maintenance requirements. Contractors’ facilities might include shipyards with piers, drydocks, cranes, and separate facilities for pipe-fitting and valve repair. Certain repairs, such as inspecting or repairing the ship’s hull, or removing marine growth from the hull, might require placing a ship in a drydock. Figure 3 shows a drydock and crane. To support the execution of complex maintenance availabilities, the Navy has established a certification process to ensure that contractors are qualified to conduct the work. NAVSEA will grant a “Master Ship Repair Agreement” after certifying a ship repair firm’s capability and capacity to perform all aspects of shipboard work. To obtain this level of certification—the highest the Navy grants for ship repair—the firm must meet certain standards, including having the management, organization, production, and facilities to perform a complex repair. Certified firms must also be capable of subcontracting for elements beyond their capability or capacity, while ensuring that they have adequate oversight of the subcontracted effort. A June 1995 ship depot policy issued by the Secretary of the Navy requires that, whenever possible, ship repair and maintenance work of 6 months or less be performed by shipyards at or near the ship’s home port to improve the crew’s quality of life by reducing their time away from home. If the estimate is more than 6 months, the Navy expands the solicitation to include additional ship repair companies operating on the relevant U.S. coast. Over the years, the Navy has used different contracting strategies with the private sector to support the repairs and modernization for surface ships. 
Pre-MSMO (before 2004): According to Navy contracting officials, prior to the implementation of the MSMO contracting strategy that has been in place until recently, the Navy generally used firm-fixed-price contracts to contract for the maintenance and modernization of surface ships and used its own planning workforce to draft work specifications. A firm-fixed-price contract provides for a price that is not subject to any adjustment on the basis of the contractor’s cost experience in performing the contract. This contract type places maximum risk and full responsibility for all costs—and resulting profit or loss—on the contractor. It therefore provides maximum incentive for the contractor to control costs. In 1982, we reported on deficiencies with the Navy’s implementation of this contracting strategy for ship repairs. MSMO (2004 to present): The Navy has used the MSMO strategy, which features the use of cost-reimbursement contracts, to contract for ship maintenance work with the private sector. Cost-reimbursement contracts provide for the payment of allowable incurred costs, to the extent prescribed in the contract. Under a cost-reimbursement contract, the government does not contract for the performance of a specified amount of work for a predetermined price, but instead agrees to pay the contractor’s reasonable costs of performance regardless of whether the work is completed. In addition, as part of the MSMO strategy, the contractor responsible for executing the work develops the specifications to which the work was performed. While the Navy initially identified several benefits with the MSMO strategy, including contractor assistance with developing the work package specifications, Navy leadership determined that the business case for the strategy had deteriorated as ship availabilities were incurring excessive cost and schedule growth. 
MAC-MO (2015 to present): In 2015, the Navy began transitioning to the use of its newest contracting strategy for ship maintenance— MAC-MO—which relies on (1) cost-reimbursement type contracts with a third-party planner (i.e., a contractor other than the contractor performing the actual repair work) to develop work specifications and (2) firm-fixed-price contracts with ship repair contractors to execute availabilities. In addition, the MAC-MO contracting strategy features the use of indefinite delivery/indefinite quantity (IDIQ) contracts for ship repair contractors. IDIQ contracts do not specify exact times for delivery of supplies or services at contract award; those are established via task orders during contract performance. The use of multiple award, IDIQ contracts (contract awards to more than one contractor) and orders is consistent with Department of Defense Better Buying Power initiatives aimed at increasing competition. Shortly preceding implementation of the MAC-MO strategy, in November 2014 the Navy began implementing OFRP—a revision of its earlier Fleet Response Plan outlining fleet training, maintenance, deployment, and sustainment schedules. As we found in a May 2016 report, to meet heavy operational demands over the past decade, the Navy has increased ship deployment lengths and has reduced or deferred ship maintenance, reducing the predictability of ship deployments. In addition, we found that public and private shipyards involved in Navy ship maintenance face a number of challenges in completing maintenance on time, including unanticipated work requirements, workforce inexperience, and workload fluctuations. The OFRP is intended to prioritize maintenance by developing a predictable schedule that allows sufficient time to accomplish needed maintenance tasks and ensure that platforms meet their expected service lives. 
Our analysis of the key attributes of the MAC-MO contracting strategy versus its MSMO predecessor indicates that the new strategy offers significant potential benefits, key among them being the ability to control contract costs through the use of firm-fixed-price contracts. The Navy has taken several proactive steps, including market research and piloting, which provided insights ahead of the strategy’s implementation. Because MAC-MO is in the early stages of implementation, though, it is too soon to assess the extent to which the new strategy will achieve its objectives. The Navy’s objectives for the MAC-MO contracting strategy are to:
- maximize competition for surface combatants and amphibious ships,
- improve cost control, quality of workmanship, and schedule adherence, and
- maintain an appropriate level of flexibility and responsiveness to the fleet.
The MAC-MO contracting strategy differs from the previous MSMO strategy in four significant ways, as shown in table 1. The attributes of MAC-MO offer significant benefits as compared to MSMO. The increase in competition opportunities that MAC-MO offers has the potential to help save the taxpayer money, improve contractor performance, and promote accountability for results. MAC-MO contract structures also offer benefits as compared to MSMO. Under MAC-MO’s firm-fixed-price contracts for executing availabilities, prices do not change based on contractor performance, even if the contractor underbids to win the contract. For MAC-MO’s third-party planning contracts, NAVSEA determined that those should be cost-reimbursement type contracts, but that incentives were appropriate to motivate contractor performance. The contracts will feature two types of incentives, incentive fees and award terms. The incentive fees will allow the contractor to earn profit based on the accuracy of its work specifications, adherence to schedule, or both. 
The award term plan allows the contractor to earn additional option years, exercisable at the government’s discretion, if the government decides the contractor generally performed satisfactorily regarding quality, cost, and schedule. Prior to finalizing the MAC-MO acquisition plan in April 2015, NAVSEA conducted market research to identify how the proposed strategy could promote competition for the award of contracts for third-party planners and for the execution of maintenance availabilities. Market research—the process used to collect and analyze data about capabilities in the market that could satisfy an agency’s needs—is a critical step in the acquisition process, informing key decisions about how best to acquire goods and services. The FAR requires, among other things, that market research be used to promote and provide for full and open competition, and as part of the acquisition planning process, that contract requirements be structured to facilitate competition by and among small business concerns. NAVSEA contracting staff used a variety of market research techniques to inform their analyses, such as holding industry days and publishing requests for information on www.FedBizOpps.gov to gauge industry interest in competing for MAC-MO contracts. As a result of the analyses, NAVSEA:
- identified potential competition for the execution of complex maintenance availabilities for the six ship classes included in the strategy within the three homeports (Mayport, Florida; Norfolk, Virginia; and San Diego, California) and the East and West Coast-wide competitions, as well as for the third-party planning contracts;
- determined that two or more capable small businesses existed to justify setting aside noncomplex work to small businesses in the homeports of Norfolk and San Diego, but not Mayport or the coast-wide competitions; and
- made an initial determination that the use of a single award IDIQ, planned for the repair of destroyers, was feasible. 
However, according to NAVSEA officials, the Navy subsequently decided not to pursue this contracting approach on the basis of two factors. First, single award IDIQs would have required potential contractors to pre-price availabilities years into the future, which industry cited as highly problematic. In addition, NAVSEA found that the use of single award IDIQs would likely undermine its negotiating position with respect to individual modernizations. In addition, in 2014 NAVSEA used San Diego-based pilot availabilities for five ships to test the MAC-MO strategy, and assembled lessons learned. These availabilities, which SWRMC oversaw in San Diego, California, ranged in level of complexity. The Navy also considered lessons learned from earlier maintenance availabilities, particularly the USS Porter in 2013, for which NAVSEA awarded a firm-fixed-price contract for maintenance and collision damage repairs. A mixed maintenance team composed of personnel from SERMC and MARMC provided oversight over the planning and execution of this availability in Norfolk, Virginia. Table 2 identifies the MAC-MO attributes demonstrated during these pilot availabilities. While the pilot ships provided the Navy useful information, the Navy did not test all aspects of the MAC-MO strategy. For example, the pilot was limited to maintenance, modernization, and repair of DDG 51 and CG 47 class ships in San Diego, California. In addition, the cost of the more complex pilot availabilities—the destroyers USS William P. Lawrence and USS Spruance—was relatively low compared to more typical costs for surface combatants, suggesting that the scope of work was much less than a typical CNO availability. Our analysis showed that these availabilities cost about $4.2 million and $3.7 million, respectively, whereas CNRMC data from 2011 to 2014 shows the average cost of a CNO availability for a destroyer to have been about $17 million and a cruiser about $32 million. 
In responding to our analysis, SWRMC contracting staff said that the type of work conducted in the pilot availabilities was typical of other drydocking availabilities and was ideal because it was small enough to identify potential problems with the proposed strategy without risking significant schedule delays and cost overruns. NAVSEA began implementing the MAC-MO strategy following the Deputy Assistant Secretary of the Navy for Acquisition and Procurement’s approval in May 2014 and April 2015, respectively, of acquisition strategies for acquiring third-party planning services and for execution of the ship availabilities. In February 2015, NAVSEA awarded the first of three third-party planning contracts to QED Systems, Inc. and, in February 2016, NAVSEA awarded the first of the two multiple award IDIQ contracts specifically for complex availabilities in Norfolk, Virginia. To provide a bridge between when the MSMO contracts ended and the award of the MAC-MO contracts, NAVSEA awarded a series of single contracts for the execution of mostly destroyer availabilities, including one that was competed along the East Coast. NAVSEA refers to these as “gap ships.” Nine contract competitions to date have taken place for gap ships homeported in Norfolk, Virginia, and an additional availability was competed along the East Coast. According to NAVSEA officials, they do not anticipate requiring gap ship awards for any ships homeported in San Diego, California or Mayport, Florida. Figure 4 shows the timeline for these gap ship contract awards and other awards related to the MAC-MO strategy. By April 2016, the Navy had awarded all three of the third-party advance planning contracts—all to QED Systems, Inc. While the Navy anticipated competition, it reported that only the Landing Helicopter Assault/Landing Helicopter Deck class third-party advance planning solicitation received multiple offers. 
As a result, one firm is currently responsible for planning specifications for all of the MAC-MO availabilities. According to Navy staff we interviewed, QED Systems had prior experience drafting work specifications for ship availabilities as a subcontractor for MSMO ship repair contractors. As of August 2016, QED Systems had developed specifications for the Norfolk gap ships USS Normandy and USS Gettysburg and was in the process of planning additional availabilities. NAVSEA has awarded multiple award IDIQ contracts for the execution of complex availabilities in Norfolk and San Diego, and reported that MARMC and SWRMC have issued their first orders. In addition, as of August 2016, SERMC had issued a solicitation, with the intent to award multiple IDIQ contracts, for the execution of availabilities in Mayport, Florida. MARMC and SWRMC have posted draft solicitations for the award of IDIQs to small business contractors in their respective ports. The Navy has taken steps to mitigate potential challenges as it moves forward with the MAC-MO contracting strategy, primarily by responding to 11 key lessons learned from its pilot availabilities. As of August 2016, the Navy had taken actions that fully address 3 and partially address 8 of those lessons learned. A persistent theme across several of the lessons learned is the need for sufficient staffing within the regional maintenance centers (RMC)—a deficiency that has existed for years, according to NAVSEA officials. In addition, the lessons learned highlight the importance of stabilizing requirements prior to solicitation of firm-fixed-price contracts—a cornerstone of the MAC-MO approach. The Navy has developed new milestones that aim to do so; however, its discipline in meeting these milestones remains largely untested, and it has historically experienced challenges in this area.
In addition, although individual RMCs are assessing the outcomes of individual ship availabilities under MAC-MO, many different maintenance community stakeholders are involved and the Navy lacks a coordinated process to evaluate whether implementation of the new strategy is progressing as planned. Based on our analysis of Navy documentation and interviews with NAVSEA officials, we identified 11 key lessons learned stemming from the pilot maintenance availabilities. All but one of these lessons learned focused on the need to mitigate potential challenges associated with MAC-MO’s envisioned use of firm-fixed-price contracts and third-party planners. We considered lessons learned to be key if (1) NAVSEA staff documented them as lessons learned and (2) NAVSEA officials knowledgeable about the pilot ship experiences identified them as significant. According to our analysis, the Navy has made progress towards addressing the lessons learned, fully addressing 3 and partially addressing the remaining 8. Table 3 highlights these lessons learned, Navy actions related to them, and our assessment of the Navy’s actions. The documented need to hire additional staff applied to two attributes—use of firm-fixed-price contracts and use of third-party planners. This issue also surfaced in interviews we conducted at two of the three MAC-MO-implementing RMCs—MARMC and SWRMC. MARMC staff we interviewed reported they did not have the staff needed to implement MAC-MO, and SWRMC leaders reported that the San Diego pilot availabilities validated the importance of moving forward with hiring to approved staffing levels in areas such as specification review. However, firm-fixed-price contracts—such as those used under MAC-MO—generally should require fewer government resources to administer than the cost-reimbursement contracts of the MSMO strategy.
For instance, the use of a cost-reimbursement contract requires the contracting officer to determine, before award, that the contractor’s accounting system is adequate; to perform surveillance during execution to ensure the contractor is exercising effective cost controls; and to employ audits to ensure only allowable costs are being paid. None of these measures is necessary for firm-fixed-price contracts, under which the contractor must perform the specified work regardless of incurred expenses. In response to our question on why additional staff were needed to support the MAC-MO strategy, a senior CNRMC official commented that the current need for additional staffing at MARMC and SWRMC was not a result of the change in contracting strategy to MAC-MO, but rather indicative of persistent staffing shortages that existed under the MSMO strategy. For example, the official said that although RMC staff reported the need to hire qualified contracting specialists to support the MAC-MO strategy, shortages in this position existed under MSMO because the demands of the job produced high turnover. In 2014, several years after the establishment of the RMCs, U.S. Fleet Forces Command commissioned a study that assessed RMC manning requirements. CNRMC officials stated that this study served as justification for requesting approximately 300 additional staff across the RMCs beginning in fiscal year 2017. However, a senior CNRMC official cited budget constraints within U.S. Fleet Forces Command, which approves RMC budgets, as a limiting factor on how rapidly the RMCs could overcome existing staffing shortfalls. Nonetheless, a CNRMC official stated that MAC-MO might alleviate shortfalls, although it could be years before the impacts are realized. In addition, Navy officials stated that they plan to continuously assess and incorporate lessons learned throughout implementation of the MAC-MO strategy.
According to CNRMC officials, one recent example of this learning occurred during the execution of the gap ship availabilities in Norfolk, which identified a need to train contracting staff on how to obtain funding for contract changes when funding for the original contract had been obligated in the prior fiscal year. Since this process was never needed to fund changes to MSMO cost-reimbursement contracts, MARMC staff were unfamiliar with it. In addition, RMC officials identified a variety of lessons learned after the third-party planning contractor completed its first set of work specifications for the USS Normandy, a gap ship availability. RMC officials reported that although the third-party planner met almost all of the planning milestones, the experiences during the planning process underscored the importance of RMC staff meeting frequently with the planner to discuss and answer questions and to review specifications and outcomes of ship checks. Unstable work requirements have historically posed risks to the Navy’s maintenance and readiness goals and hold significant implications for the MAC-MO contracting strategy. Without stable requirements, the third-party planner cannot develop work specifications that reflect the full scope of the work to be done. In our May 2016 report on OFRP, we found that from 2011 to 2014, on average, surface combatants experienced a 34 percent increase in unanticipated growth in maintenance requirements, resulting in average annual cost growth of $164.8 million. Officials primarily attributed the unanticipated growth and new work to estimating difficulties and high operational tempo, among other reasons. Increases in growth and new work also have consequences for the length of a maintenance availability, as RMC staff and contractors need to negotiate contract changes and agree on costs.
For example, the Navy reported that from May to October 2015, the median time to process and complete negotiations for new work for surface combatants was 18 days, exceeding the Navy’s standard of 5 days. The MAC-MO San Diego pilot availabilities identified the need for NAVSEA to provide sufficient time to finalize work requirements (known as package lock) before the third-party planner develops the work specifications that accompany the solicitation. Accordingly, in 2015, NAVSEA proposed revised planning milestones for CNO availabilities, which lengthened the amount of time between the start of the planning process and the start of a maintenance availability from 360 days (the MSMO standard) to 540 days. Although key stakeholder roles remain the same under the new MAC-MO milestones, NAVSEA’s move to lock the work package earlier to allow time for solicitation of the contract has implications for stakeholders who develop modernization and maintenance requirements, as well as stakeholders who verify the accuracy of the work specifications prepared by the third-party planner. For example, the requirements must be locked 175 days, rather than 90 days, before the start of an availability. According to NAVSEA officials, the Navy has not yet formally approved and implemented the revised MAC-MO milestones. NAVSEA officials reported that they are currently using the revised milestones for firm-fixed-price contracts because NAVSEA wants to see how they work before formally approving them. See figure 5 for a comparison of the planning milestones for a CNO availability under MSMO and MAC-MO. Senior NAVSEA, type command, RMC, and SURFMEPP officials agreed that development of fully defined and timely work requirements is needed to support the planning process, but senior RMC officials as well as NAVSEA officials expressed concerns related to the occasionally conflicting goals of the fleet and the maintenance community.
As officials explained, the fleet would prefer to wait as late as possible to define the requirements because, for example, ship systems continue to operate—and can thus break—up to the point that a ship enters an availability. Alternatively, the maintenance community prefers to lock requirements early in order to support solicitation of the availability and award of the contract. One senior NAVSEA program official commented that defining requirements later in the planning process was possible under the MSMO contracting strategy because the contract holder was responsible for repairing and maintaining the same ship year after year and could more easily accommodate changes in the scope of work to be completed. According to Navy officials, under MSMO, the contractor could even be tasked with writing requests for contract changes, which was the common practice at MARMC, but not at SERMC or SWRMC. Under MAC-MO, by contrast, RMC staff exclusively are expected to develop these requests. Several NAVSEA officials, including RMC officials, and a type command official commented that the use of firm-fixed-price contracts under MAC-MO will force earlier definition of requirements, which necessitates the Navy becoming more disciplined in its planning processes. Further, one RMC contracting official experienced with the use of firm-fixed-price contracting commented that one of the biggest challenges to MAC-MO will be making sure stakeholders responsible for developing the requirements are collectively meeting each of the milestones for locking the requirements. Navy officials at the various commands we interviewed consistently acknowledged the importance of achieving accurate work specifications for a maintenance availability, as inaccurate work specifications could result in contract modifications, leading to schedule delays and cost growth and thus contravening the goals of MAC-MO.
Several senior Navy officials expressed optimism about the MAC-MO strategy’s likelihood of success because, they said, the nature of firm-fixed-price contracts would make the tradeoffs between adding additional work after the start of an availability and adhering to the schedule more apparent, adding discipline to the process. A CNRMC official commented that adding additional work under the MSMO contracting strategy was relatively easy because the type commands and the modernization teams could go straight to the contractor and ask for more work to be done, and the contractors were willing to have new work added. In contrast, under the MAC-MO strategy, adding work will be more time-consuming but more transparent because the cost of additional work will need to be negotiated before the work commences. Further, one senior acquisition planning official added that even if the need for new and growth work is identified after the contract is awarded, the government has the option of performing the work later, at a subsequent availability, provided the additional work does not relate to the core functionality of the ship or a safety issue. The Navy has processes in place for evaluating the contract performance of its individual surface ship availabilities, including metrics that measure schedule delays, cost growth, and contract changes associated with growth and new work. This evaluation process, which is centered in the RMCs, has largely carried over from the previous MSMO strategy, although under MAC-MO’s firm-fixed-price contracts it will not include award fee evaluation board reviews of the availability contractor. In addition, while the CNRMC collectively analyzes the metrics, it is not responsible for determining whether the strategy itself is achieving its objectives.
Apart from these availability-specific evaluations, the Navy does not have a systematic process in place to evaluate the extent to which the MAC-MO strategy is meeting its overall objectives and whether risks to its success, such as timely completion of work requirements under the proposed milestones and shortfalls in RMC staffing, have been cooperatively addressed and mitigated by stakeholders within the Navy maintenance community. According to federal standards for internal control, management should design control activities to respond to risks and evaluate if objectives are being met, which involves leadership-level reviews of performance and establishment of performance measures. As we have previously reported, risk assessment can provide a foundation for effective program management because it provides reasonable assurance that such risks are being minimized. As noted above, the Navy faces some challenges to successfully implementing MAC-MO. Greater discipline is required to plan and execute ship availabilities using firm-fixed-price contracts and third-party planners, requiring greater coordination among stakeholders in the fleet and NAVSEA to identify potential risks to the strategy. Achieving stable requirements and specifications requires extensive coordination within the type commands, across NAVSEA offices, and with the third-party planner—an approach the Navy has only demonstrated to a limited extent to date, primarily through its San Diego pilot availabilities. Further, as experiences with the Norfolk gap ships suggest, the Navy is likely to identify additional lessons learned. Without effective coordination across myriad stakeholders within the Navy’s maintenance communities who together are responsible for scheduling, planning, budgeting, overseeing, and setting policy for surface ship availabilities, there is the risk that MAC-MO will not be implemented as envisioned and the potential benefits may not be fully realized.
The Navy already recognizes the importance of establishing forums where issues of cross-cutting interest to the fleet and maintenance communities can be addressed. In June 2016, the Navy chartered a committee to identify and address maintenance and modernization requirements for surface ships. This committee, known as the Surface and Expeditionary Warfare Maintenance and Modernization Committee, includes stakeholders from the fleet and shore-based maintenance communities. As stated in the Navy instruction establishing the committee, this coordination is best accomplished through a standing group of knowledgeable and accountable representatives who actively participate in the development and assessment of maintenance and modernization requirements and resourcing solutions. In addition, as SWRMC recommended as part of its lessons learned from the San Diego pilot availabilities, a committee known as Surface Team 1 could track the successful aspects of MAC-MO’s implementation and develop metrics to evaluate its performance. The Navy has tasked Surface Team 1, a previously existing committee whose representatives also include members of the fleet and shore-based maintenance communities, with responsibilities for setting and developing surface ship maintenance and modernization priorities, but has not tasked it with assessing MAC-MO’s implementation. NAVSEA designed the MAC-MO contracting strategy to increase the number of competition opportunities for the maintenance and modernization of surface ships. This goal is achieved through a competitive ordering process for individual availabilities, expansion of the base of potential prime contractors to include small businesses, and greater use of coast-wide—rather than just homeport-specific—solicitations. Aside from these increased competitions, it is too soon to tell what other effects MAC-MO may have on the ship repair industrial base.
Navy MSMO contractors in the MAC-MO homeports of Mayport, Florida; Norfolk, Virginia; and San Diego, California stated they have begun taking steps to reduce overhead costs to position themselves to operate efficiently within a firm-fixed-price contracting environment. Contractor representatives report these steps include reduced investments in training and facilities. The effect of these steps, however, depends in part on factors unrelated to MAC-MO—most notably, the Navy’s ability to provide consistent and stable workloads within these ports. In contrast, non-MSMO contract holders, including small businesses, did not share these concerns, since they were accustomed to working in a firm-fixed-price contract environment and maintained less extensive facilities. All of the contractors we interviewed intend to compete for MAC-MO contracts, and several cited potential changes needed to their workforces to prepare for an environment of increased competition under MAC-MO. The MAC-MO strategy expands competition opportunities in three key ways: holders of IDIQ multiple award contracts will compete for orders for individual availabilities; noncomplex availabilities are set aside for small businesses; and coast-wide competitions will enable contractors not located in the ship’s homeport to compete for the maintenance availability. NAVSEA officials told us they expect increased competition to reduce the overall cost of ship availabilities, although it is too soon to determine if the Navy will realize these benefits. Details follow on each aspect of planned competition. Under the MAC-MO strategy, more opportunities for contractors to compete for work will exist because the multiple award contract structure allows the Navy to compete orders for each individual availability among the pool of IDIQ awardees. This represents a departure from the MSMO strategy because under MSMO, a single contract is awarded to one contractor to execute availabilities for a class of ship over a 5-year period.
Under MSMO, two contracts could be awarded for a class of ships—one for maintenance availabilities that required a drydock facility (docker contract) and one for those that did not (non-docker contract). To illustrate the number of IDIQ orders that could potentially be competed under the MAC-MO strategy, we analyzed DDG 51-class destroyer availabilities completed in Norfolk between fiscal years 2010 and 2014. The Navy executed these maintenance availabilities under two different MSMO contracts—a docker contract and a non-docker contract. We performed this analysis because under the MAC-MO strategy, individual availabilities—which were previously covered by a single MSMO contract—could now be competed as individual orders among the pool of IDIQ awardees. As shown in table 4, our analysis indicates that over a 5-year period, the Navy could have realized over 350 competitive orders for the destroyer availabilities it completed in Norfolk, had a MAC-MO IDIQ contract with associated competition opportunities been in place. In addition, NAVSEA officials told us they plan to broaden the pool of potential competitors for IDIQ complex and noncomplex awards by setting up rolling admissions for additional proposals, meaning that qualified contractors can apply to become part of the pool of IDIQ awardees beyond the initial IDIQ solicitation period. They plan to release a solicitation for rolling admissions in San Diego and Norfolk in early fiscal year 2017. The purpose of the rolling admissions is to expand the contractor base for modernization of surface combatants and amphibious ships. A representative from one small business, which did not hold an MSMO contract, told us it would consider applying for an IDIQ award for complex availabilities once the MAC-MO strategy is fully implemented because of the flexibility offered by rolling admissions.
The MAC-MO strategy broadens the pool of prime contractors qualified to compete for work in Norfolk, Virginia, and San Diego, California, by setting aside noncomplex availabilities in those locations for small businesses. Small businesses told us that historically they were more likely to work as subcontractors to MSMO contract holders, offering specialized services such as electrical work, sometimes under a teaming agreement with the prime contractor. Under MAC-MO, small businesses are not required to hold a Master Ship Repair Agreement (MSRA) certification in order to compete for noncomplex availabilities. In March 2016, a NAVSEA official briefed contractors that, instead, small businesses competing for noncomplex contracts would be required to have “MSRA-like” capabilities and capacity to successfully compete for the contract. The term “MSRA-like” means that small businesses will be required to have management and quality processes similar to those required of a certified MSRA holder, and the capability to successfully complete typical work requirements associated with continuous maintenance availabilities. As of September 2016, NAVSEA had not awarded any IDIQ contracts for noncomplex availabilities, so it is too soon to tell how NAVSEA will adjudicate this process. Small business representatives we interviewed consistently expressed interest in performing as prime contractors under MAC-MO. Representatives of all seven small businesses we interviewed stated that they plan to compete as prime contractors for noncomplex availabilities—even in Mayport, Florida, where the Navy plans to compete noncomplex availabilities among both small and large businesses. Some small business representatives noted they were likely to continue to act as subcontractors for complex availabilities. Representatives from small businesses identified a variety of factors they would consider in deciding whether to compete for complex availabilities.
For example, three small businesses told us it would depend on the nature of the work in a given availability and, more specifically, the facility requirements set forth in the solicitation. Further, four of the seven small businesses we interviewed told us they do not own their own piers or have the dredged water space alongside the piers to berth ships. These businesses told us they typically rely on the Navy’s facilities or those of large contractors to berth the ship so that they can conduct work on it. NAVSEA officials told us that it is not their intent for small businesses to perform work at facilities owned by large contractors and that, in general, the Navy will provide pier space for completion of noncomplex availabilities. Navy policy requires that, whenever possible, ship repair and maintenance work of 6 months or less be performed by shipyards at or near the ship’s homeport to improve the crew’s quality of life by reducing their time away from home. Although NAVSEA officials told us they solicited few if any coast-wide availabilities under MSMO, as part of the transition to the MAC-MO strategy, NAVSEA has already competed several availabilities coast-wide and plans to compete nine additional maintenance availabilities along the East and West Coasts from 2017 to 2019. Accordingly, any contractor on either coast with access to a pier and drydock will be able to compete for these availabilities. For example, shipyards in Charleston, South Carolina, and Pascagoula, Mississippi, would be allowed to compete for East Coast solicitations. NAVSEA officials told us they intend to evaluate the total cost of moving a ship out of its homeport—including fuel and transportation—before making an award for availabilities competed coast-wide, as moving ships from their homeport can be expensive and offset potential savings from the competition.
The Navy’s plan to compete nine coast-wide availabilities represents a significant increase over those competed under MSMO, when, according to NAVSEA officials, RMCs competed few if any coast-wide availabilities. Under the statute in effect since 1986 and Navy policy dating back to 1995, if the work will take 6 months or less and there is adequate competition available among firms able to perform the work at the homeport of the vessel, then the contract solicitation must be limited to homeport firms only. Contract solicitations for work taking longer than 6 months generally must be competed coast-wide. According to NAVSEA officials, under MSMO, availabilities were planned to be shorter than 6 months. Navy officials offered various reasons as to why availabilities under MSMO were planned to be completed in less than 6 months. Because MSMO contracts typically provided for 5 years of planned availabilities for a ship class within a given homeport, NAVSEA officials told us, estimates of availability durations regularly had to be made years before the actual work requirements were known. Nevertheless, by planning the availabilities to be less than 6 months, NAVSEA did not need to compete them coast-wide (as it would have under the 1986 statute) and move the ship out of its homeport. In one instance, though, we found an MSMO contract that included options to cover any instances of work anticipated to take longer than 6 months, such as extended modernization availabilities. The period of performance of these availabilities would therefore have exceeded the 6-month limitation if the options had been exercised. NAVSEA and RMC officials told us that, in general, RMC contracting staff have been opposed to moving a ship out of its homeport because of the potential negative effects on sailor morale and the anticipated costs of moving the ship. NAVSEA officials reported taking two separate actions to clarify the homeport exception to coast-wide competitions.
First, NAVSEA officials recognized that the Navy homeport policy does not use the term “work,” which is included in the current statute. Specifically, the homeport policy does not define the scope of work included in an availability or when the measurement of that work (estimating the number of days needed to execute the availability) should take place. NAVSEA officials stated they are drafting a revision to the 1996 homeport policy and that this draft revision will define the term “work” as meaning “work for the overhaul, repair, or maintenance of a naval vessel.” Additionally, the Navy’s proposed policy revision will require that, 540 days prior to the start of an availability, the Navy identify how work days will be measured for that availability. In addition, NAVSEA officials told us they are developing a legislative proposal to lengthen the 6-month exception to coast-wide competitions, allowing a longer period before a coast-wide competition is required, because availabilities with modernization packages now regularly exceed 6 months, unlike in the past. A variety of factors, including the Navy’s level of demand for maintenance and repair work at each of the three homeports in our review, will determine how the MAC-MO strategy might affect the industrial base, if at all. The possibility exists that some firms may choose to exit or enter the market, but it is too soon to tell how the MAC-MO contracting strategy might affect the industry’s capacity to meet the Navy’s long-term needs, especially since fluctuations in the Navy’s workload forecasts could also affect industrial base conditions within individual homeports. CNRMC officials told us they expect a predictable repair and maintenance workload in the homeports of Mayport, Florida; Norfolk, Virginia; and San Diego, California in future years, although this workload is cyclical in nature, as it was under the MSMO strategy.
Various factors, including the deployment of ships, can affect the demand for work in each of the homeports. For example, according to a Fleet Forces Command official, an upswing in workload for surface ships is expected in Norfolk as deployed ships move back into their homeport during fiscal year 2018. Similarly, the Navy plans to homeport newly constructed surface ships in San Diego, providing an upswing in future workload there as these new ships come in for maintenance and repairs. However, the Navy could make other decisions that could affect a homeport’s industrial base, such as when the Navy relocated three amphibious ships from Norfolk, Virginia to Mayport, Florida in fiscal year 2014. See figures 6, 7, and 8 for the Navy’s recent historical and forecasted workload in these three ports. Generally, former MSMO contract holders we interviewed in Norfolk and San Diego expressed less concern about the transition from MSMO to MAC-MO than they did the Navy’s ability to provide stable workloads in their ports, irrespective of contract type. In May 2016, we found that wide swings in port workload can have a negative effect on the private-sector industrial base, and various factors can affect those workloads. Further, Navy documents show that OFRP will drive changes to the maintenance cycles for carrier and expeditionary strike groups and, in turn, cause significant fluctuations in port workloads, which could affect the industrial base’s ability to hire and retain a skilled workforce. Navy officials stated that they have begun to take steps to ensure that ships that comprise a carrier or expeditionary strike group—including non-nuclear surface ships, such as destroyers, cruisers, and amphibious ships—stagger their maintenance start and stop timelines, which would alleviate, in part, the concerns that industry cited. 
Former MSMO and non-MSMO contractors offered various views on the potential effects of the MAC-MO strategy on the industrial base, primarily related to the need for contractors to compete for orders after the award of the IDIQ multiple award contracts. In part, these views are shaped by the various types of facilities—such as drydocks and piers—that an individual contractor maintains. According to two former MSMO contractors, these facilities represent significant capital investments on the part of the contractor, which then relies on sustained Navy workloads to fund their maintenance. Figure 9 highlights the characteristics of selected contractors we interviewed across the three ports where the Navy is implementing the MAC-MO strategy. Contractors we interviewed commented on potential challenges and changes they are making to prepare for the increase in competition opportunities under the MAC-MO fixed-price approach. Five former MSMO contractors told us they are working to reduce their overhead costs in order to remain competitive in a firm-fixed-price environment. In general, under the 5-year MSMO cost-reimbursable contract, they stated they had confidence that they would receive regular workload from the Navy for a given class of ships. This confidence underpinned investments they made in maintaining and upgrading their facilities and training their workforces. Under MAC-MO, which will require competition for every availability within a homeport, these contractors do not have similar confidence or visibility into future work. Consequently, three MSMO contract holders told us they are laying off staff and reducing training programs to remain competitive. These layoffs are in addition to ones in 2015 and 2016 reported by several Norfolk contractors and attributed to the decrease in workload in that port, which was unrelated to the MAC-MO strategy. Four MSMO contract holders also told us they are eliminating apprenticeship programs for workers.
Further, one contractor told us that it may cease dredging the water surrounding its drydock to reduce its overhead costs, which would prevent certain classes of ships from being serviced in that port. Because the Navy only recently implemented MAC-MO, it is not yet clear whether these reductions will actually occur and, if so, what their net effect will be on the industrial base's capability and capacity to respond to the Navy's maintenance needs. Non-MSMO contractors told us that they are accustomed to working under firm-fixed-price contracts, having served as prime contractors for the Military Sealift Command, commercial companies, and small-scale NAVSEA availabilities. However, six of the non-MSMO contract holders we interviewed were small businesses with varying experience working as a prime contractor for the Navy. Representatives from one small business told us that the type of contract does not change the type of work to be completed. Representatives from four small businesses told us they are making changes to become more competitive under MAC-MO, such as realigning staff positions to reduce the company's overhead costs. Both former MSMO holders and non-MSMO holders rely on full-time and temporary laborers to conduct work on Navy availabilities. Three MSMO contract holders told us they have laid off skilled laborers in response to decreases in work and may have to rely on temporary laborers to complete certain availabilities. One contractor told us that it is harder to secure and incentivize temporary laborers to complete requested work on time. Contractors also have the option of hiring new, untrained laborers into their workforces, but these individuals require time to train and become proficient at their trades, which can reduce work efficiencies in the near term. Two contractors also expressed concern about finding, training, and retaining qualified, skilled laborers when new contracts are secured under MAC-MO. 
Navy officials told us they anticipated certain workforce reductions within the private sector under the firm-fixed-price contract structure. Representatives from all of the companies we interviewed told us they plan to compete for work under the MAC-MO strategy; for many, the Navy is their primary customer. For instance, former MSMO contract holders in Norfolk reported they rely on Navy work for at least 97 percent of their revenue. However, nine of the companies we interviewed across the three MAC-MO ports reported that they diversify their Navy workload with work from other government customers and commercial work, and three would consider competing for other work should they not have a Navy contract in hand. For example, in San Diego, one former MSMO contract holder reported less than 60 percent of their revenue coming from the Navy. In Mayport, one small business contractor reported more than 40 percent of its revenue coming from commercial and other government customers and signaled an intention to shift more resources into commercial work if it did not secure a MAC-MO contract. Small businesses that depend on the Navy for work and do not own drydocks or piers told us they plan to compete aggressively for non-complex work. In addition, three of the four small businesses in Norfolk told us they depend on the Navy for more than 75 percent of their revenue. Three Norfolk small businesses told us they have relocated personnel to Mayport in order to compete for Navy availabilities there. In developing its MAC-MO contracting strategy, the Navy has taken a thoughtful approach that builds on the promising results from its pilot availabilities by incorporating lessons learned and establishing milestones that promote the timely definition of work requirements in availabilities. 
These steps reflect an upfront recognition on the part of the Navy that the practices and processes it employed to manage availabilities under cost-reimbursement, MSMO contracts would likely prove untenable under firm-fixed-price, MAC-MO contracts. However, the implementation process does not end there. Additional learning is likely to take place as the Navy orders ship maintenance availabilities under MAC-MO. New aspects of the strategy will be tested, as will the discipline of the Navy’s fleet and shore-based maintenance communities to adhere to the MAC-MO milestones they have set. Further, the actions the ship repair industrial base takes to adapt to MAC-MO will become more evident, as will any potential implications. Harnessing new lessons learned, and ensuring key stakeholders are committed to their implementation, can position the MAC-MO strategy for success. The Navy has not put in place such a process for MAC-MO. Particularly in light of the large and complex nature of ship repair stakeholders in the Navy, not ensuring that progress is systematically assessed and that new lessons learned are incorporated in a timely manner could undermine the Navy’s ability to obtain the improved cost, schedule, and quality outcomes it seeks under the new strategy. To realize MAC-MO’s benefits, the Navy will need information to decide on how to make adjustments to the strategy. The existing committees—Surface Team 1 or the Surface and Expeditionary Warfare Maintenance and Modernization Committee—could provide a starting point. 
In order to promote effective implementation of the MAC-MO contracting strategy, we recommend that the Secretary of Defense direct the Secretary of the Navy to complete the following action: Assign responsibility to a single entity composed of representatives from the fleet and shore-based maintenance communities, such as Surface Team 1, to perform systematic assessments of MAC-MO’s implementation that include the following:
- Review of lessons learned and identification of changes to Navy processes, including staffing, needed to support the MAC-MO strategy;
- Evaluation of performance against anticipated cost, schedule, and quality objectives, as outlined in the MAC-MO acquisition strategy; and
- Input and recommendations from all Navy parties that participate in the scheduling, planning, budgeting, oversight, and policy development for the repair, maintenance, and modernization of non-nuclear surface ships.
We provided a draft of this product to DOD for comment. In its written comments, reproduced in appendix II, DOD concurred with our recommendation on the need to provide systematic assessments of the MAC-MO strategy implementation. To address our recommendation, the Navy will identify criteria to be used to perform the assessment, identify appropriate stakeholders, identify which entity is best positioned to perform the assessment, and submit biennial reports beginning in December 2017 to the Director, Defense Procurement and Acquisition Policy in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. We are sending copies to appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. In 2015, the Navy transitioned to the Multiple Award Contract-Multi Order (MAC-MO) contract strategy for the maintenance and modernization of surface ships. This report assesses (1) the potential benefits of the MAC-MO contracting strategy, (2) process changes the Navy has taken to address any challenges and to capitalize on anticipated benefits, and (3) how the strategy might affect the Navy’s ship repair industrial base. To assess the potential benefits of the MAC-MO strategy, we analyzed acquisition planning and contract documentation and interviewed senior Naval Sea Systems Command (NAVSEA) officials about the strategy, including staff from the Deputy Commander for Surface Warfare. To determine the key differences between the MAC-MO and the Multi-Ship, Multi-Option (MSMO) contracting strategies in contract pricing, planning the work, ordering, and structuring the competition among ship repair contractors, we analyzed NAVSEA’s acquisition planning documentation for the MAC-MO strategy, reviewed the contents of selected MSMO contracts the Navy identified as illustrative, most recent, or still in a period of performance, and reviewed MAC-MO contract documentation for third-party planning contract awards. We also considered applicable Federal Acquisition Regulation (FAR) provisions describing the conditions under which firm-fixed-price and cost-reimbursement contracts are appropriate. To identify the Navy’s rationale on how to proceed with the new strategy, we analyzed acquisition planning documentation to understand how NAVSEA applied the results of its market research as prescribed by the FAR. 
To further our understanding of NAVSEA’s decision to proceed with the MAC-MO strategy, we examined the characteristics of ship availabilities used to pilot features of the strategy (for example, the use of firm-fixed-price contracts and indefinite delivery, indefinite quantity (IDIQ) contracts) and interviewed officials from the Southwest Regional Maintenance Center (SWRMC) in San Diego, California, which administered the pilot contracts. We also interviewed senior NAVSEA officials, including the Commander, Navy Regional Maintenance Center (CNRMC) staff, and contractors with experience in executing ship availabilities, to obtain their perspectives on the strategy. To identify the progress the Navy had made as of September 2016 in implementing the MAC-MO strategy, including the “gap ship” contract awards, we interviewed and obtained information from the Mid-Atlantic Regional Maintenance Center (MARMC) staff in Norfolk, Virginia and senior NAVSEA contracting staff, and analyzed supporting contract documentation. To assess process changes the Navy has made to address any challenges and to capitalize on anticipated benefits, we analyzed Navy documentation containing assessments of lessons learned from pilot maintenance availabilities used to test key features of the MAC-MO strategy. We identified a total of 18 lessons learned based on our assessment of the Navy’s documentation of the San Diego pilot and USS Porter maintenance availabilities. We categorized 11 of the lessons learned as key because they were also identified as lessons learned in one or more interviews with NAVSEA officials knowledgeable about the pilot ship experiences. We excluded 7 lessons that did not meet this additional criterion. We interviewed Navy officials responsible for availability funding and oversight, contract administration, and program management pertaining to the MAC-MO contracting strategy and pilot availabilities. 
These offices included the Deputy Commander for Surface Warfare; Commander, Naval Surface Force, Atlantic; Commander, Naval Surface Force, Pacific; CNRMC; MARMC in Norfolk, Virginia; SWRMC in San Diego, California; and the Southeast Regional Maintenance Center (SERMC) in Mayport, Florida. To assess the Navy’s progress in taking actions to address potential challenges posed by the 11 key lessons learned, we evaluated Navy documents, including staffing and training plans for the contracting workforce across the RMCs, proposals for revised planning milestones, strategy and planning documents, and the contents of contracts for the third-party planner. We also interviewed Navy contracting, maintenance, and program management officials previously mentioned. To assess the extent to which the Navy has taken actions, we developed the following three-point scale:
- Not Met—The Navy has not taken any action to respond to identified lessons learned.
- Partially Met—The Navy has taken some action to respond to the identified lessons learned, but has not completed the action needed to address the identified risk.
- Met—The Navy has completed the action needed to address the identified lesson learned.
To identify roles and responsibilities for planning maintenance availabilities, we reviewed procedural documents to ascertain the lead offices that administer, plan, and coordinate Navy availabilities, including organizations that oversee repair and modernization efforts at private shipyards. In addition, as previously discussed, we interviewed officials responsible for planning and implementing the strategy. 
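The three-point scale above can be applied mechanically once each lesson learned is paired with a rating. The sketch below illustrates how such ratings might be tallied; the lesson names and rating assignments are hypothetical and are not taken from the report.

```python
# Tally hypothetical lessons learned against the report's three-point scale.
# All lesson names and ratings below are illustrative, not the Navy's actual data.
from collections import Counter

SCALE = ("Not Met", "Partially Met", "Met")

def tally(assessments):
    """Count lessons learned by scale rating, rejecting unknown ratings."""
    counts = Counter()
    for lesson, status in assessments.items():
        if status not in SCALE:
            raise ValueError(f"unknown rating for {lesson!r}: {status}")
        counts[status] += 1
    return {s: counts[s] for s in SCALE}

# Hypothetical example: 11 key lessons learned, each assigned one rating.
example = {f"lesson-{i}": s for i, s in enumerate(
    ["Met"] * 5 + ["Partially Met"] * 4 + ["Not Met"] * 2)}
print(tally(example))  # → {'Not Met': 2, 'Partially Met': 4, 'Met': 5}
```

Rejecting ratings outside the defined scale keeps the tally consistent with the methodology's fixed categories.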
To describe the extent of maintenance overruns and their impact on the Navy, we used information from a previous GAO report that analyzed ship maintenance data from fiscal years 2011 to 2015, which included availabilities conducted before and after Optimized Fleet Response Plan implementation, to ascertain the extent to which maintenance availabilities for surface combatants had been completed on time. To identify the extent to which the Navy has made provisions to assess implementation of the strategy and if it is meeting its goals, we interviewed senior NAVSEA officials on whether performance metrics had been developed to assess the strategy and if an organization had been assigned responsibility. We used federal internal control standards to determine if the Navy appropriately defined objectives related to the contracting strategy; assessed its internal processes to identify risks related to the strategy, including the development of performance measures; and created strategies to mitigate those risks. To assess how the MAC-MO contracting strategy might affect the ship repair industrial base for surface ships, we examined the ways in which the strategy had the potential to increase competition opportunities and how the contractors within the industrial base might respond to these opportunities. To understand how IDIQ multiple award contracts and the setting aside of noncomplex work for small businesses might promote competition, we identified how MAC-MO and MSMO contract provisions differed, as previously described, and also obtained the perspectives of NAVSEA officials and selected contractors. To understand the potential of IDIQ multiple award contracts for increasing competition, we selected two contracts reflective of the work—DDG 51 class ships in the homeport of Norfolk, Virginia—that would be included under the MAC-MO strategy. 
To do so, we analyzed documentation listing the availabilities completed under two MSMO contracts—one contract requiring a drydock and one not requiring a drydock—to ensure we covered the range of availabilities that could be covered by a MAC-MO complex and noncomplex contract. We analyzed the data for consistency and completeness, although we did not trace the data to the original contract documentation. Since the purpose of this analysis was to illustrate how the number of competitive contract award opportunities could increase under an IDIQ contract for one class of ships, and the Navy’s maintenance needs can change year by year, the results are not generalizable to other availabilities or future time periods. In addition, to understand how the Navy intends to promote the use of coast-wide competitions, we interviewed NAVSEA policy officials about the application of the Navy’s June 1995 Ship Depot Maintenance Solicitation Policy and obtained data from CNRMC on the use of such competitions under MSMO. To identify the Navy’s projected workload for non-nuclear surface ships in the homeports of Mayport, Florida; Norfolk, Virginia; and San Diego, California, where the MAC-MO strategy will be implemented, we obtained data from CNRMC from fiscal years 2015 through the end of 2020. The CNRMC estimated these trends based on an analysis of needed staffing resources, including data housed in the Navy Database Environment. Since the purpose of our analysis was to show the Navy’s projections in anticipated port workload, we did not conduct our own assessment of the accuracy of these data. We excluded data on the coast-wide competitions from our analyses because these availabilities could be executed in ports other than the ship’s homeport. To obtain the perspective of contractors from the three homeports where the MAC-MO contracting strategy will be implemented, we conducted semi-structured interviews to obtain viewpoints from 14 selected contractors. 
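The competition arithmetic behind this comparison is straightforward: a single MSMO award covered all availabilities for a ship class in a homeport over the contract period, so there was one competition up front, whereas under MAC-MO each availability is ordered competitively among the IDIQ holders. A minimal sketch with hypothetical availability counts (the report does not publish these figures):

```python
# Compare the number of competitive opportunities under the two strategies.
# Availability counts here are hypothetical, for illustration only.

def msmo_competitions(availabilities: int) -> int:
    """One up-front competition covers every availability in the MSMO contract."""
    return 1 if availabilities > 0 else 0

def mac_mo_competitions(availabilities: int) -> int:
    """Each availability is competed as a separate order among IDIQ holders."""
    return availabilities

# Hypothetically, 8 availabilities over a contract period yield 1 competition
# under MSMO but 8 under MAC-MO.
for n in (4, 8):
    print(f"{n} availabilities: MSMO={msmo_competitions(n)}, "
          f"MAC-MO={mac_mo_competitions(n)}")
```

This framing also shows why the report's results are not generalizable: the opportunity count scales directly with the number of availabilities, which varies by ship class, port, and year.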
We identified 30 contractors that (1) held prime contracts under the MSMO contracting strategy, (2) the Navy identified as potential competitors in the MAC-MO acquisition plan, or (3) the Navy identified as potential competitors in its market research documentation. From these 30 contractors, we selected 14 contractors that represented a mix of these categories. Specifically, the 14 contractors included 6 former MSMO contract holders and 8 non-MSMO contract holders, which comprised 7 small businesses and 7 large businesses. We used data provided by the Navy to verify whether the selected contractors met the Navy’s small business certification requirements. We conducted 10 semi-structured interviews in person and 4 by teleconference. The viewpoints of the 14 contractors are not generalizable to all contractors that perform work under Navy maintenance, repair, and modernization contracts. Further, we used a data collection instrument to collect information from each of the selected 14 contractors on their facilities, workforce, and sources of revenue. For example, we gathered information on what types of facilities the contractor owned, such as a drydock or a pier, the number of the contractor’s full-time staff, and the percentage of revenue from entities other than the Navy. We verified whether 10 of the 14 contractors had drydocks during our onsite contractor visits. We did not verify the number of full-time staff that the contractor employed or the contractor sources of revenue. We conducted this performance audit from September 2015 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Christopher R. Durbin, Assistant Director; Pedro A. Almoguera; Peter Anderson; Sonja Bensen; Jessica M. Berkholtz; Lorraine R. Ettaro; Kurt S. Gurka; Cale T. Jones; Charles T. Schartung; Leslie G. Stubbs; and Roxanna Sun made key contributions to this report.
|
The Navy has over 150 non-nuclear surface ships that it repairs, maintains, and modernizes using privately owned shipyards. The Navy concluded in 2010 that readiness of the surface ship force was below acceptable levels. This, in addition to the concerns of leadership about cost and schedule growth, led to a revised readiness strategy and, in 2015, introduction of a new contracting strategy for ship repair, referred to as MAC-MO. House Report 114-102 accompanying the fiscal year 2016 National Defense Authorization Act included a provision for GAO to review the Navy's implementation of the MAC-MO strategy. This report assesses (1) the potential benefits of the MAC-MO contracting strategy, (2) process changes the Navy has made to address any challenges and to capitalize on anticipated benefits, and (3) how the strategy will potentially affect the Navy's ship repair industrial base. GAO analyzed the Navy's acquisition planning documentation, lessons learned, and contracts. GAO interviewed Navy officials and visited regional maintenance centers in Norfolk, Va.; San Diego, Calif.; and Mayport, Fla. GAO also interviewed previous and prospective Navy ship maintenance contractors. The Navy's Multiple Award Contract-Multi Order (MAC-MO) contracting strategy for ship repair offers a number of potential benefits compared to the former Multi-Ship, Multi-Option (MSMO) contracting strategy, including increased competition. A key difference is that the MAC-MO strategy intends to control costs through the use of firm-fixed-price contracts and the use of third-party planners, which could be cost-effective if the planner produces clearly defined work specifications for the repair contractor to price and execute. Prior to implementation of the new strategy, the Navy conducted market research and pilot-tested attributes of the strategy with pilot maintenance periods for a number of ships. 
The Navy recognized several lessons learned from its pilot maintenance periods and has made subsequent process changes to address key lessons and support MAC-MO. These include a longer time frame for the planning process for finalizing work requirements (see figure). According to the Navy, this additional time is needed to promote stable requirements and, therefore, pricing. The Navy is assessing outcomes of individual maintenance periods; however, it lacks a systematic process involving the fleet- and shore-based maintenance communities to assess overall implementation of MAC-MO. This is inconsistent with federal standards for internal control, which state that management should evaluate its response to risks and evaluate progress made toward program objectives. Not ensuring progress is systematically assessed—particularly in light of the many stakeholders involved—could undermine the Navy's ability to obtain the improved outcomes it seeks with the MAC-MO strategy. The MAC-MO strategy will increase competition opportunities and set aside work for small businesses, but it is too soon to determine how these changes will impact the ship repair industrial base. Industry viewpoints GAO collected on MAC-MO varied both by shipyard location and contractor size. However, former MSMO contract holders reported that the uncertainty associated with the need to continually compete for work could result in decisions to reduce their workforce and facilities. Small businesses GAO spoke with have in the past mostly performed work as subcontractors to MSMO contract holders, but many expressed interest in competing as prime contractors under MAC-MO. GAO recommends the Navy assign responsibility to a single entity to systematically assess implementation of the MAC-MO strategy. DOD agreed with GAO's recommended action and plans to report biennially on strategy implementation.
|
U.S.-flag fleet participants in cargo preference food aid shipments comprise two general categories of carriers: charter service and liner service. The cargo preference and Maritime Security Programs are intended to support both as part of the U.S.-flag fleet. These programs are administered by MARAD, while the food aid programs are administered by USAID and USDA. Vessels in the privately owned U.S.-flag fleet engaged in international commerce can be placed into two general categories: charter service and liner service. While most non-MSF carriers provide charter service, most MSF carriers provide liner service, as shown in table 1 below. Most charter service vessels are operated by non-MSF carriers. Charter service means that vessels are hired to carry a cargo to specific ports at a specific time; these vessels do not provide regularly scheduled service on a fixed route but typically carry a shipload of cargo for only one or a few customers at a time. Charter service is primarily provided by bulk, break-bulk, and tug-barge vessels that can carry either bulk or bagged cargo. Bulk vessels are designed to carry dry bulk commodities, such as rice or wheat, in large interior holds. The benefit of bulk shipments is the economies of scale that can be gained from shipping large amounts of a single commodity. Figure 1 shows a photograph of a bulk vessel. Break-bulk vessels are general cargo ships that are designed to carry nonuniform items packaged as single parcels or assembled together on pallet boards. Bagged commodities are stacked and secured within interior holds of the ship. Tug-barge vessels have a tugboat or towboat that propels a separate barge by pushing or towing it. Barges generally carry bulk or break-bulk cargo, although some also carry containerized cargo. Most MSF vessels are liner service vessels. Liner service means that vessels have regularly scheduled sailings on fixed routes. 
These vessels typically carry small amounts of cargo for many customers at one time and will sail, even if not completely full. Liner service is primarily provided by containerships that carry bagged cargo; they do not carry bulk cargo. Containerships are designed to carry cargo in standard-size, preloaded containers that are stacked next to and on top of each other on the ship. The benefit of containers is that they permit rapid loading and unloading and efficient transportation of cargo to and from the port area. Containers facilitate intermodal transportation because they can be loaded by the supplier and sealed, taken by truck or railcar to the port, then loaded onto the containership without the cargo being handled. In the case of food aid, generally the suppliers do not load the containers but instead ship bagged commodities by rail or truck to the port of loading, where they are loaded into the containers. Figure 2 shows a photograph of a containership. Liner service is also provided by Lighter Aboard Ships (LASH), which are barge-carrying vessels that use barges like containers. They are also intermodal because the barges can use rivers and canals to pick up and drop off cargo at interior loading docks. The cargo preference and Maritime Security Programs are both intended to bolster the U.S.-flag market share in international commerce, as well as to ensure the availability of an adequate number of U.S.-flag ships and U.S.-citizen mariners in the event of a national defense need. The cargo preference laws are part of the overall statutory program to support the privately owned and operated U.S.-flag commercial fleet, or merchant marine. DOD and MARAD consider the merchant marine vital to U.S. national security, providing essential sealift capability in wartime. The ships that carry these cargoes also provide jobs for American seafarers who are available in time of national emergency to crew the sizable fleet of reserve government vessels. 
As an agency of the Department of Transportation, MARAD’s responsibilities include promoting the development and maintenance of the U.S. merchant marine. It administers both the cargo preference and Maritime Security Programs. The Maritime Security Program is more targeted than the cargo preference program in terms of the vessels that can participate. It is intended to guarantee that certain kinds of militarily useful ships and their crews will be available to DOD in a military contingency. Under the renewed program starting in 2005, DOD must approve the proposed vessels as militarily useful. The program’s main focus has been to enable globally competitive carriers that operate militarily useful vessels to enter or keep U.S.-flag status. Most MSF vessels are containerships, operated by some of the largest containership carriers in the world. For instance, MSF carriers Maersk Sealand, P&O Nedlloyd, and APL were ranked among the top four containership carriers by volume as of May 2001, according to the Bureau of Transportation Statistics. These containership carriers have intermodal systems that also come as part of the package, allowing DOD to benefit from private sector global transportation and communication networks. According to MARAD, these networks include not only vessels, but also logistics management services, infrastructure, terminals and equipment, communications and cargo-tracking networks, and thousands of trained, professional U.S.-citizen mariners and shoreside employees located throughout the world. The Maritime Security Program also results in the reflagging of new and more efficient vessels to U.S. registry for participation in MSF. The program requires that vessels be less than 15 years old to participate (except that LASH vessels can be 25 years old). From its implementation in 1996 through 2002, a total of 18 modern commercial liner vessels, with an average age of less than 9 years, were reflagged to U.S. 
registry for participation in MSF, according to MARAD. USAID and USDA’s Foreign Agricultural Service are responsible for administering the food aid programs that provide humanitarian food assistance to countries in need. The food aid programs had an annual average budget of $1.97 billion during fiscal years 1999 to 2003, according to USDA. The primary mechanism through which the U.S. government implements its international food assistance initiatives is P.L. 480. Food assistance provided under P.L. 480 is delivered to foreign countries through three separate programs: Titles I, II, and III. USDA administers Title I, which provides for government-to-government sales of agricultural commodities to developing countries on credit terms or for local currencies. USAID administers Titles II and III. Title II provides for donation of U.S. agricultural commodities to meet emergency and nonemergency food needs in other countries, and it is by far the largest of the food aid programs. Title III provides for government-to-government grants to support long-term growth in the least developed countries but has been inactive in recent years. In addition to P.L. 480, food aid is provided through three smaller programs administered by USDA’s Foreign Agricultural Service: Food for Progress, section 416(b), and the McGovern-Dole International Food for Education and Child Nutrition Program. The cargo preference and Maritime Security Programs both provide incentives to retain privately owned U.S.-flag ships and their U.S.-citizen mariners for commercial and national defense purposes. Cargo preference makes available a protected market that provides the economic incentive for vessel owners to pay the higher costs associated with the U.S. flag and employ U.S.-citizen crews. We found that a total of 190 privately owned U.S.-flag vessels carried cargo preference food aid shipments at some point during the fiscal year 1999 to 2003 period. 
In addition, the Maritime Security Program provides a subsidy for MSF carriers with particular militarily useful vessels. MSF currently has 47 ships, of which 37 have participated in cargo preference food aid shipments. DOD strongly supports both programs and said it has benefited from both during the recent wars in Afghanistan and Iraq. Preference cargoes are intended to provide the economic incentive for vessel owners to pay the higher costs associated with U.S.-flag registry and employ U.S.-citizen crews. According to MARAD, due to high U.S. labor costs; safety, health, and environmental regulations; and taxes, it is more expensive for vessels to be U.S.-flagged. For instance, U.S.-flag vessels generally incur higher labor costs due to higher manning level requirements, as well as higher wages and benefits for U.S.-citizen mariners. The cargo preference laws, by guaranteeing the availability of cargo to U.S.-flag ships, contribute to the financial viability of U.S.-flag vessel operating companies, thereby helping to ensure that the vessels, trained crews, and vessel service industries continue to exist, according to MARAD. The cargo preference program provides this incentive by reserving a portion of the U.S. market for U.S.-flag vessels, despite the higher prices they typically charge. In the food aid transportation market, a minimum of 75 percent of food aid shipments must be shipped on U.S.-flag vessels. The U.S.-flag vessels (both MSF and non-MSF) participating in cargo preference food aid shipments during fiscal years 1999 to 2003 comprised a variety of vessel types. According to our analysis of USDA data, a total of 190 individual vessels participated in food aid shipments at some point between 1999 and 2003. This included 111 bulk, break-bulk, tug-barge, and tanker vessels that provided charter service and 79 containership, LASH, and other vessels that provided liner service. These vessels were operated by 38 carrier companies. 
We found that the level of dependence on food aid varied significantly among carriers. We interviewed representatives of 15 of the top carriers that participated in U.S.-flag cargo preference food aid shipments during 1999 to 2003, comprising 77 percent of food aid revenues. Of the 10 non-MSF carriers we interviewed that generally provided charter service, 4 said that 60 percent or more of their annual revenues came from food aid shipments, 3 said between 20 and 50 percent, and 3 said less than 10 percent came from these shipments. Most of the five MSF carriers we interviewed that provided liner service said that food aid revenues comprised a small percentage of their total revenues. The Maritime Security Program was authorized for fiscal years 1996 to 2005 and provides about $100 million in annual funding for up to 47 vessels to participate. Each participating vessel receives an annual subsidy payment of $2.1 million, intended to partially offset the higher operating cost of keeping these vessels under U.S.-flag registry. In November 2003, Congress passed another 10-year authorization for the Maritime Security Program, starting in fiscal year 2006, that would expand the program from 47 to 60 vessels. Annual subsidy payments were increased from a flat $2.1 million payment to an escalating payment of $2.6 million for fiscal years 2006 to 2008, $2.9 million for fiscal years 2009 to 2011, and $3.1 million for fiscal years 2012 to 2015, always subject to the availability of congressional appropriations. According to MARAD officials and MSF carrier representatives we interviewed, the combination of MSF subsidy and access to cargo preference shipments, including food aid shipments, enables these containership carriers to stay in MSF and creates incentives to reflag newer vessels.
While most MSF carriers primarily carry commercial cargo, MSF carrier representatives said that they need both MSF subsidy and cargo preference food aid shipments to offset the higher costs of operating as a U.S.-flag vessel. MARAD stated in its 2002 annual report that the current $2.1 million subsidy represents about 13 percent of the cost of operating a U.S.-flag vessel. According to a MARAD official, the subsidy only partially offsets the higher cost of employing U.S.-citizen mariners. However, during the interviews, MSF carrier representatives said that the subsidy was important to them because it was a guaranteed monthly payment that provided a level of financial stability. MSF currently comprises 47 vessels operated by 12 companies, based on data as of December 2003. These vessels include 38 containerships, 1 LASH, and 8 roll-on/roll-off vessels. Of the vessels currently participating in MSF, 36 containerships and 1 LASH vessel participated in cargo preference food aid shipments during fiscal years 1999 to 2003. Approximately 2,162 mariners are employed on these ships, according to MARAD. (See app. III for a profile of the current MSF participants.) DOD strongly supports both the cargo preference and Maritime Security Programs. DOD officials said that DOD’s priority is to maintain or increase the current level of U.S.-flag ships and mariners and, therefore, it strongly supports both programs. Through the cargo preference and Maritime Security programs, an additional manpower pool is maintained that DOD can draw on to crew the reserve fleet. DOD officials said that the Maritime Security Program, in addition to guaranteeing militarily useful U.S.-flag ships and trained U.S.-citizen mariners, provides access to MSF liner carriers’ intermodal systems, which is important to DOD. In testimony before the House Armed Services Committee on October 8, 2002, General John W. Handy, Commander of the U.S. 
Transportation Command, strongly supported reauthorization of the Maritime Security Program. He stated that DOD limited its sealift fleet to those assets that the commercial sector could not provide, so that only 33 percent of the vessels DOD may require resided in its own fleet. The remainder of the sealift capacity, needed to transport military equipment and supplies, came from the commercial sector. DOD officials said that DOD had benefited from both programs during the Afghanistan and Iraq wars. For example, during Operation Iraqi Freedom, DOD did not need to pull MSF vessels out of their normal commercial service. Instead, it chartered two MSF roll-on/roll-off vessels for DOD use and used the other MSF vessels in their normal commercial routes, where appropriate, to meet its needs, according to a DOD official. This official said that DOD preferred to leave MSF vessels in their normal commercial service because then DOD would also be able to benefit from use of their global intermodal systems. MSF carriers may have had to displace some commercial cargo but otherwise continued business as usual. During the period January 1 to October 14, 2003, MSF vessels made 135 voyages carrying cargo to sustain the Iraqi deployment. This effort included 35 MSF containerships. These vessels transported a total of 8,668 twenty-foot equivalent units (TEUs), according to DOD data. According to MARAD, more than 7,500 merchant mariners served in Operation Iraqi Freedom. Of these, about 1,470 mariners served on MSF vessels, based on a DOD estimate. MSF and non-MSF carriers compete only for bagged food aid shipments because MSF vessels do not carry bulk food aid. Although the majority of food aid continues to be shipped as bulk cargo, bulk food aid shipments decreased from fiscal years 1999 to 2003, partly because of changes in food aid spending. The recent decline in bulk cargo has caused some non-MSF bulk carriers to rely more on bagged cargo.
Non-MSF carriers transported about 55 percent and MSF carriers 45 percent of bagged food aid shipments from fiscal years 1999 to 2003. Cargo preference requirements affect whether agencies award bagged food aid shipments to MSF or non-MSF carriers. Most of the bagged food aid cargo carried by MSF and non-MSF vessels is loaded for export at U.S. ports in the Gulf of Mexico. Although on average approximately 67 percent of food aid was shipped as bulk cargo and 33 percent as bagged cargo from fiscal years 1999 to 2003, the share of food aid shipped as bagged cargo generally increased during these years. This change was due mostly to a decline in USDA’s purchases of bulk agricultural commodities for the food aid program. Although USDA purchases of bulk commodities remained relatively stable from 1996 to 1998, they increased dramatically in 1999 and then declined steadily from 1999 to 2003. As the procurement data in figure 3 show, purchases of bulk commodities decreased from 5.76 million tons in 1999 to 2.39 million tons in 2003. However, purchases of bagged commodities equaled about 2 million tons each year during this period. Thus, the percentage of commodities procured that were bagged increased from 26 percent in 1999 to 46 percent in 2003. Changes in food aid spending from fiscal years 1999 to 2003 have contributed to this shift from bulk to bagged cargo. The largest food aid program is administered under P.L. 480 Title II, which experienced an increase in funding from 2000 to 2003. The P.L. 480 statute requires that at least 75 percent of agricultural commodities donated for development, or nonemergency, purposes be value-added. Value-added commodities are shipped as bagged cargo, as opposed to bulk. Aside from Title II development assistance, many of the commodities donated by the Food for Education and section 416(b) food aid programs also have been shipped as bagged cargo in recent years, according to USDA officials. However, spending for the P.L. 
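The rising bagged share follows arithmetically from the procurement figures above: bagged procurement held roughly steady while bulk procurement fell. A minimal sketch of that calculation, using the approximate tonnages cited in the text:

```python
# Illustrative check of the bagged-cargo shares cited in the text,
# computed from the approximate procurement tonnages (in tons).

def bagged_share(bulk_tons, bagged_tons):
    """Bagged commodities as a percentage of total procurement."""
    return 100 * bagged_tons / (bulk_tons + bagged_tons)

# 1999: 5.76 million tons bulk, about 2 million tons bagged.
share_1999 = bagged_share(5.76e6, 2.0e6)
# 2003: 2.39 million tons bulk, about 2 million tons bagged.
share_2003 = bagged_share(2.39e6, 2.0e6)

print(round(share_1999), round(share_2003))  # 26 46
```

The calculation makes explicit that the shift in shares reflects the decline in bulk purchases rather than any growth in bagged tonnage.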
480 Title I food aid program generally declined from 1999 to 2003. Most commodities sold under the P.L. 480 Title I program are shipped as bulk cargo, such as wheat, corn, and soybeans. Many non-MSF carriers depend on cargo preference food aid shipments for a large share of their business; therefore, the decline in bulk cargo has meant increased reliance on bagged cargo shipments. According to the interviews we conducted with non-MSF carriers, some non-MSF carriers that traditionally ship bulk food aid reacted to the decline in bulk food aid shipments by increasing their participation in bagged food aid shipments. Figure 4 shows that while total shipments of food aid by non-MSF vessels decreased over the fiscal year 1999 to 2003 period, the decline in bulk food aid shipments was partially offset by an increase in bagged food aid shipments. Among non-MSF carriers that have shipped bulk food aid, 43 percent have also shipped bagged food aid. MSF and non-MSF vessels combined carried a total of 6.73 million metric tons of bagged food aid cargo and earned an average of $430 million each year from bagged food aid shipments from 1999 to 2003. MSF vessels carried about 45 percent of this cargo and non-MSF vessels carried 55 percent. However, non-MSF carriers’ share of the bagged food aid market was clearly greater in 2002 and 2003 than in the previous 3 years, as shown in figure 5. The MSF cargo was shipped by five companies: four operating 42 containerships and one operating five LASH vessels. Each MSF containership carried an average shipment of 950 tons per voyage, and each MSF LASH vessel carried an average shipment of 22,440 tons per voyage. The non-MSF cargo was shipped by 38 companies that operated 143 vessels. Each non-MSF vessel carried an average shipment of 1,750 tons of bagged cargo per voyage, almost twice the average shipment of each MSF containership.
Cargo preference requirements affect the results of competition between MSF and non-MSF carriers for food aid shipments. One requirement that has tended to favor non-MSF carriers is MARAD’s interpretation of U.S.-flag service for the cargo preference program. Figure 6 outlines the criteria agencies are required to follow when awarding shipments subject to cargo preference laws. As the figure indicates, an ocean carrier that offers to carry preference cargo on a U.S.-flag vessel can be counted as either Priority 1 or Priority 2 service. For example, a U.S.-flag vessel qualifies for Priority 1 service if it offers to transport preference cargo on a U.S.-flag vessel or transship the cargo to U.S.-flag vessels for the entire portion of the waterborne voyage. However, a U.S.-flag vessel would qualify for Priority 2 service if it transshipped the cargo to a foreign-flag vessel for any leg of the voyage. In the absence of Priority 1 service availability, agencies may also count Priority 2 as Priority 1 service by default. Most non-MSF vessels qualify for Priority 1 service because they offer the food aid program charter service entirely on a U.S.-flag vessel. However, vessels that operate in liner service, such as MSF containerships, often qualify for Priority 2 service because they transfer shipment of (transship) food aid cargo to a foreign-flag vessel for a leg of the voyage. In some locations, however, some MSF carriers have started to transship food aid cargo to prepositioned U.S.-flag vessels instead of foreign-flag vessels so that they can qualify as Priority 1 service. In fact, as figure 7 shows, liner vessels that carried Title II food aid cargo from fiscal years 1999 to 2003 qualified for Priority 1 service about 48 percent of the time. Liner vessels counted as Priority 1 service by default about 23 percent of the time and Priority 2 or 3 service about 29 percent of the time. 
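The priority criteria summarized above can be expressed as a simple classification rule. The sketch below is our own simplification for illustration: the function names are ours, the actual figure 6 criteria contain conditions not modeled here, and we treat any non-U.S.-flag offer as Priority 3.

```python
# Simplified sketch of the cargo preference priority rules described
# above. This illustrates the logic only; MARAD's actual criteria
# contain additional conditions.

def service_priority(us_flag, transships_to_foreign_flag):
    """Classify a single carrier offer as Priority 1, 2, or 3."""
    if us_flag and not transships_to_foreign_flag:
        return 1  # U.S.-flag for the entire waterborne voyage
    if us_flag:
        return 2  # transships cargo to a foreign-flag vessel for a leg
    return 3      # simplification: any other offer is Priority 3

def best_available_priority(offers):
    """Agencies award to the best priority offered; absent Priority 1
    offers, Priority 2 may count as Priority 1 by default (modeled
    here as always counted)."""
    best = min(service_priority(us, leg) for us, leg in offers)
    return 1 if best == 2 else best

# A charter offer entirely on a U.S.-flag vessel is Priority 1;
# a liner offer with a foreign-flag leg is Priority 2.
print(service_priority(True, False), service_priority(True, True))  # 1 2
```

Under this rule, a non-MSF charter offer outranks an MSF liner offer that transships to a foreign-flag vessel, which is why some MSF carriers have begun transshipping to prepositioned U.S.-flag vessels instead.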
Under the cargo preference program, agencies are required to award food aid shipments to carriers that offer Priority 1 service over carriers that offer Priority 2 or 3 service, even if the freight rate charged by the carrier offering Priority 1 service is higher, unless the rate exceeds MARAD’s fair and reasonable rate calculation. Other cargo preference requirements tend to favor MSF carriers. An example of a requirement that has benefited MSF carriers is section 17 of the Maritime Security Act of 1996. This provision allocates up to 25 percent of the total tonnage of Title II bagged food aid cargo each month to Great Lakes ports. Moreover, shipments of this cargo are awarded to carriers without regard to the flag of the vessel offering service and therefore are not subject to MARAD’s priority rules. From fiscal years 1999 to 2003, MSF vessels and foreign-flag vessels carried an estimated total of 221,000 tons and 379,000 tons of this cargo, respectively. MSF carriers have shipped much of this cargo because they have incorporated certain Great Lakes ports facilities into their intermodal networks. They have created a system for transporting this cargo intermodally in containers by rail to U.S. ports on the East and West Coast, where the cargo is ultimately exported. MSF carriers have been successful in winning much of this cargo because these intermodal shipments allow them to offer competitive freight rates, according to USAID and USDA officials. However, non-MSF carriers ship this cargo less often than MSF carriers because they generally lack access to the intermodal infrastructure that enables MSF carriers to move this cargo efficiently. U.S. Gulf ports handled about 70 percent of the average annual tonnage of bagged food aid cargo carried by MSF and non-MSF vessels from fiscal years 1999 to 2003. Table 2 shows the tonnages of bagged food aid cargo loaded by MSF and non-MSF vessels at major food aid ports. 
As the table indicates, the ports of Lake Charles and Jacintoport handled 1.72 million tons and 1.43 million tons of bagged food aid cargo from 1999 to 2003, respectively. These two ports handle bagged food aid mostly as break-bulk, or noncontainerized, cargo. Lake Charles is an agricultural port that is also the only U.S. port approved by USDA to store prepositioned commodities for the food aid program. Jacintoport has an automated cargo handling system capable of loading large tonnages of bagged food aid into break-bulk vessels and bulk vessels at a high rate of speed. Lake Charles will soon have a similar machine with comparable capabilities. MSF carriers do not load food aid directly into their vessels from these two ports. Instead, they hire stevedores to stuff the food aid cargo into containers and then move the containers intermodally by barge or rail to nearby ports that have container terminals where they have regularly scheduled service, such as the ports of New Orleans and Houston. MSF carriers run a similar operation from the Port of Chicago, where most of the Title II bagged food aid cargo subject to section 17 of the Maritime Security Act of 1996 is loaded. The Port of Chicago handled on average an estimated 35,000 tons of bagged food aid cargo for MSF carriers each year from 1999 to 2003. Much of this cargo was transported intermodally by rail to major U.S. container ports, such as Charleston, South Carolina; Norfolk, Virginia; and Seattle, Washington. Our analysis of data from program agencies and carriers suggests that establishing a bagged tonnage limitation could reduce MSF vessels’ market share in food aid, but the extent will depend on the limitation level and the options MSF carriers have in responding to it. Using recent data, we examined per voyage limits of 7,500, 5,000, and 2,500 tons and found that the percentage of MSF food aid voyages affected rises from 3 percent at a limit of 7,500 tons to 19 percent at a limit of 2,500 tons.
Almost all voyages above 7,500 tons were on the specialized LASH vessels, of which only one remains in MSF. Total annual food aid for MSF containerships on voyages above a 2,500-ton limit was around 160,000 tons. However, setting a limit at this level may not mean a reduction of 160,000 tons that MSF vessels carry, to the extent they are able to continue to carry some food aid on affected voyages, replace some food aid with other cargo, and forfeit their subsidy for food aid shipments that are sufficiently profitable. A simulation analysis we performed for MSF containerships suggests that, at a limit of 2,500 tons for example, the total annual decrease in food aid carried by these vessels could, under certain assumptions incorporating those options, range from about 17,000 to about 63,000 tons. Structured interviews with the carriers suggest that considerations such as vessel sharing arrangements could also affect the outcome and impacts on non-MSF carriers may depend on their market niche. Further, if the terms of MSF and non-MSF carriers’ participation in cargo preference change, program agencies are concerned that they could face increased delivery delays, administrative burdens, and shipping costs. The major food aid ports would generally experience a limited impact on their overall port activity from a bagged tonnage limit, although specific food aid terminals could potentially be affected, depending on the extent of any limitation and the MSF carriers’ responses to it. While more than 80 percent of MSF food aid voyages fall below a 2,500-ton limit, establishing a limit at 2,500 tons would be substantially more constraining for the majority of the fleet than limits at 5,000 or 7,500 tons. According to USDA data from fiscal years 2001 to 2003, only 3 percent of MSF food aid voyages carried more than 7,500 tons, almost all of which occurred on the five LASH vessels that have participated in the MSF. 
However, another 16 percent of MSF food aid voyages carried food aid tonnage between 2,500 and 7,500 tons. All of these voyages occurred on containerships, which comprise the majority of current MSF vessels. Figure 8 shows the number of MSF food aid voyages at different tonnage levels. The average annual tonnage carried by both MSF LASH vessels and containerships on voyages in excess of 2,500 tons was around 322,000 tons, of which around 160,000 tons were carried on the containerships. Similar to the percentage of MSF food aid voyages, the share of MSF food aid revenues affected by a tonnage limit rises substantially as the level is decreased. As shown in figure 9, 37 percent of MSF total food aid revenues were earned on voyages carrying more than 7,500 tons of food aid, while 68 percent of MSF total food aid revenues were earned on voyages with more than 2,500 tons of food aid. In comparison to the percentage of voyages affected, these revenue shares reflect that MSF voyages above a potential tonnage limit are earning proportionally more food aid revenues than those with smaller cargo volumes. MSF food aid revenues earned on the primarily LASH vessels that carried more than 7,500 tons were around $26 million annually, or $8.5 million per vessel. Not including LASH vessels, MSF food aid revenues earned on containerships that carried more than 2,500 tons were around $22 million annually, or $1.3 million per vessel. Nonetheless, while these data indicate how often an MSF vessel could be restricted by a tonnage limitation, they indicate the potential loss in revenue from food aid only under the assumption that MSF carriers were no longer to carry any food aid on these voyages. The actual food aid tonnage and net revenue impact for MSF vessels under a tonnage limitation will depend on options available to the carriers and how they respond to them.
Numerous considerations relating to market conditions, food aid logistics, and carrier characteristics would ultimately shape the impact of any tonnage limitation. We identified three factors to explicitly consider in an analysis of a tonnage limitation. Each of these factors, under certain assumptions, has the potential to make the impact of a tonnage limit on MSF vessels smaller than suggested by the share of MSF voyages affected. First, affected MSF vessels might be able to carry some food aid, potentially up to the level of the limit, and may not have to give up the entire tonnage for that voyage to keep their subsidy. This situation can occur if a carrier can bid on a portion of an offered shipment or if the food aid tonnage on a voyage comprises multiple shipments, such that the carrier could bid on those shipments providing tonnage under the limit. For example, a carrier that would normally have a voyage with 3,700 tons of food aid may be able to carry two food aid contracts for 1,000 tons each and, under a tonnage limit of 2,500 tons, face a potential loss of food aid cargo of only 1,700 tons. Second, depending on market conditions, affected MSF carriers may be able to replace a portion of the food aid above the limit with commercial or nonfood preference cargo, diminishing their loss in total revenues. For example, if an MSF vessel were to carry 1,700 tons less food aid due to a tonnage limit, the carrier may be able to replace a portion of that tonnage with nonfood aid cargo. Third, there may be occasions when carrying food aid cargoes above a tonnage limit is more profitable than reducing food aid to receive the subsidy for a voyage, thus providing an incentive to carry food aid above the limit. 
This may occur for food aid shipments that are particularly large or earn a particularly high freight rate such that an affected MSF carrier might choose to carry the food aid, even if it entailed forfeiting the subsidy otherwise earned during the days of that voyage, as well as forgoing any net revenues from available replacement cargo. For example, if an MSF vessel were to normally carry 7,000 tons of food aid on a voyage that lasted 15 days, the carrier would have to give up a subsidy payment of around $107,000 to carry that entire tonnage. The carrier might choose to forfeit the subsidy payment if the net revenues from the food aid effectively above the 2,500-ton limit exceeded the $107,000 plus potential net revenues from replacement cargo. To illustrate the impact of a tonnage limitation when accounting for these three factors, we created a simulation model that suggests ranges of possible tonnage and net revenue changes for MSF vessels at different tonnage limits. The model uses estimates of average freight rates, average cargo volumes, and average vessel costs for voyages from fiscal years 2001 to 2003, and includes probability distributions that reflect certain assumptions about carrier options and behavior. Table 3 provides the annual average simulation estimates for MSF containership voyages that carried more than 2,500 tons of food aid. The estimates illustrate that, under the assumption that carriers could respond to a tonnage limit in the ways we have discussed, impacts on MSF vessels could be reduced. While the total food aid tonnage on voyages affected by the limit is around 160,000 tons, to the degree that carriers can keep some food aid on voyages where the total food aid tonnage has been above the limit, the amount of food aid that they could lose due to the limit would, under certain assumptions, range from around 61,000 to 138,000 tons. 
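The forfeiture choice described in the third factor is a break-even comparison: carry the above-limit food aid only if its net revenue exceeds the forfeited subsidy plus any net revenue from replacement cargo. A minimal sketch, using the $107,000 subsidy figure from the 15-day voyage example above; the freight and replacement-cargo figures are hypothetical, chosen only for illustration:

```python
# Break-even sketch of the subsidy-forfeiture decision described in
# the text. The $107,000 subsidy figure comes from the 15-day voyage
# example; the revenue figures below are hypothetical.

def carry_above_limit(net_rev_food_above_limit, subsidy_forfeited,
                      net_rev_replacement_cargo):
    """True if carrying the above-limit food aid (and forfeiting the
    voyage's subsidy) beats shedding it and keeping the subsidy."""
    return net_rev_food_above_limit > subsidy_forfeited + net_rev_replacement_cargo

# If the above-limit food aid would net $150,000 against a $107,000
# forfeited subsidy and $30,000 of replacement-cargo revenue, the
# carrier does better carrying the food aid; at $120,000 it does not.
print(carry_above_limit(150_000, 107_000, 30_000))  # True
print(carry_above_limit(120_000, 107_000, 30_000))  # False
```

The comparison shows why a tonnage limit need not always shift cargo to non-MSF vessels: sufficiently large or high-rate shipments can justify forfeiting the subsidy.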
This food aid tonnage that is effectively above the limit would correspond to estimated net revenues of around $7 million to $16 million. Based on our assumptions about how much other cargo MSF carriers are able to secure to replace the food aid, MSF net revenues from additional cargo might range from an estimated $200,000 to an estimated $4 million. Based on our assumptions about net revenues for food aid and other cargo, the food aid tonnage above the limit that MSF vessels continue to carry might range from an estimated 24,000 to an estimated 102,000 tons. The net revenues from this food aid tonnage minus the forfeited subsidy payments would then range from around $3 million to $9 million. Taking all three factors into account, the total decline in MSF net revenues under a limitation of 2,500 tons of food aid might range from around $2 million to $5 million a year. On a per vessel basis, this amounts to roughly $120,000 to $270,000. By estimating the food aid tonnage effectively above the limit and subtracting the tonnage that MSF vessels might continue to carry while forfeiting their subsidy payments, the annual food aid tonnage available to non-MSF carriers might range from around 17,000 to 63,000 tons. Impacts on carriers could fall toward the ends of the simulation ranges reported in table 3 or, in some cases, outside those ranges if carrier options and responses differ from those simulated. An important consideration is that certain key assumptions in the simulation are based on information from fiscal years 2001 to 2003. To the extent that future market conditions differ from those reflected in recent years, or carriers respond in different ways than we have considered, the impacts of a tonnage limitation could be affected. For example, if future food aid program levels decline, then the overall tonnage and revenue changes from a shift in MSF’s food aid market share would also likely decline.
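The structure of the simulation can be sketched in outline. Every probability, distribution, and voyage tonnage below is an illustrative stand-in of our own, not the actual estimates of freight rates, cargo volumes, and vessel costs used in the analysis:

```python
import random

# Outline sketch of a tonnage-limit simulation of the kind described
# in the text. All parameters and distributions are illustrative
# assumptions, not the estimates used in the actual model.

LIMIT = 2_500  # assumed limit, in tons of food aid per voyage

def tonnage_lost(food_aid_tons, rng):
    """Simulated food aid tonnage one affected voyage stops carrying."""
    at_risk = max(food_aid_tons - LIMIT, 0)
    # Assumed probability the shipment is profitable enough that the
    # carrier forfeits the subsidy and carries it all anyway:
    if rng.random() < 0.3:
        return 0.0
    # Otherwise the carrier sheds the above-limit tonnage, except an
    # assumed fraction it keeps by bidding shipments under the limit:
    return at_risk * (1.0 - rng.uniform(0.0, 0.4))

def simulate_year(voyage_tonnages, trials=10_000, seed=1):
    """Return an approximate 5th-95th percentile range of annual
    food aid tonnage lost across the affected voyages."""
    rng = random.Random(seed)
    totals = sorted(
        sum(tonnage_lost(v, rng) for v in voyage_tonnages)
        for _ in range(trials)
    )
    return totals[trials // 20], totals[-(trials // 20)]

# Hypothetical set of affected voyages (tons of food aid each):
low, high = simulate_year([3_700, 5_200, 4_100, 6_800, 2_900])
print(f"simulated tonnage lost: {low:,.0f} to {high:,.0f} tons per year")
```

As with the estimates in table 3, the output of such a model is a range driven by the assumed carrier responses rather than a single figure.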
If, however, future nonfood preference cargo levels decline, then MSF may be able to replace a smaller share of the food aid tonnage above a limit with other cargo, and the revenue impacts from a tonnage limit would be greater. If MSF carriers decide never to carry food aid above a limit—even when it is profitable to do so, net of a forfeited subsidy payment—then the total decline in food aid tonnage they carry and the revenue loss to MSF vessels would increase. MSF carriers told us they face certain logistical constraints that challenge them in being able to effectively respond to a tonnage limit at any level. One challenge is the difficulty in planning vessel tonnages around a limit. The MSF carriers cited their lack of control over when they receive food aid cargoes from suppliers, which makes it difficult to distribute the food aid tonnage onto vessel sailings to stay under the limit and meet delivery deadlines. They stated they could face additional expenses for cargo storage at ports as well as loading penalties and charges for delayed delivery. MSF carriers also cited the fact that they may bid on multiple food aid shipments concurrently as a complication in planning vessel tonnages around a limit. A second challenge cited by two MSF carriers was that they have agreements with other carriers to share space on their vessels that could be at risk if a carrier is concerned with shipping food aid above a certain tonnage. USDA and MARAD corroborated this view and expressed concern that eliminating vessel-sharing agreements would increase inefficiency in the market. A third challenge noted by some MSF carriers was that the costs they incur to maintain an infrastructure to support food aid cargo might become too high if their food aid tonnage should be reduced. 
Such infrastructure might include a U.S.-flag vessel stationed abroad to transfer food aid from major ports to more remote destinations or the container loading operations some MSF carriers have set up in the Great Lakes region. The ability of the non-MSF carriers to benefit from a tonnage limitation would depend on their market niche, according to our interviews with 10 non-MSF carriers. The simulation we discussed above suggests that, under certain assumptions, the additional bagged food aid available to all non-MSF carriers might range from less than 1 percent to 8 percent of this segment’s current bagged tonnage for tonnage limits at 5,000 and 2,500 tons, respectively. However, each non-MSF carrier’s ability to bid for and win that cargo would be differentially affected by (1) whether it carries bagged food aid; (2) whether it services food aid destinations where cargo has become available; and (3) the tonnage of cargo available, compared with its vessel capacity. For example, while seven of the non-MSF carriers we interviewed said they would benefit from a tonnage limitation, two non-MSF carriers said they would be unaffected because they do not carry bagged food aid. Three non-MSF carriers supported a lower tonnage limit, but two of them mentioned being constrained by the geographic routes they service in their ability to pick up new business. Moreover, two other non-MSF carriers responded that a lower tonnage limitation would actually hurt them because it would encourage MSF carriers to more intensely compete in their market niche that services smaller shipments. Three other non-MSF carriers with larger vessels were satisfied with a higher tonnage limitation because it would reduce MSF competition in the market niche for large shipments.
The impact of a bagged tonnage limitation on program agencies is hard to predict and will ultimately depend on the degree to which both MSF and non-MSF carriers alter the terms in which they participate in the Maritime Security Program and cargo preference. According to DOD, a tonnage limit could cause some MSF carriers to withdraw from the Maritime Security Program, though DOD officials indicated that they expect to receive more applications for the next program than available slots. USDA, USAID, and MARAD also reported several concerns about a tonnage limit at any level. These concerns include the following:

- Decreased food aid timeliness: USDA and USAID noted concern that food aid shipments could be delayed if the non-MSF vessels do not have sufficient capacity to quickly carry the additional food aid shipments above a limit or if MSF carriers responded to the limitation by spreading food aid tonnage over several sailings to stay under the limit.

- Increased administrative burdens: USDA and USAID noted concern about additional administrative burdens if MSF carriers responded to the limit by submitting partial bids or dividing up shipments and if non-MSF carriers increasingly submitted bids for bagged cargo that were contingent on getting numerous contracts in order to fill larger vessels. MARAD noted concern that a tonnage limit would negatively affect its initiative to implement service contracts. Additionally, both food aid agencies and MARAD would face the administrative burden of having to track volumes on a voyage basis, something they do not currently do.

- Increased shipping costs: USDA, USAID, and MARAD noted concern over the possibility of increased freight rates if (1) non-MSF carriers raised prices in response to decreased competition, (2) freight rates bid by non-MSF vessels for contracts that would have otherwise been carried by MSF vessels are higher because of charter service rather than liner service or because the non-MSF carrier may not regularly sail to that location, (3) MSF carriers raised prices in response to losses associated with carrying less food aid, receiving fewer subsidy payments, or incurring costs for delayed delivery charges, and/or (4) freight rates bid by MSF carriers for food aid shipments below a tonnage limitation are higher than their bid otherwise would have been for a larger shipment due to the loss of economies of scale.

A bagged food aid tonnage limitation on MSF carriers generally would have limited impact on the overall activity of the major food aid ports, based on our analysis of the food aid shipment data and the interviews we conducted with port representatives. Some ports may experience some shift in the type of food aid handled, which could affect participating terminals within these ports. For example, if bagged cargo shipments by MSF containerships were seriously affected, this would likely have a greater impact on terminals that predominantly stuff bagged food aid cargo into containers. However, any impact would depend on the extent of the limitation imposed and the MSF carriers’ responses to it. The major Gulf ports, from which most food aid cargo is shipped, would likely experience little impact if a bagged food aid limitation were imposed on MSF carriers because they service both MSF and non-MSF vessels that carry bagged cargo. The result would be a shift among their customers, according to port officials.
The ports of Lake Charles and Jacintoport are specialized agricultural commodity ports that handle only bagged food aid cargo, and they anticipate that any loss of bagged cargo by MSF carriers would likely be picked up by the non-MSF carriers who are their biggest customers. The ports of New Orleans and Houston service both MSF and non-MSF customers at different terminals. These large ports handle all kinds of cargo in addition to food aid shipments, and port officials anticipate that there would be no net loss in overall business for the port. While officials said there could be some impact on individual terminals, they estimated that the terminals could likely replace any lost food aid container cargo with other container cargo. The large coastal container ports like Norfolk, Charleston, and Seattle would also likely experience little impact since the volumes of food aid cargo involved comprise a very small percentage of their total business. For instance, Norfolk handled 12 million tons of cargo in 2003, of which 70,000 tons were food aid that they stuffed into containers at the port, according to a port official. Norfolk did not track how much food aid it handled that was already in containers. The port official in Charleston estimated that food aid was about 1 percent of its cargo, while the official in Seattle estimated about 3 to 4 percent. These ports would experience little impact from a bagged tonnage limitation on MSF carriers, according to these officials. However, they said that the terminals at the ports that stuff the bagged food aid into the containers might be affected. A port of Chicago representative expressed the greatest concern about the potential impact from a tonnage limitation, although food aid cargo is a small portion of Chicago’s total cargo. MSF carriers transport most of the section 17 cargo reserved for Great Lakes ports, and the majority of it goes through the port of Chicago. 
The port official said that the port valued food aid cargo because it was more labor intensive and generated more jobs than other types of cargo. The preponderance of food aid cargo going through the port of Chicago is handled at a single terminal, whose business could be damaged, depending on the nature and extent of the impact. If MSF carrier participation in food aid shipments is severely curtailed and other carriers do not step in to carry section 17 cargo, the terminal could be seriously affected. In addition, the port official said that one benefit of the section 17 provision was that it helped make Midwestern commodity suppliers more competitive when their cargo could be loaded in nearby Great Lakes ports instead of being transported to Gulf ports. The official said that the regional effort to encourage more food aid commodity sourcing from Midwestern suppliers could also be affected if section 17 bagged cargo were curtailed. While we make no recommendations in this report, we believe that our analysis provides important insights into the nature of competition between MSF and non-MSF carriers for food aid shipments. A sharp drop in bulk food aid shipments in fiscal years 2000 and 2001 suggests that competition for bagged food aid has become more intense. In those years, MSF carriers captured a large share of the business, but the market share of bagged shipments shifted toward the non-MSF carriers in fiscal years 2002 and 2003. The two segments of the industry appear to be finding ways to respond to the changes in food aid, but this time frame is too short to determine any clear trends. We also believe that our analysis of a potential limit on the MSF carriers’ food aid shipments provides some findings that are not obvious without a close examination of the system.
One finding is that, if MSF carriers have options that would mitigate a tonnage limit’s impacts, the potential decline in food aid shipments by this group would be less than the total volume of food aid carried on voyages over the limit. This result would occur if MSF vessels carry some food aid up to the limit on affected voyages, and in some cases choose to forfeit subsidy payments in favor of carrying profitable shipments above the limit. To the extent that MSF carriers do choose to carry food aid over the limit and forfeit the subsidies, a tonnage limit may not lead to a large shift in food aid shipments and financial benefits to non-MSF vessels. Where any financial effects of a food aid tonnage limitation would fall remains uncertain. For example, MARAD subsidy payments could be lower if MSF carriers continue to carry food aid. However, food aid agencies could face higher costs if the limits resulted in fewer and more expensive options for some shipments, and these agencies have emphasized their concerns that additional constraints on food aid shipments could impede their ability to provide food aid to meet critical humanitarian needs. Finally, it is important to recognize the limits of any effort to predict the future course of events in an area in which key factors are so volatile. For example, the volume of food aid shipments has varied greatly over recent years, and the relationship between food aid and export subsidies is also under discussion in the WTO negotiations. The outcome of MARAD’s efforts to support the two key maritime sectors is clearly influenced by the level and composition of food aid, so long-term trends and even fluctuations in food aid shipments will affect the program.
Second, the importance and profitability of food aid, compared with other commercial or preference cargo, have a large influence on the health of the various firms and components of the industry, and the volume and prices for these alternative cargoes can also change significantly. In these cases, firms may decide to move vessels into or out of the program, which will have an effect on the existing operators. USAID, USDA, and DOD provided written comments on a draft of this report, which are reproduced in appendixes IV, V, and VI. USAID stated that we used sound and logical methodologies to analyze the data and accurately identified trends pertaining to MSF and non-MSF carriers that carried food aid over a 5-year period. USAID agreed that predicting the impact of a tonnage limitation is difficult and said it takes a cautious approach to changes, citing concerns regarding impacts on administrative systems and the ability to meet foreign assistance objectives. USDA said that the report adequately summarized USDA’s major concerns over the impact on food aid programs that could result from a bagged cargo tonnage limitation placed on MSF carriers, including decreased food aid timeliness, increased administrative burdens, and increased shipping costs. DOD generally concurred with our findings. It stated that it would oppose any change in cargo preference that would adversely impact the U.S. merchant marine because it believed there would be negative impacts on DOD mobilization capabilities. DOT provided oral comments. DOT said that the draft report provided a thoughtful analysis of the potential impact of tonnage limits on food aid shipments and how they might affect the U.S.-flag shipping industry. However, DOT identified issues with some factors, and the way they are considered, in the simulation model we used to estimate the range of impacts from different tonnage limits.
In addition, USAID and DOT provided technical comments, which we incorporated in the report as appropriate. DOT officials, including the Director of MARAD’s Cargo Preference Program, said that they identified issues with some factors and the way they are considered in the simulation model used in our analysis of potential impacts of a tonnage limitation. In particular, these officials suggested that the draft report and its simulation model could have more thoroughly explored the effects of three factors: (1) MSF carriers’ ability to replace food aid cargo with commercial cargo, (2) the industry’s reluctance to carry cargo over the limit and forfeit the subsidy, and (3) the logistical constraints on carriers’ ability to operate under a low tonnage limit. Specifically, with respect to replacing food aid cargo, DOT officials questioned whether sufficient commercial cargo is actually available in the marketplace to replace food aid cargo for MSF vessels. With respect to carrying cargo above the limit and forfeiting the subsidy, DOT emphasized that all five MSF carriers stated they would not give up their subsidy to carry food aid. DOT officials stated their view that logistical limitations, which would further constrain MSF carriers’ ability to carry food aid shipments under low tonnage limits, may be underestimated in the model. While the DOT officials recognized these factors are acknowledged in the draft report as limitations on the model’s predictive value, they emphasized their view that the cumulative effect of more thoroughly exploring them in the model might have led us to conclude that the imposition of tonnage limits could be more detrimental to MSF than the results otherwise indicated. As a result, the officials suggested the model’s limitations be more extensively and prominently recognized in the body of the report. 
Finally, the DOT officials emphasized their agreement with our observation that the imposition of any tonnage limit on MSF vessels could drive up costs for the food aid program and decrease efficiency by limiting competition and increasing freight charges. We agree that MSF carriers may face constraints in terms of their options in responding to a tonnage limitation. Specifically, we agree that carriers may have restricted flexibility in managing contract amounts to keep food aid shipments below limits and still carry food aid, and in replacing lost food aid with other cargo. Our simulation analysis specifically incorporates uncertainty in these factors, and we have modified our report language in several places to clarify the range of assumptions concerning those and other variables, and the implications of the uncertainty regarding our results. Additional detail about how these factors are treated in our analysis is presented in the following paragraphs. With respect to whether carriers would in some cases carry food aid above a tonnage limit and forgo the subsidy for affected days, we agree that including that assumption is important to our simulation model results. Our simulation model represents the outcome when carriers choose the most profitable option available on each voyage, and vessel data reported by MSF carriers suggest that there are times when carriers would have the financial incentive to carry food aid above the limit and forfeit their subsidy payment for that voyage. The presentation of our simulation model results makes clear that this option is an important one in enabling carriers to mitigate the impact of a tonnage limitation. If carriers never forgo the subsidy, the impacts of a limitation on MSF carriers would be greater. Neither the carriers nor MARAD provided us with a reason why they would never forgo a subsidy.
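The forfeit-or-comply trade-off discussed here can be illustrated with a simple per-voyage profit comparison. The following sketch is purely hypothetical: the function, its parameters, and all dollar and tonnage figures are assumptions for illustration, not values from this report or from the actual simulation model.

```python
# Hypothetical per-voyage comparison of two responses to a tonnage limit.
# All names, rates, and figures are illustrative assumptions.

def voyage_response(food_tons, limit, food_rate, other_rate,
                    cost_diff, replace_share, subsidy):
    """Return the more profitable option for one affected voyage:
    'comply'  - carry food aid only up to the limit, replace part of the
                excess with other cargo, and keep the subsidy payment;
    'forfeit' - carry all the food aid and give up the subsidy."""
    carried = min(food_tons, limit)      # food aid kept under the limit
    over = max(food_tons - limit, 0.0)   # food aid above the limit
    replaced = over * replace_share      # other cargo picked up instead
    # food aid is assumed to cost cost_diff more per ton to carry
    comply = carried * (food_rate - cost_diff) + replaced * other_rate + subsidy
    forfeit = food_tons * (food_rate - cost_diff)
    return ("comply", comply) if comply >= forfeit else ("forfeit", forfeit)
```

With a modest subsidy relative to the revenue at stake, the function picks "forfeit," mirroring the observation that carriers sometimes have a financial incentive to carry food aid above the limit; a larger subsidy tips the choice back to "comply."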
The simulation model incorporates the likelihood that MSF carriers would face logistical constraints in managing food aid contracts to continue carrying food aid amounts near but under the limit. It reflects possibilities ranging from carriers being able to carry food aid exactly up to the limit amount—for example, 2,500 tons—to not being able to carry any food aid on the share of voyages above the limit. We tested the sensitivity of our simulation results to the particular probability distribution assumed for this variable, and we found that if carriers are assumed to have less flexibility in managing food aid tonnage below a limit, the average values for the impacts would differ somewhat from the averages we reported. For example, for one alternative distribution assuming less flexibility, the average value of food aid tonnage that carriers would have to give up or lose the subsidy for the voyage increased from about 92,000 tons to about 109,000 tons. Similarly, our simulation model allows for the possibility that MSF carriers would be unable to replace any food aid above a limit with commercial or nonfood preference cargo. However, most MSF carriers reported that they are currently sailing near full capacity, with a range of capacity utilization rates that together average 90 percent. The simulation model relies on these reported capacity utilization rates to determine the most likely value for the share of food aid effectively above the limit that carriers might be able to replace with other cargo. However, the simulation model reflects a range of probabilities with respect to carriers being able to replace lost food aid cargo and achieve their current average capacity utilization (based on the fiscal year 2001 to 2003 data we analyzed), and includes at one extreme the possibility that no lost food aid tonnage will be replaced.
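The kind of sensitivity test described here (assuming carriers have less flexibility in managing tonnage below the limit) can be mimicked with a toy calculation. Everything in this sketch is a made-up illustration: the triangular distribution, the 1,000-to-6,000-ton range, and the modes are assumptions, not the distributions actually used in the model. It shows only the direction of the effect: shifting the distribution's mode toward the high end, i.e., toward less flexibility, raises the simulated average tonnage at risk.

```python
# Toy sensitivity check: how the assumed shape of the distribution for
# "food aid effectively above the limit" moves the simulated average.
# The triangular form and all bounds are illustrative assumptions.
import random

def avg_tons_at_risk(mode, low=1_000.0, high=6_000.0, n=50_000, seed=7):
    """Average tons effectively above the limit under an assumed
    triangular(low, high, mode) distribution."""
    rng = random.Random(seed)
    return sum(rng.triangular(low, high, mode) for _ in range(n)) / n

flexible = avg_tons_at_risk(mode=1_000.0)  # carriers usually manage down to the limit
rigid = avg_tons_at_risk(mode=6_000.0)     # carriers usually lose the whole shipment
```

The rigid assumption yields a noticeably higher average, the same direction of change as the 92,000-to-109,000-ton example in the text.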
To the extent that these constraints strongly affect MSF carriers’ ability to respond to tonnage limits, the high end of the range of possible results suggested by the simulation model should be considered. For example, if MSF carriers face significant logistical constraints to carrying food aid up to the limit, then, under a tonnage limit of 2,500 tons, they are more likely to have an estimated 138,000 tons of food aid per year effectively above the limit, compared with the 61,000 tons per year estimated by the simulation’s low-value results. In addition to the potential impacts of a tonnage limit that are suggested by the simulation model under certain assumptions, there are potential structural constraints we were not able to reliably quantify and include in the model. One example is the potential impact on MSF carriers’ total tonnage and revenues if a tonnage limit were to jeopardize their vessel sharing agreements. As we stated in the report, these types of structural constraints could limit MSF carriers’ ability to respond effectively to a tonnage limit at any level. We are sending copies of this report to appropriate congressional committees, the Secretaries of USDA, DOD, and DOT, and the Administrator of USAID. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Additional contacts and staff acknowledgments are listed in appendix VII. In a legislative mandate in section 3535 of the National Defense Authorization Act for Fiscal Year 2004 (P.L.
108-136), Congress directed us to review the impact of placing a tonnage limitation on transportation by the Maritime Security Fleet (MSF) of cargo preference food aid and to report to the Chairman and Ranking Minority Member of the House and Senate Committees on Armed Services and the Senate Committee on Commerce, Science, and Transportation. As discussed with Committee representatives, we have focused on answering the following questions: (1) how the cargo preference and Maritime Security Programs are designed to meet their objectives and who participates in them; (2) what the nature and extent of MSF and non-MSF carrier participation in the food aid program are; and (3) how establishing a bagged cargo preference tonnage limitation on MSF vessels would be expected to affect the MSF, other U.S.-flag ships, the cargo preference food aid program, and the ports servicing these ships. To examine how the cargo preference and Maritime Security Programs are designed to meet their objectives and who participates in them, we reviewed documents, relevant legislation, regulations, and data pertaining to the cargo preference and Maritime Security Programs from the Maritime Administration (MARAD) and Department of Defense (DOD), as well as our prior studies and those done by the Congressional Research Service. We also obtained and analyzed MSF and cargo preference vessel data and food aid shipment participation data from MARAD and the Department of Agriculture (USDA) for fiscal years 1999 to 2003. We examined the data for their reliability and appropriateness for our purposes through electronic testing of the data, verification of the data against other sources, and interviews with agency officials who manage the data. We found the data to be sufficiently reliable to represent participation by MSF and non-MSF vessels and carriers in transporting food aid shipments.
In addition, we interviewed agency officials at MARAD, DOD, USDA, and the Agency for International Development (USAID), as well as representatives of three maritime trade associations. We also conducted structured interviews with representatives of 15 carriers that transported the majority of cargo preference food aid, including 5 MSF and 10 non-MSF carriers. To determine the nature and extent of MSF and non-MSF carrier participation and competition in the food aid program, we gathered and analyzed food aid shipment data from USDA and USAID for fiscal years 1999 to 2003. We examined the data for their reliability and appropriateness for our purposes and found them sufficiently reliable to represent MSF and non-MSF carrier participation and competition in the food aid program. We also interviewed USDA, USAID, MARAD, DOD, and maritime trade association officials, including company representatives from 5 MSF and 10 non-MSF carriers. To determine whether bagged cargo has accounted for an increasing share of food aid shipments, we obtained and analyzed USDA food aid procurement data from fiscal years 1996 to 2003. We examined the data for their reliability and appropriateness for our purposes through electronic testing of the data, verification of the data against other sources, and interviews with agency officials that manage the data. We found them sufficiently reliable to confirm that an increasing share of food aid was shipped as bagged cargo from 1999 to 2003. In addition, we reviewed agency reports that discussed food aid program activities and trends, and conducted interviews with USDA and USAID officials. To examine the process by which agencies award food aid shipments to MSF and non-MSF carriers, we obtained and reviewed USDA, USAID, and MARAD directives and regulations governing the ocean transportation of food aid cargo and also reviewed applicable legislation. 
We also conducted interviews with USDA and USAID officials responsible for awarding food aid shipments in accordance with cargo preference requirements. To identify the U.S. ports that handled the largest tonnages of food aid cargo shipped by MSF and non-MSF carriers, we analyzed USDA food aid shipment data. To gain additional perspectives on how MSF and non-MSF carriers handled and transported this cargo in preparation for export shipment, we interviewed port officials from 8 major food aid ports, as well as 15 MSF and non-MSF carrier representatives. To examine how establishing a bagged cargo preference tonnage limitation on MSF vessels would potentially affect MSF and other U.S.-flag ships, we obtained and analyzed USDA food aid shipment data for fiscal years 1999 to 2003. We analyzed the tonnage carried and revenues earned for each MSF vessel voyage that carried food aid above potential limits of 2,500, 5,000, and 7,500 tons. To illustrate how carriers might respond to a tonnage limit, we obtained operating and revenue information from the five MSF carriers on each of their vessels from fiscal years 2001 to 2003. To account for variation in the values of our estimates, we performed a Monte Carlo simulation that ran the impact model approximately 20,000 times, drawing values from probability distributions characterizing possible values for variables such as the percent of food aid above the limit that carriers replace with other cargo, the freight rate for other cargo, and the cost differential between food aid and other cargo. This simulation resulted in a range of estimates, under certain assumptions, for the likely total decline in MSF food aid tonnage and net revenues on an annual basis. A technical discussion of the simulation model and the results at a 5,000-ton limitation is provided in appendix II. We examined USDA’s food aid shipment data and carriers’ vessel estimates for their reliability and appropriateness for our purposes.
For USDA’s data, we performed electronic testing of the data, verification of the data against other sources, and interviews with agency officials who manage the data. Although we were able to do only limited verification of the self-reported data from carriers, we found both sources to be sufficiently reliable to inform our simulation model. In addition, we supplemented our simulation results with information that both MSF and non-MSF carriers provided in interviews pertaining to any structural constraints they may face in responding to a tonnage limitation. To examine how establishing a bagged cargo preference tonnage limitation on MSF vessels would potentially affect the program agencies, we reviewed the current extent of data collection and procedures for tracking food aid shipments to determine whether additional administrative burdens would result. We also interviewed agency officials at USDA, USAID, MARAD, and DOD. To examine how establishing a bagged cargo preference tonnage limitation on MSF vessels would potentially affect the ports that service food aid shipments by MSF and non-MSF carriers, we analyzed food aid shipment data from USDA that identified the ports used for each shipment for fiscal years 1999 to 2003. We also conducted telephone interviews with representatives of eight major food aid ports (Charleston, South Carolina; Chicago, Illinois; Houston and Jacintoport, Texas; Lake Charles and New Orleans, Louisiana; Norfolk, Virginia; and Seattle, Washington) to obtain additional information, including their assessment of the potential impact of a limitation on their port. We performed our work from February through August 2004 in accordance with generally accepted government auditing standards. This appendix describes the data and methodology that we used to analyze the impact of a bagged tonnage limitation on MSF and presents some additional estimates not contained in the letter portion of this report.
This simulation analysis is based on certain assumptions regarding carrier options and responses and makes use of food aid data from agencies, reported vessel revenue and cost estimates for recent years, and information from interviews about the food aid industry. The three potential carrier responses incorporated into our model are an MSF vessel’s potential ability to continue carrying some food aid on affected voyages, replace some food aid with other cargo, and forfeit its subsidy for food aid contracts that are sufficiently profitable. Our methodology illustrates that, depending on the degree to which these options exist for MSF, carriers may reduce the overall tonnage and net revenue impacts of a limit. These estimates reflect some probability that carriers will face constraints in how they respond to limits; however, there is uncertainty associated with some of the model’s assumptions. Carriers may face additional logistical or structural constraints relating to program requirements or company characteristics that would limit their responses to a greater degree than our simulation reflects. Moreover, future market conditions may differ from those reflected in recent data, so our analysis should not be used as a forecast. Thus, while our simulation can help decision-makers understand important factors that should be taken into account when considering tonnage limits—and develops a range of impact estimates based on recent years that reflect those factors—actual impacts could be near the outer limits of our estimated ranges or fall outside them. To analyze the impacts of a tonnage limit on MSF vessels, we collected data on key tonnage and revenue variables from a variety of sources for fiscal years 2001 to 2003. To create a list of MSF vessel voyages that carried food aid tonnage above a potential limit, we examined USDA’s food aid shipment data and identified 123 vessel voyages.
We paired this voyage list with estimates we collected from the MSF carriers on each vessel’s annual costs and annual tonnage and freight rates for commercial cargo, food aid cargo, and nonfood aid preference cargo. We also calculated the subsidy per voyage each MSF vessel earns, based on the number of days per voyage in that vessel’s regularly scheduled outbound service. To estimate a range of impacts for a tonnage limitation under certain assumptions, we explicitly consider three options that MSF carriers may potentially have in responding to such a limit. For affected voyages, an MSF carrier may be able to (1) continue carrying some food aid up to the limit, (2) replace some food aid above a limit with other cargo, and (3) continue carrying food aid above a limit if it were more profitable than the subsidy payment for that voyage plus any net revenue from replacing the food aid with other cargo. As discussed below, we rely on assumptions about the degree to which carriers may be able to respond in these three ways to specify the probability distributions. Table 4 shows the five probability distributions we use to calculate a range of impacts for MSF carriers under a tonnage limitation. Each of these distributions is discussed further in the text following table 4.

1. USDA and USAID reported that the food aid tonnage on a voyage often comprises multiple food aid contracts such that carriers may be able to continue to bid only on those shipments providing tonnage under the limit. However, since food aid contract terms vary, the degree to which MSF carriers can maximize carrying food aid up to the limit will also vary.
As a result, we include in our simulation an assumption that carriers will most likely be able to carry tonnage up to the level of the limit (based on profit maximization principles), but we use a probability distribution that includes a range of values for the amount of food aid that the vessel could potentially lose—otherwise stated as the amount of food aid effectively above the limit. For example, at a limit of 5,000 tons, for an MSF voyage with 6,000 tons of food aid, only 1,000 tons of food aid could be effectively above the limit. However, if MSF carriers had less flexibility in managing food aid tonnage, up to the entire 6,000 tons could be effectively above the tonnage limit.

2. We asked carriers to provide information about their current capacity utilization as an indication of the most likely value for the share of food aid they may be able to replace. Reported capacity utilization rates were high for all carriers, with a range of values averaging 90 percent. However, we note the uncertainty regarding how close to the reported capacity utilization rates carriers would be able to come through replacing lost food aid tonnage with other cargo. We use a probability distribution to incorporate this uncertainty that allows for the possibility that carriers would not be able to replace any lost food aid with other cargo.

3. We asked carriers to provide their average freight rates for commercial cargo and nonfood aid preference cargo as an indication of the most likely freight rate they may receive on replacement cargo. Using annually weighted information from the five MSF carriers on all of their vessel voyages, we calculated a standard deviation and used this variation to apply a range of values for each voyage to reflect likely freight rates for other cargo, subject to certain constraints.

4. If MSF carriers replace food aid above a limit with other cargo, they are also likely to experience a change in costs.
We found that it is generally more costly for the MSF to carry a ton of food aid than it is to carry a ton of commercial cargo. Based on interviews with carriers and industry experts, we incorporate, across the model, a range of values for this additional food aid cost differential around a most likely estimate of $30 per ton.

5. If carriers alter the total tonnage on a vessel voyage, their costs will also vary. We do not have data pertaining to the percentage of MSF total vessel costs that vary with tonnage levels. Based on broad estimates from MARAD that around 40 percent of vessel costs are for overhead or fixed items, we consider a wide range of values around the remaining 60 percent of total costs.

To incorporate these five assumptions into our impact estimates, we performed a Monte Carlo simulation. In this simulation, values were randomly drawn 20,000 times from probability distributions characterizing possible values for the impact variables discussed above and listed in table 4. Under the assumptions described by the probability distributions selected for these impact variables, the simulation yields estimates for the total decline in both MSF food aid tonnage and net revenues on an annual basis. Using our simulation model, we analyzed the tonnage and net revenue impacts on MSF of a food aid limit at 5,000 and 2,500 tons. Results for a 2,500-ton limit are presented in the letter portion of this report, while table 5 provides the results for a 5,000-ton limit. As shown in table 5, the estimated decline in MSF food aid tonnage under this limitation ranges from around 3,000 to 13,000 tons, a decline significantly less than the total tonnage on voyages affected by the limit—46,000 tons. In this analysis, carriers are estimated to replace food aid above the limit with 1,000 to 11,000 tons of other cargo and continue to carry 5,000 tons to 31,000 tons of food aid above the tonnage limit.
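The draw-and-evaluate loop described above can be sketched as follows for a single hypothetical affected voyage. This is a simplified illustration only: the triangular distributions, every parameter value, and the single-voyage framing are assumptions, and the fifth variable (the share of costs that vary with tonnage) is omitted; the actual model used its own distributions and the full set of 123 voyages.

```python
# Simplified Monte Carlo sketch for one hypothetical voyage under a
# 5,000-ton limit. All distributions and figures are illustrative
# assumptions, not the values used in the actual simulation.
import random

def simulate_decline(n_draws=20_000, food_tons=6_000.0, limit=5_000.0,
                     food_rate=100.0, seed=1):
    rng = random.Random(seed)
    declines = []
    for _ in range(n_draws):
        # (1) food aid effectively above the limit: most likely just the
        #     1,000-ton excess, but possibly the entire 6,000 tons
        over = rng.triangular(food_tons - limit, food_tons, food_tons - limit)
        # (2) share of lost food aid replaced with other cargo
        replace = rng.triangular(0.0, 0.9, 0.9)
        # (3) freight rate on the replacement cargo, dollars per ton
        other_rate = rng.triangular(40.0, 70.0, 55.0)
        # (4) extra cost of carrying a ton of food aid vs. other cargo
        cost_diff = rng.triangular(20.0, 40.0, 30.0)
        # net revenue lost on food aid, less net revenue gained back
        lost = over * (food_rate - cost_diff)
        gained = over * replace * other_rate
        declines.append(lost - gained)
    declines.sort()
    # report a 5th-95th percentile range, mirroring the report's ranges
    return declines[int(0.05 * n_draws)], declines[int(0.95 * n_draws)]
```

Running the function returns a low and a high estimate of the voyage’s net revenue decline, the same kind of range (rather than a point estimate) that the tables in this appendix report.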
The total decline in net revenues for this group would range from roughly $500,000 to $1 million. According to this analysis, the impact estimates for a limit at both 2,500 and 5,000 tons are influenced most by variations in assumptions pertaining to the amount of food aid effectively above the limit for each voyage, and the share of food aid above the limit that carriers may be able to replace with other cargo. A higher value for the amount of food aid effectively above the limit tends to increase the estimate for the total decline in MSF net revenues because MSF carriers are less able to maximize carrying food aid up to the limit. A higher value for the share of food aid above the limit that carriers might replace with other cargo tends to lower the estimate for the total decline in MSF net revenues because carriers are earning more money from replacement cargo. However, this assumption tends to raise the estimate for the total decline in MSF food aid tonnage carried because it makes the option of forfeiting the subsidy payment to carry food aid above the limit less profitable. This simulation model has certain limitations resulting from two broad areas of uncertainty not incorporated into the estimates. First, MSF carriers may face logistical or structural constraints imposed by program requirements or company characteristics that would alter their response to a tonnage limit in ways our simulation does not reflect. For example, if an MSF carrier decides never to carry food aid above a limit—even if it is profitable to do so, net of a forfeited subsidy payment—then the total food aid tonnage available to the non-MSF carriers would also increase. In addition, vessel financial data are based on estimates of annual averages and may not incorporate the entire range of variation for every variable.
One example might include a higher food aid cost differential associated with an emergency food aid shipment to a remote area with particularly expensive contract terms. Second, our model relies on data from fiscal years 2001 to 2003, which may not be an accurate indicator of future food aid program levels, future food aid program requirements, or the future number of U.S.-flag vessels participating in cargo preference. For example, if future food aid program levels decline, then the overall tonnage and revenue changes from a shift in the MSF’s food aid market share would also likely decline. Therefore, to the extent that our model’s assumptions do not adequately reflect these two broad areas of uncertainty, the impacts of a tonnage limit could lie outside our estimated ranges. The Maritime Security Fleet currently comprises 47 vessels operated by 12 companies. Table 6 provides a profile of the vessels participating in the Maritime Security Fleet, as of December 2, 2003. In addition to those named above, Jay Cherlow, Martin De Alteriis, Jamie McDonald, Eric Petersen, Kendall Schaefer, Richard Seldin, and Daniel Williams made key contributions to this report.
|
Food aid cargo must generally be carried on U.S.-flag ships under requirements set by the cargo preference program. Two groups of carriers compete for this cargo: (1) those that participate in the Maritime Security Program and receive an annual government subsidy—generally liners operating on scheduled routes—and (2) those that do not—generally carriers operating on a charter basis. Congress directed GAO to study (1) how the cargo preference and Maritime Security programs are designed and who participates; (2) the nature and extent of MSF and non-MSF carrier participation and competition in the food aid program; and (3) how a tonnage limitation on bagged preference cargo for MSF vessels could affect MSF, other U.S.-flag ships, the cargo preference food aid program, and the ports servicing these ships. The cargo preference program and the Maritime Security Program provide incentives to retain privately owned U.S.-flag ships and their U.S. citizen mariners for commercial and national defense purposes. The cargo preference program is open to all U.S.-flagged vessels, while the Maritime Security Fleet (MSF) subsidy is only available to certain militarily useful vessels. Of the 47 ships currently in the MSF, 37 have participated in cargo preference food aid shipments. MSF and non-MSF carriers compete for food aid shipped as bagged cargo, which averaged 33 percent of food aid shipments by tonnage from fiscal years 1999 to 2003. There is no competition for bulk food aid shipments because MSF carriers do not carry bulk cargo. Changes in food aid spending have contributed to a shift from bulk to bagged cargo and increased reliance on bagged cargo by some non-MSF carriers. From 1999 to 2003, MSF carriers shipped about 45 percent and non-MSF carriers 55 percent of bagged food aid cargo. Competition between MSF and non-MSF carriers for bagged food aid is affected by certain cargo preference requirements.
Establishing a tonnage limitation on MSF vessels would likely reduce their share of food aid shipments, but the extent would depend on factors such as the level of the limit and the options MSF carriers have in responding to it. We examined three proposed limits and found that, according to fiscal year 2001 to 2003 data, the percentage of food aid voyages carrying more than the proposed limit rises from 3 percent under a 7,500-ton limit to 19 percent under a 2,500-ton limit. The actual impact on MSF carriers would be smaller if they were able to (1) carry some food aid up to the limit, (2) replace some food aid above the limit with other cargo, and/or (3) elect to carry food aid even without the subsidy. Food aid agencies are concerned about the impacts of a tonnage limit, including increased delays in providing food aid, administrative burdens, and higher shipping costs. Major ports would generally experience a limited overall impact from a tonnage limitation, but specific food aid terminals could be affected.
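The threshold analysis described above amounts to counting voyages whose tonnage exceeds a candidate limit. A minimal sketch, using invented voyage tonnages (the report's actual figures come from fiscal year 2001 to 2003 shipment data, which are not reproduced here):

```python
# Hypothetical illustration of the tonnage-limit analysis. The voyage
# tonnages below are invented for the sketch, not the report's data.

def share_above_limit(voyage_tons, limit):
    """Return the percentage of voyages carrying more than `limit` tons."""
    above = sum(1 for t in voyage_tons if t > limit)
    return 100.0 * above / len(voyage_tons)

voyages = [1200, 2600, 3100, 4800, 7600, 2200, 5400, 900, 8100, 3300]
for limit in (7500, 5000, 2500):
    pct = share_above_limit(voyages, limit)
    print(f"limit {limit:>5} tons: {pct:.0f}% of voyages above the limit")
```

As the report notes, a lower limit catches a larger share of voyages, so the choice of threshold drives most of the estimated impact.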
To carry out its responsibilities under the nation’s environmental laws, EPA conducts an array of activities, such as promulgating regulations; issuing and denying permits; approving state programs; and issuing enforcement orders, plans, and other documents. Many of these activities may be subject to legal challenge. Generally, the federal government has immunity from lawsuits, but federal laws authorize three types of suits related to EPA’s implementation of environmental laws. First, most of the major environmental statutes include “citizen suit” provisions authorizing citizens—including individuals, associations, businesses, and state and local governments—to sue EPA when the agency fails to perform an action mandated by law. These suits are often referred to as “agency-forcing” or “deadline” suits. Second, the major environmental statutes typically include judicial review provisions authorizing citizens to challenge certain EPA actions, such as promulgating regulations or issuing permits. Third, the Administrative Procedure Act authorizes challenges to certain agency actions that are considered final actions, such as rulemakings and decisions on permit applications. As a result, even if a particular environmental statute does not authorize a challenge against EPA for a final decision or regulation, the Administrative Procedure Act may do so. Table 1 lists key environmental laws under which EPA takes actions—or that govern EPA actions—that may be subject to challenge in court. Supporters of provisions allowing legal challenges to actions of the federal government assert that they provide a check on the authority of federal agencies as they carry out—or fail to carry out—their duties. For example, in passing the 1977 Clean Air Act amendments, a key sponsor indicated that authorizing citizens to sue agencies to compel them to carry out their duties is integral to a democratic society. 
According to others, citizen suits against government agencies have achieved benefits, such as ensuring the implementation of congressional directives or accelerating regulatory programs. Similarly, the Administrative Procedure Act arose out of the expansion of the federal government in the New Deal, with concerns about agencies’ adjudicative powers, their exercise of delegated legislative power by rulemakings, and the scope of review of agency administrative action by courts. A lawsuit challenging EPA’s failure to act may begin when the aggrieved party sends EPA a notice of intent to sue, if required, while a lawsuit challenging a final EPA action begins when a complaint is filed in court. Before EPA takes final action, the public or affected parties generally have opportunities to provide comments and information to the agency. In addition, administrative appeals procedures are available—and in many cases required—to challenge EPA’s final action without filing a lawsuit in a court. For example, citizens can appeal an EPA air emission permit to the agency’s Environmental Appeals Board. These administrative processes provide aggrieved parties with a forum that may be faster and less costly than a court. If a party decides to pursue a case, the litigation process generally involves filing of a complaint, formal initiation of the litigation; motions to the court before trial, such as asking for dismissal of the case; and hearings and court decisions. Throughout this process, the parties to the litigation can decide to reach a settlement. Negotiations between the aggrieved party and EPA may occur anytime after the agency action, at any point during active litigation, and even after judgment. A common remedy sought in litigation against EPA under the statutes listed in table 1 is for the court to set aside an EPA regulation or permit decision and to require EPA to reconsider that regulation or permit decision. 
In the United States, parties involved in litigation generally pay their own attorney fees and costs, except in instances in which Congress has provided exceptions for policy reasons, such as to encourage citizens to bring suits to enforce the law. In these instances, as well as under some common-law exceptions, a prevailing plaintiff may seek award of its attorney fees and court costs from the losing party. Many of the environmental statutes in table 1 contain such exceptions authorizing courts to award fees, which, according to Justice, include awards against the federal government. In 1980, Congress enacted the Equal Access to Justice Act (EAJA), authorizing the award of attorney fees and costs to parties that prevail in certain lawsuits against the federal government; the payments are made from Treasury’s Judgment Fund and agency appropriations. Before EAJA was enacted, the federal government was already subject to some of these exceptions in environmental statutes, but in many other cases it was not and therefore was not authorized to pay fees and costs to prevailing parties. As the 1980 conference committee report for EAJA explains, the act’s premise is that individuals, corporations, partnerships, and labor and other organizations do not seek review of or defend against unreasonable government actions because of the expense involved, as well as a disparity in expertise and resources between the government and the individual or organization involved. For those cases brought under statutes that do not make the federal government subject to paying fees and costs, EAJA thus allows payment of attorney fees and other costs to organizations that sought review of a government action and prevailed. (See app. II for a detailed description of the act.) 
Except as otherwise specifically provided by law, EAJA authorizes the award of the following costs, to be paid from Treasury’s Judgment Fund or an agency’s appropriations, as indicated. First, it authorizes award of the court costs of parties prevailing against the United States in any civil action. These costs may include fees for the clerk and marshal, reporter, printing, witnesses, copies, docket fees, and interpreters and court-appointed experts and may include an amount equal to the filing fees. Awards under this provision are generally paid from Treasury’s Judgment Fund. Second, it authorizes reasonable attorney fees and expenses of a prevailing party to the same extent as any other party where a statutory or common-law exception provides for award of fees to a prevailing party. Regarding the environmental statutes in table 1, according to Justice, many of the relevant provisions under which EPA may be sued provide for award of such fees against EPA, independent of EAJA. Nevertheless, EAJA makes EPA subject to fee awards under all the environmental statutes’ provisions authorizing courts to award attorney fees and expenses. Therefore, in many—but not all—of the environmental lawsuits against EPA, a court may award a prevailing party’s attorney fees and expenses against the agency, independently or as a result of EAJA section 2412(b). Awards under this provision, too, are generally paid from Treasury’s Judgment Fund. Third, it authorizes attorney fees and expenses of a prevailing party in most other cases—that is, when the relevant statute does not authorize courts to award attorney fees and expenses, and no common-law exception applies—unless the court finds that the position of the United States was substantially justified or that special circumstances make an award unjust. Two laws listed in table 1—the Federal Insecticide, Fungicide, and Rodenticide Act and the Federal Food, Drug, and Cosmetic Act—as well as some individual provisions of other statutes, do not authorize payment of fees to prevailing parties. 
As a result, in cases brought against EPA under these statutes and provisions, courts award payment of fees under EAJA section 2412(d). Payment of awards made under this section is generally made from agency appropriations. In addition, to settle a case, the government may agree to pay a plaintiff court costs and attorney fees and expenses. Payments made in connection with settlements are paid in the same manner as a court award for the case. Some in Congress have expressed concerns that the use of taxpayer funds to make EAJA payments depletes limited funding; these individuals have called for transparency of these expenditures. Originally, EAJA provided for governmentwide reporting on its use and cost. For judicial proceedings, EAJA required the Director of the Administrative Office of the U.S. Courts to report annually to Congress on EAJA court activity, including the number, nature, and amounts of awards; claims involved; and any other relevant information deemed necessary to aid Congress in evaluating the scope and effect of awards under the act. The responsibility for this reporting was transferred to the Attorney General in 1992. In addition, EAJA required the Chairman of the Administrative Conference of the United States to submit an EAJA report annually to Congress on administratively awarded fees and expenses. Then, in December 1995, the Federal Reports Elimination and Sunset Act of 1995 repealed the Attorney General’s reporting requirement for fees and expenses awarded under EAJA and also discontinued reporting of governmentwide administrative awards of fees and costs under EAJA after fiscal year 1994. We have previously reported certain governmentwide EAJA data, as well as data focused on selected agencies. In 1995, we reported data on the number of cases and amounts of awarded plaintiff attorney fees exceeding $10,000 against nine federal agencies for cases closed during fiscal years 1993 and 1994. 
In 1998, we provided information on the history of EAJA, the extent to which one provision of the act was used governmentwide from 1982 to 1994, and the provision’s use by the Department of Labor and other agencies. The governmentwide data for fiscal year 1994 showed, among other things, that the Departments of Health and Human Services and of Veterans Affairs accounted for most EAJA payments in court proceedings, under the provision that applies when the substantive law does not authorize award of attorney fees and costs. The number of environmental litigation cases brought against EPA each year from fiscal year 1995 through fiscal year 2010 varied but showed no discernible trend. According to the stakeholders we interviewed, a number of factors—particularly presidential administration, the passage of new regulations or amendments to laws, or EPA’s failure to meet statutory deadlines—affect the number of environmental litigation cases each year and the type of plaintiffs who bring them. The number of environmental litigation cases brought against EPA each year from fiscal year 1995 through fiscal year 2010 varied but did not change systematically over time. The average number of new cases filed each year was 155, ranging from a low of 102 new cases filed in fiscal year 2008 to a high of 216 cases filed in fiscal year 1997 (see fig. 1). From fiscal year 1995 through fiscal year 2001, the average number of new cases was 170; from fiscal year 2002 through fiscal year 2010, the average number of new cases was 144, a difference of 26 fewer new cases on average. The average number of new cases in these periods varied from the long-term average of 155 cases by less than 10 percent. In all, Justice defended EPA in nearly 2,500 cases from fiscal year 1995 through fiscal year 2010. 
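As a rough consistency check, the period averages reported above can be combined to recover the overall caseload; every number below is taken directly from the report, and small rounding differences against the "nearly 2,500" total are expected:

```python
# Consistency check on the caseload figures cited above (all values are
# the report's own; nothing new is assumed).

avg_9501 = 170      # average new cases per year, FY1995-FY2001 (7 years)
avg_0210 = 144      # average new cases per year, FY2002-FY2010 (9 years)
long_run_avg = 155  # average over all 16 fiscal years

total_cases = avg_9501 * 7 + avg_0210 * 9
print(f"implied total cases: {total_cases}")  # consistent with 'nearly 2,500'

for avg in (avg_9501, avg_0210):
    pct_dev = abs(avg - long_run_avg) / long_run_avg * 100
    print(f"period average {avg}: {pct_dev:.1f}% from the long-run average")
```

Both period averages come out within 10 percent of the long-run average, matching the report's statement.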
The greatest number of cases (216) was filed in fiscal year 1997; according to a Justice official, this may be because EPA revised its national ambient air quality standards for ozone and particulate matter in 1997, which may have prompted some groups to sue. In addition, according to the same official, in 1997 EPA implemented a “credible evidence” rule, which was the subject of additional lawsuits. The fewest cases against EPA (102) were filed in fiscal year 2008, and Justice officials were unable to pinpoint any specific reasons for the decline. In fiscal years 2009 and 2010, the caseload increased. A Justice official said that it is difficult to know why the number of cases might increase because litigants sue for different reasons, and some time might elapse between an EPA action and a group’s decision to sue. As shown in figure 2, most cases against EPA were brought under the Clean Air Act, which accounted for about 59 percent of the approximately 2,500 cases filed during the 16-year period of our review. Cases filed under the Clean Water Act represented the next largest group (20 percent), and cases under the Resource Conservation and Recovery Act represented the third largest group (6 percent). The lead plaintiffs filing cases against EPA during the 16-year period fit into several categories. The largest category comprised trade associations (25 percent), followed by private companies (23 percent), local environmental groups and citizens’ groups (16 percent), and national environmental groups (14 percent). Individuals, states and territories, municipal and regional government entities, unions and workers’ groups, tribes, universities, and a small number of others we could not identify made up the remaining plaintiffs (see table 2). Appendix I gives more information about our method of developing these categories and classifying cases. 
According to the stakeholders we interviewed, a number of factors—particularly a change in presidential administration, the passage of regulations or amendments to laws, and EPA’s failure to meet statutory deadlines—affect plaintiffs’ decisions to bring litigation against EPA. Stakeholders did not identify any single factor driving litigation but instead attributed litigation to a combination of different factors. According to most of the stakeholders we spoke with, the presidential administration is an important factor in groups’ decisions to bring suits against EPA. Some stakeholders suggested that a new administration viewed as favoring less enforcement of requirements under environmental statutes could spur lawsuits from environmental groups in response, while industry groups could sue to delay or prevent the administration’s actions. Other stakeholders suggested that if an administration is viewed as favoring greater enforcement of rules, industry may respond to the increased activity by bringing suit against EPA to delay or prevent the administration’s actions, while certain environmental groups may bring suit with the aim of ensuring that required agency actions are completed during an administration they perceive as having views similar to the groups’ own. Most of the stakeholders also suggested that the development of new EPA regulations or the passage of amendments to environmental statutes may lead parties to file suit against the new regulations or against EPA’s implementation of the amendments. When EPA issues new or amended regulations, parties may take issue with the specific new provisions. 
One stakeholder noted that an industry interested in a particular issue may become involved in litigation related to the development of regulations because it wishes to be part of the regulatory process and negotiations that result in a mutually acceptable rule. In addition, several of the stakeholders noted that if EPA does not meet its statutory deadlines, organizations or individuals might sue to enforce the deadline. In such suits, interested parties seek a court order or a settlement requiring EPA to implement its statutory responsibilities. In addition, some stakeholders said that some statutes are broadly written or contain vague language or definitions; such statutes are more likely to be litigated because different parties want to define the terms and set precedent for future cases. For example, a stakeholder representing states’ perspectives said that under the Clean Water Act, an area of frequent litigation is the definition of “navigable waters.” Through lawsuits, litigants have argued about whether a certain body of water comes within the definition and can therefore be regulated under the act. A few stakeholders identified two other factors that may affect litigation: (1) the maturity of the statute in question and (2) the use of existing laws to address new problems. The stakeholders said that the focus of litigation over a particular statute changes with time, as early cases may set precedents that will affect how the statute is implemented later. Also, a representative of an environmental organization said that because no major rewriting of any environmental statutes has occurred in 20 years, plaintiffs are increasingly bringing suits, and judges are making decisions, about how to interpret statutes in situations for which rules were not explicitly written. 
For example, parties disagree over whether the Clean Air Act should be used to regulate greenhouse gases, such as carbon dioxide, methane, and nitrous oxide—substances that some stakeholders say the act was not originally designed to regulate. Data available from Justice, Treasury, and EPA show that the costs associated with environmental litigation cases against EPA have varied from year to year with no discernible trend. Justice’s Environment and Natural Resources Division spent a total of about $43 million to defend EPA in these cases from fiscal year 1998 to fiscal year 2010, averaging $3.3 million per year. Some cost data from the Department of Justice are not available, however, in part because Justice’s Environment and Natural Resources Division and the U.S. Attorneys’ Offices do not have a standard approach for maintaining key data for environmental litigation cases. For example, while the Environment and Natural Resources Division tracks attorney hours by case, the U.S. Attorneys’ Offices do not. Treasury paid a total of about $14.2 million to prevailing plaintiffs for attorney fees and costs related to these cases from fiscal years 2003 through 2010, averaging about $1.8 million per year. EPA paid a total of $1.4 million from fiscal year 2006 through fiscal year 2010 in attorney fees and costs, averaging about $280,000 per year. Our analysis of data from Justice’s Environment and Natural Resources Division found that from fiscal year 1998 through fiscal year 2010, Justice spent at least $3.3 million on average annually to defend EPA against environmental litigation, for a total of $43 million. (The Environment and Natural Resources Division fiscal year 2010 budget was $110 million.) The U.S. Attorneys’ Offices’ database, however, does not contain information on attorney hours worked by case, which meant that we could not include the time these attorneys spent on each case in our estimate. 
According to Justice officials, however, the $3.3 million average per year represents the majority of Justice’s time spent defending EPA each year, given that the U.S. Attorneys’ Offices handle a small number of environment-related cases each year. Overall, as shown in figure 3, annual costs increased by an average of about 3 percent each year from fiscal year 1998 through 2010, ranging from a low of $2.7 million in fiscal year 1998 to a high of $3.9 million in fiscal year 2007. Justice maintains separate, decentralized databases containing environmental case information and does not have a standard approach for collecting and entering data on these cases. Without a standard approach, it is difficult to identify and summarize the full set of environmental litigation cases and costs managed by the department agencywide. Specifically, the department’s Environment and Natural Resources Division and the U.S. Attorneys’ Offices maintain different case management systems, and these systems do not use the same unique number to identify cases, making it possible to track cases within each component but not to align and merge cases from the two components. Because the U.S. Attorneys’ Offices may assist the Environment and Natural Resources Division in certain case activities, a single case may appear in both systems, each with a different unique identifier. The only piece of data in both databases that can in practice be used to identify cases managed by both components is the court number, yet neither system has adopted the standard court number format used in the federal judiciary’s Public Access to Court Electronic Records system, an electronic service that allows public access to case and docket information from federal appellate, district, and bankruptcy courts. According to an official of the Executive Office for U.S. 
Attorneys’ Offices may enter the court numbers in the specific formats used by the courts in their individual jurisdictions, although the official also said that there is no formal or written guidance for proper format of court numbers. Without such standard identifying numbers, it is difficult to identify a full and unduplicated list of environmental litigation cases and to derive descriptive statistics on costs, statute, or opposing parties. Because the department’s data on environmental litigation cannot be reliably merged or aggregated to provide summary information on environmental cases, we had to use an iterative electronic and manual process to compile data from the two systems to conduct our review and identify the full set of environmental litigation cases and associated costs. Moreover, not only are the two Justice databases separate, but the two agency components do not collect the same types of data on environmental cases. Specifically, the U.S. Attorneys’ database does not collect data on the number of hours attorneys spend on an individual case or information on the statute under which a case is filed. As a result, it is impossible to gather complete data on all environmental litigation cases and costs from these databases. For example, we were unable to calculate the total number of hours that Justice attorneys worked on environmental cases—and hence, total costs of attorney time—because the U.S. Attorneys’ time is not tracked by case. By employing an iterative electronic and manual process to standardize the court numbers associated with all cases and matching cases from the two systems by these numbers, we were ultimately able to merge the two sets of data on environmental litigation cases managed by Justice’s Environment and Natural Resources Division and the U.S. Attorneys’ Offices for purposes of this report. 
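The court-number matching just described can be sketched as follows. The record layouts and the raw number formats here are assumptions for illustration; the actual Justice systems and the PACER-style target format differ in detail:

```python
import re

# Illustrative sketch of matching cases across two databases by
# normalizing court numbers to one canonical, PACER-like format
# (e.g., "1:03-cv-00123"). Record contents are hypothetical.

def normalize_court_number(raw):
    """Reduce assorted local court-number formats to one canonical form."""
    m = re.search(r"(\d+)[:\-](\d{2})-?(cv|cr)-?(\d+)", raw.lower().replace(" ", ""))
    if not m:
        return None  # unrecognized format; would need manual review
    office, year, case_type, seq = m.groups()
    return f"{office}:{year}-{case_type}-{int(seq):05d}"

# Same case, recorded with different identifiers and number formats
enrd_cases = {"1:03-cv-123": "ENRD-A"}  # Environment and Natural Resources Division
usao_cases = {"1:03cv00123": "USAO-B"}  # U.S. Attorneys' Offices

merged = {}
for db in (enrd_cases, usao_cases):
    for raw, case_id in db.items():
        key = normalize_court_number(raw)
        merged.setdefault(key, []).append(case_id)

# Both records now fall under the same canonical court number
print(merged)
```

Records that cannot be matched electronically (a `None` key, or plausible near-matches) correspond to the manual review step the report describes.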
Justice officials said, however, that they do not plan to change their approach to managing the data because they use the data in each system to manage individual cases, not to identify and summarize agencywide data on cases or trends. Officials said that their systems were designed for internal management purposes and not agencywide statistical tracking. Furthermore, while funds are spent to maintain the systems, officials indicated that the systems are old, and adding data fields or otherwise making changes to the systems may be technically infeasible or too costly. Justice officials said that the department previously sought to develop and implement a single case management system to gather common data agencywide, but the project was terminated in 2010 after a 2009 Office of the Inspector General report found that the project was more than 2 years behind the initial estimated completion date and that the project’s total cost would be more than $18 million over budget. Because the two Justice components are not regularly required to merge and report their data in a systematic way, we are not making a recommendation regarding these data or systems. In addition to Justice’s costs of defending EPA, costs of litigation include payment of attorney fees and court costs to plaintiffs who prevail in lawsuits against EPA. As part of the payment process, Justice negotiated payment amounts with prevailing parties before finalizing the amount to be paid. For most of the claims under the 10 environmental statutes in this report, payments to successful plaintiffs were made from Treasury’s Judgment Fund. Justice defended approximately 2,500 EPA-related cases filed from fiscal year 1995 through fiscal year 2010, but the number of environmental litigation cases from which plaintiffs received payments was small, representing about 8 percent of all cases. In addition, EPA made a small number of payments for attorney fees and costs under the appropriate provision of EAJA. 
From fiscal year 2003—the first year for which Treasury’s Judgment Fund data are available—through fiscal year 2010, Treasury made, on average, 26 payments totaling $1.8 million per year for EPA-related environmental cases. The average Judgment Fund payment was $68,600. Treasury paid a total of about $14.2 million out of its Judgment Fund to prevailing plaintiffs for attorney fees and costs related to these cases (see fig. 4). The largest share of the monies (46 percent) was paid in cases filed by national environmental groups, followed by monies paid in cases filed by local environmental and citizens’ groups (29 percent). The payments ranged from as little as $145, to the administrator of a law school clinic for a Clean Air Act suit, to as much as $720,000, to a private law firm for a Clean Water Act suit. According to Justice officials, payments are made either to the plaintiff or to the plaintiff’s attorneys. Appendix III lists payments from Treasury’s Judgment Fund for the environmental statutes in our review. Fluctuations in annual payments may occur, according to Justice officials, because payments to plaintiffs can be made several years after a case is completed, in part because Justice attempts to negotiate settlements of attorney fee claims before seeking a determination by the courts regarding claims that cannot be settled. Officials said that through this process of negotiation, the department pays plaintiffs, in the majority of cases, an amount that is much lower than requested. 
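The averages cited above follow directly from the reported totals; a quick check, using only the report's rounded figures (so small differences against the reported $68,600 per payment are expected):

```python
# Arithmetic behind the Judgment Fund figures cited above. All inputs
# are the report's rounded numbers, not the underlying payment records.

total_paid = 14.2e6       # dollars paid, FY2003-FY2010
years = 8                 # FY2003 through FY2010
payments_per_year = 26    # average payments per year

annual_avg = total_paid / years
per_payment = total_paid / (payments_per_year * years)
print(f"average per year: ${annual_avg:,.0f}")      # about $1.8 million
print(f"average per payment: ${per_payment:,.0f}")  # near the reported $68,600
```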
To determine attorney fees for each case, Justice considers, among other things, documentation by the plaintiff, including such factors as (1) the number of hours the plaintiff’s attorneys spent on the case, which must be documented by the plaintiff; (2) the job description of the person spending time on the case (e.g., the costs for a paralegal and a lead counsel would be very different); (3) the specific tasks performed; and (4) applicable law in the jurisdiction, such as limits on hourly attorney fees or total amounts that courts have approved in the past. Although Justice may conclude that the hours are justified, fees may still be denied because of court precedent. Each time fees are negotiated, depending on the amount, Justice’s Assistant Attorney General or the relevant Environment and Natural Resources Division Section Chief must approve the result, pursuant to applicable regulations and delegations. From fiscal year 2006—the first year for which EPA specifically tracked the payments by type of claim—through fiscal year 2010, EPA made 14 payments, totaling $1.4 million, for attorney fees and other costs under EAJA. EPA made an average of 2.8 payments per fiscal year, with an average payment of about $100,000. On average, EPA paid about $280,000 per year. The largest share of the monies (61 percent) went to payments for claims filed by local environmental groups, followed by monies (23 percent) for claims filed by national environmental groups. Although workers’ groups filed comparatively few lawsuits, one such group did receive a single payment of $230,000 in fiscal year 2010 (see fig. 5). The EPA payments ranged from $1,179, which was paid to an individual for a Clean Water Act suit in 2010, to $472,967, which was paid to an environmental group for two Clean Water Act suits, including one appeal. Appendix III contains a list of payments by payee. We provided a draft of this report to EPA, Justice, and Treasury for their review and comment. 
EPA did not provide comments, and Justice and Treasury had technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Attorney General of the United States, the Secretary of the Treasury, the Administrator of EPA, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report describes (1) trends, if any, in environmental lawsuits against the Environmental Protection Agency (EPA) from fiscal year 1995 through fiscal year 2010, as well as stakeholders’ views of factors affecting any trends, and (2) Justice’s recent costs for representing EPA in defensive environmental lawsuits and the federal government’s recent payments to plaintiffs. To examine the changes over time to EPA’s environmental litigation caseload, we obtained and analyzed data on lawsuits filed against the agency from databases maintained by two components within the Department of Justice—the Case Management System database maintained by Justice’s Environment and Natural Resources Division and the Legal Information Office Network System database maintained by Justice’s U.S. Attorneys’ Offices. We obtained and analyzed data from these databases for lawsuits: filed in federal court from fiscal year 1995 through fiscal year 2010 (Oct. 1, 1994, through Sept. 
30, 2010); in which EPA was the lead defendant, excluding cases in which EPA was a defendant but the lead defendant identified by Justice was another agency, such as the U.S. Army Corps of Engineers; brought under 10 major environmental statutes implemented by or applying to EPA, including the Clean Air Act; Clean Water Act; Safe Drinking Water Act; Resource Conservation and Recovery Act; Comprehensive Environmental Response, Compensation, and Liability Act (Superfund); Emergency Planning and Community Right- to-Know Act; Federal Insecticide, Fungicide, and Rodenticide Act; Federal Food, Drug, and Cosmetic Act; Toxic Substances Control Act; and the Endangered Species Act as it applies to EPA. We excluded cases filed under the National Environmental Policy Act (NEPA) because these cases are managed by a number of sections within the Environment and Natural Resources Division, and because, according to Justice officials, few cases are filed under NEPA with EPA as the lead defendant. We also excluded the Freedom of Information Act, Discrimination in Federal Employment Act, Fair Labor Standards Act, and other generally applicable laws because the intent was to focus on challenges to EPA’s core work in implementing environmental laws. Likewise, we excluded bankruptcy cases and cases heard in state court unless they were moved to federal court. To determine if the data were reliable for our purposes, we checked them for completeness and legitimate values. When we were uncertain of the data’s accuracy, we requested clarification from the source of the data. Within each database, we checked for duplicate records and either combined data across records into one record or removed unnecessary records. To compile a list of all cases of EPA lawsuits, we needed to identify duplicate cases across the two databases. Because the common field in the two systems—court number—is not kept in the same format, it was necessary for us to standardize court numbers into one format. 
To do so, we used the standard court number format used in the federal judiciary’s Public Access to Court Electronic Records (PACER) system, an electronic public-access service that allows users to obtain case and docket information from federal appellate, district, and bankruptcy courts. After electronically processing reports of matched and unmatched cases, we conducted extensive manual review of the data to (1) confirm that matched cases from the two databases were in fact the same and (2) identify cases that were the same but were still not found with the electronic process. Manual checks of selected individual court cases were performed using the PACER system to correct information, such as EPA’s role in the case, the names of plaintiffs, and court numbers. We analyzed selected data elements—such as plaintiffs’ names, filing and disposition dates, and relevant statute—over time to identify any trends in litigation. We also used the data on plaintiffs to identify categories of plaintiffs that have filed suit against EPA. To do this analysis, we used a process known as content analysis, searching national databases for information on each plaintiff and then using this information to code the plaintiffs according to rules developed by our internal team of analysts and specialists in program evaluation methods. Our team created 13 categories into which plaintiffs were coded (see table 3). We evaluated the reliability of our plaintiff categories using two pretests on simple random samples of 40 and 41 plaintiffs, respectively. A minimum of five analysts independently coded the samples to ensure they had a common understanding of the categories and made the same coding decisions. For each pretest, we estimated the analysts’ agreement rates adjusted for the possibility of agreement by chance. These “kappa” statistics estimate the reliability of each category. 
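The court-number standardization and cross-database matching described earlier in this appendix can be sketched as follows. This is only an illustration: the canonical docket format, field names, and parsing rules here are assumptions, not the actual conventions GAO or Justice used.

```python
import re

def normalize_court_number(raw):
    """Normalize a docket string to one canonical form, e.g.
    '1:05-CV-1234' -> '1:05-cv-01234'. The canonical format here is an
    illustrative assumption."""
    s = raw.strip().lower().replace(" ", "")
    m = re.match(r"(\d+):(\d{2})-?([a-z]{2})-?0*(\d+)$", s)
    if m is None:
        return None  # could not parse; route to manual review
    office, year, case_type, seq = m.groups()
    return f"{office}:{year}-{case_type}-{int(seq):05d}"

def match_cases(cms_records, lions_records):
    """Match records from two databases on the normalized court number.
    Unmatched records are returned separately for manual review
    (e.g., looking the case up in PACER)."""
    index = {normalize_court_number(r["court_no"]): r for r in cms_records}
    index.pop(None, None)  # drop unparseable keys
    matched, unmatched = [], []
    for r in lions_records:
        key = normalize_court_number(r["court_no"])
        if key in index:
            matched.append((index[key], r))
        else:
            unmatched.append(r)
    return matched, unmatched
```

In this sketch, records that fail to parse or fail to match fall out into a residual list, mirroring the manual-review step the appendix describes.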
In the first pretest, the analysts agreed 74 percent of the time across all categories and 71 to 91 percent of the time for the individual categories other than “unknown,” using the combined category of “local environmental and citizens’ groups.” On the basis of the results of the first pretest, we refined the definitions of the categories and conducted the second pretest. In the subsequent pretest, the analysts agreed 87 percent of the time across all categories and 84 to 95 percent of the time for the individual categories other than “unknown” and “other.” These agreement rates suggested that the analysts could reliably classify the plaintiffs according to common standards in academic literature on intercoder agreement. Classifying the plaintiffs helped us quantify the number of cases brought each year against EPA by different types of groups. After validating the categories, we searched in public databases of organizations for information that would allow us to classify each plaintiff. We used the Nexis Encyclopedia of Associations and the Nexis Company Profile data systems, both of which identify organizations by North American Industry Classification System and Standard Industrial Classification. To the extent possible, we used these codes to classify plaintiffs. If these sources were not sufficient, we searched the Web pages of each organization for self-reported information. For the “individual” category of plaintiff, we confirmed through court records that those people were in fact suing as private individuals and not, for example, as mayors or attorneys general of a state. In some cases, insufficient information was available in Justice’s databases to determine a given plaintiff’s identity. In such cases, we looked up the case in the PACER system. Six analysts conducted the content analysis of plaintiffs in the Case Management System and the Legal Information Office Network System. 
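The chance-adjusted agreement statistic described above is conventionally Cohen's kappa. A minimal two-coder version might look like this; the category labels and codings below are invented for illustration, not drawn from GAO's data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement between two coders, adjusted
    for the agreement expected by chance given each coder's marginal
    category frequencies."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Two analysts independently code eight plaintiffs (labels illustrative):
a = ["industry", "industry", "state", "env_group", "env_group",
     "individual", "state", "industry"]
b = ["industry", "state", "state", "env_group", "env_group",
     "individual", "state", "env_group"]
# Here observed agreement is 0.75 and chance agreement 0.25,
# so kappa = (0.75 - 0.25) / (1 - 0.25) = 2/3.
```

Because kappa discounts agreement that would occur by chance alone, it is a stricter measure than the raw agreement percentages quoted above.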
Discrepancies in coding were discussed, and agreement was reached among the analysts or resolved through a group analyst review. To obtain stakeholder perspectives on environmental litigation trends and the factors that underlie them, we interviewed officials from EPA and Justice; representatives from the offices of five state attorneys general and one state environment department; representatives from six environmental groups; and six industry trade associations. We also spoke with a representative of the National Association of Attorneys General. Additionally, we interviewed one academic expert who has published extensively on environmental litigation in legal journals (see table 4). We selected these representatives on the basis of input from government officials and other interviewees. We asked the interviewees for their perspectives about factors that can affect trends in the types of lawsuits against EPA. We then performed a content analysis to group and summarize their responses. Not all stakeholders provided views on all issues, and statements from our sample of stakeholders cannot be generalized to all groups. To determine Justice’s costs for representing EPA in defensive environmental lawsuits and the government’s payments to plaintiffs, we obtained data on three components of costs: (1) Justice’s costs for its attorneys’ time defending EPA, (2) payments for attorney and other costs from the Department of the Treasury’s Judgment Fund for some cases that the government lost, and (3) payments for attorney and other costs by EPA for some cases that the government lost. For the first component, we obtained data from Justice on the number of cases per year that involved any of the 10 statutes in our scope, as well as the number of hours Justice attorneys spent working on these cases. 
To calculate costs, we multiplied the total hours worked in a given year by that year’s average hourly attorney pay rate—ranging from $41 to $66 for fiscal years 1998 through 2010—which we received from Justice. To adjust for uncompensated overtime, we reduced the reported annual hours the attorneys worked by 15 percent, an amount that Justice estimated represents overtime worked by its attorneys. To adjust the attorneys’ salaries to include benefits and related agency overhead, we increased the attorneys’ salaries by 84.3 percent, a factor that was provided to us by Justice on the basis of its actual 2009 costs. To ensure that attorney costs are comparable across years, we adjusted annual pay rates by applying the consumer price index for all urban consumers from the Department of Labor, Bureau of Labor Statistics, inflating all pay rates to constant 2010 dollars. When we reported single payments, however, we did not adjust these figures to constant dollars. To determine the second and third components of litigation costs—Treasury’s Judgment Fund and EPA’s payments to plaintiffs—we obtained and analyzed data from Treasury and EPA. First, we obtained and analyzed data from the Department of the Treasury’s Judgment Fund Internet Claims System, which tracks the progress of plaintiffs’ claims for Judgment Fund payments from the time they are sent to Treasury until the time they are paid. To identify data on payments related to the environmental statutes in our scope, we matched Treasury’s data with data from Justice’s two databases and eliminated payments that did not correspond with cases in our scope. When information was determined to be missing, we asked Treasury to provide us with additional information.
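The cost adjustments just described reduce to a short formula. In this sketch, the 15 percent overtime reduction and the 84.3 percent benefits-and-overhead factor come from the text; the hours, hourly rate, and CPI index values are invented for illustration:

```python
def attorney_cost(hours_reported, hourly_rate, cpi_base, cpi_2010):
    """Estimate annual attorney cost in constant 2010 dollars: reduce
    reported hours by 15% for uncompensated overtime, load the hourly
    rate by 84.3% for benefits and agency overhead, then inflate to
    2010 dollars using the ratio of CPI index values."""
    hours = hours_reported * (1 - 0.15)
    loaded_rate = hourly_rate * (1 + 0.843)
    rate_2010 = loaded_rate * (cpi_2010 / cpi_base)
    return hours * rate_2010

# Illustrative inputs: 10,000 reported hours at a $50 nominal hourly
# rate; the CPI index values here are hypothetical.
cost = attorney_cost(10_000, 50.0, cpi_base=195.3, cpi_2010=218.1)
```

Applying the same constant-dollar conversion to every year's figure is what makes the annual cost estimates comparable across the fiscal years studied.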
In particular, we learned that Treasury’s data included payments that were issued but were not cashed or were returned; we worked with Treasury to remove these payments to avoid counting them as actual payments and overstating the amount paid from the Judgment Fund. We deleted Superfund cases because we were unable to discern from available information whether the Superfund-related payments were for attorney fees and court costs or for reimbursements of site cleanups, a different category of payment from what is within our scope. Similarly, to identify EPA payments to plaintiffs within our scope under the Equal Access to Justice Act, we obtained EPA data on payments made to plaintiffs and manually matched these cases with the cases in Justice’s two databases. When certain case information was determined to be missing, we did additional research on these cases using PACER and corrected the data. Inconsistent formatting of key data elements complicated our analysis and required extensive manual review by us and Justice. If we did not find the necessary information from available sources, we asked EPA to send us relevant portions of the internal voucher packages used to request payment. We conducted this performance audit from June 2010 through July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In the United States, parties involved in litigation generally bear their own attorney fees and costs.
For policy reasons, including encouraging citizens to bring suits to enforce the law, Congress has provided exceptions to this rule for cases brought under several statutes, such as the Civil Rights Act. In these instances, as well as some common-law exceptions, a prevailing plaintiff may seek awards of its attorney fees and court costs from the losing party. Historically, the federal government had sovereign immunity from some of these exceptions, but in some instances, the statutes also waived sovereign immunity so that a court could award fees and costs against the federal government, as well as a private party. According to Justice, many of the key environmental statutes’ provisions authorizing award of attorney fees and costs apply to the federal government. For example, EPA pays attorney fees under several provisions of the Clean Air Act and the Clean Water Act. Furthermore, in 1980, the Equal Access to Justice Act (EAJA) was enacted to waive sovereign immunity for the remaining statutes authorizing award of fees and costs, as well as to authorize the awarding of fees and costs in other cases. As the 1980 conference committee report for EAJA explains, the act’s premise is that individuals, corporations, partnerships, and labor and other organizations did not seek review of or defend against unreasonable government actions because of the expense involved, which was compounded by the disparity in expertise and resources between the government and the individual or organization involved. EAJA was intended to help certain individuals, partnerships, corporations, and labor and other organizations by paying the attorney fees and other costs if the federal government brought an administrative or judicial action and lost because the action was not substantially justified. 
EAJA seeks to (1) encourage parties that are the subject of unreasonable federal government action to seek reimbursement for attorney fees and other costs, (2) restrain overzealous regulators, and (3) ensure that the government pays for the cost of refining and formulating public policy. EAJA authorizes the award of the following:

- Court costs of prevailing parties against the United States in any civil action. These costs may include fees for the clerk and marshal, reporter, printing, witnesses, copies, docket fees, and interpreters and court-appointed experts and may include an amount equal to the filing fees.

- Attorney fees and expenses against the United States of a prevailing party to the same extent as any other party, codified at 28 U.S.C. § 2412(b) and hereinafter referred to as “subsection b.” That is, where there is a statutory or common-law exception that provides for award of fees to a prevailing party, such exceptions also apply to the federal government. Regarding the 10 environmental statutes covered in this report, many of the relevant provisions under which EPA may be sued provide for award of such fees. However, EAJA makes EPA subject to fee awards under all the environmental statutes’ provisions authorizing courts to award attorney fees and expenses. According to Justice, many of the environmental suits against EPA involve provisions that authorize fee awards independent of EAJA, but a small number may fall into EAJA subsection b. A feature of this subsection is that it does not itself limit the eligibility of prevailing plaintiffs, nor expressly limit the hourly rate of attorney fees; however, the statute requires that the fees be “reasonable.” Additionally, any award of fees made under this section is subject to any limitations that would apply to analogous awards against private parties, as may be provided by the underlying statute.

- Attorney fees and expenses of a prevailing party in cases even when no statutory or common-law exception exists to make a private defendant liable for such fees, unless the court finds that the position of the United States was substantially justified or that special circumstances make an award unjust. This subsection of EAJA, codified at 28 U.S.C. § 2412(d) and hereinafter referred to as “subsection d,” authorized the award of these fees against the federal government in civil court actions, while another subsection authorized the award of these fees in certain agency adjudications such as when a party files an appeal of an agency decision to the EPA Environmental Appeals Board.

Two of the 10 laws covered in this report—the Federal Insecticide, Fungicide, and Rodenticide Act and the Federal Food, Drug, and Cosmetic Act—as well as some individual provisions of other statutes, do not authorize payment of fees to prevailing parties. Cases brought against EPA under these statutes and provisions, then, fall into EAJA subsection d. This subsection limits the prevailing plaintiff’s eligibility to receive payment by defining an eligible party as, at the time the lawsuit is filed, either an individual with a net worth below $2 million or a business owner or any partnership, corporation, association, local government, or organization with a net worth below $7 million and fewer than 500 employees. Tax-exempt nonprofit organizations and certain agricultural marketing cooperatives are considered parties regardless of net worth. Payments of attorney fees by federal agencies under statutes independently authorizing awards against federal agencies and under subsection b are made from the Judgment Fund, which is a permanent, indefinite appropriation available to pay many money judgments against the United States. Payments of attorney fees by federal agencies under subsection d are generally made from agency appropriations.
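The subsection d eligibility thresholds described above can be expressed as a simple check. This is only a sketch of the dollar and employee thresholds stated in the text; the dictionary keys are illustrative, and real eligibility determinations involve further statutory nuances:

```python
def eligible_under_subsection_d(party):
    """Sketch of the EAJA subsection d eligibility thresholds,
    measured at the time the lawsuit is filed. 'party' is a dict with
    illustrative keys; this is not a complete legal test."""
    # Tax-exempt nonprofits and certain agricultural marketing
    # cooperatives qualify regardless of net worth.
    if party.get("tax_exempt_nonprofit") or party.get("agricultural_coop"):
        return True
    if party["kind"] == "individual":
        return party["net_worth"] < 2_000_000
    # Business owners, partnerships, corporations, associations,
    # local governments, and organizations:
    return party["net_worth"] < 7_000_000 and party["employees"] < 500
```

For example, under this sketch a corporation with $10 million in net worth would be ineligible unless it were a tax-exempt nonprofit.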
Table 5 summarizes key attributes of the three authorizing situations under which EPA may pay fees and costs. Originally, EAJA provided for governmentwide reporting on its use and cost. For judicial proceedings, EAJA required the Director of the Administrative Office of the U.S. Courts to report annually to Congress on EAJA court activity, including the number, nature, and amounts of awards; claims involved; and any other relevant information deemed necessary to aid Congress in evaluating the scope and effect of awards under the act. The responsibility for this reporting was transferred to the Attorney General in 1992. In addition, EAJA required the Chairman of the Administrative Conference of the United States to submit an EAJA report annually to Congress on administratively awarded fees and expenses. Then, in December 1995, the Federal Reports Elimination and Sunset Act of 1995 repealed the Attorney General’s reporting requirement for fees and expenses awarded under EAJA and also discontinued reporting of governmentwide administrative awards of fees and costs under EAJA after fiscal year 1994. Currently, there are no statutory requirements in effect for agency or governmentwide reporting of payments made under EAJA for either administrative or judicial proceedings. According to officials from the Administrative Conference of the United States, the conference has begun to obtain and compile such information for fiscal year 2010, noting that there has been continued interest in Congress (including pending legislation) regarding data about payments under EAJA. Officials told us the conference has requested EAJA data from 50 government agency conference members, as well as a few additional agencies that had previously reported EAJA activity to the conference. The chairman plans to publish a report for fiscal year 2010 later in 2011. We have previously reported certain governmentwide EAJA data, as well as data focused on selected agencies. 
In 1996, we reported data on the number of cases and amounts of awarded plaintiff attorneys’ fees exceeding $10,000 against nine federal agencies for cases closed during fiscal years 1993 and 1994. In 1998, we provided information on the history of EAJA, the extent to which one provision of the act was used governmentwide from 1982 to 1994, and the provision’s use by the Department of Labor and associated agencies. The governmentwide data showed, among other things, that the Departments of Health and Human Services and of Veterans Affairs accounted for most EAJA payments in court proceedings, under the provision that applies when the substantive law does not authorize award of attorney fees and costs. This appendix provides data on payments for attorney fees and court costs made by the Department of the Treasury for fiscal year 2003 through fiscal year 2010 and by the Environmental Protection Agency (EPA) for fiscal year 2006 through fiscal year 2010. Payments for attorney fees and expenses and court costs may be made to a plaintiff or directly to a plaintiff’s attorney. In cases involving multiple plaintiffs, one or more plaintiffs or their attorneys may receive payment. The first plaintiff named in the case title does not necessarily receive the payment. Table 6 shows payments from the Judgment Fund. In addition to payments from the Judgment Fund, EPA made payments under EAJA to successful plaintiffs. Table 7 shows payments made by EPA for fiscal year 2006 through fiscal year 2010. In addition to the individual contact named above, Susan Iott (Assistant Director), Jacques Arsenault, Elizabeth Beardsley, Jennifer Beveridge, Colleen Candrl, Ellen W. Chu, Bernice Dawson, Cindy Gilbert, Cynthia Grant, Anne K. Johnson, Rebecca Makar, Mehrzad Nadji, and Jeff Tessin made key contributions to this report.
|
The Environmental Protection Agency (EPA) faces numerous legal challenges as it implements the nation's environmental laws. Several statutes, such as the Clean Air and Clean Water Acts, allow citizens to file suit against EPA to challenge certain agency actions. Where EPA is named as a defendant, the Department of Justice provides EPA's legal defense. If successful, plaintiffs may be paid for certain attorney fees and costs. Payments are made from the Department of the Treasury's Judgment Fund--a permanent fund available to pay judgments against the government, as well as settlements resulting from lawsuits--or EPA's appropriations. For this review, GAO was asked to examine (1) the trends in and factors affecting environmental litigation for fiscal years 1995 through 2010 and (2) Justice's recent costs and recent plaintiff payments from the Judgment Fund and EPA. To conduct this review, GAO obtained and analyzed data from two Justice databases on cases filed under 10 key environmental statutes. To gain stakeholder views on any trends and factors that might affect them, GAO interviewed representatives of environmental and industry groups, state attorneys general, and other experts. GAO estimated the costs of litigation handled by Justice attorneys and payments made for attorney fees and court costs from the Judgment Fund and EPA funds. No trend was discernible in the number of environmental cases brought against EPA from fiscal year 1995 through fiscal year 2010, as the number of cases filed in federal court varied over time. Justice staff defended EPA on an average of about 155 such cases each year, or a total of about 2,500 cases between fiscal years 1995 and 2010. Most cases were filed under the Clean Air Act (59 percent of cases) and the Clean Water Act (20 percent of cases). 
According to stakeholders GAO interviewed, a number of factors--particularly a change in presidential administration, new regulations or amendments to laws, or EPA's not meeting statutorily required deadlines--affect environmental litigation. The costs borne by Justice, EPA, and Treasury also varied without a discernible trend from fiscal year 1998 through fiscal year 2010. Justice spent at least $43 million, or $3.3 million annually, to defend EPA in court during this time. In addition, owing to statutory requirements to pay certain successful plaintiffs for attorney fees and costs, Treasury paid about $14.2 million from fiscal year 2003 through fiscal year 2010--about $1.8 million per fiscal year--to plaintiffs in environmental cases. EPA paid approximately $1.4 million from fiscal year 2006 through fiscal year 2010--about $280,000 per fiscal year--to plaintiffs for environmental litigation claims under relevant statutes. (All amounts are given in constant 2010 dollars.) Justice officials said that they negotiate payments with the successful plaintiffs, who generally receive less than originally requested. Complicating efforts to analyze trends in cases and costs is that Justice maintains data on environmental cases in two separate data systems and does not have a standard approach for maintaining the data. As a result, it is difficult to identify and summarize the full set of cases and costs managed by Justice. Nonetheless, using an iterative electronic and manual process, GAO was able to merge the two sets of data for its purposes. Justice officials said that they do not need to change their approach to managing the data, however, because they do not use it to summarize case data agencywide. Moreover, the officials said they lack resources to adapt their aging systems to accept additional data. GAO is making no recommendations in this report. GAO provided a draft of this report to the agencies for comment. 
Justice and Treasury had technical comments, which were incorporated, while EPA had no comments.
|
From fiscal years 2007 through 2012, DOE’s budget requests rose in nominal terms from about $23.6 billion to $29.5 billion, and its appropriations rose over that period from about $23.8 billion to $26.3 billion, peaking at almost $33.9 billion in fiscal year 2009. DOE requested approximately $27.2 billion for fiscal year 2013, as shown in table 1. According to agency documents, in addition to aligning its fiscal year 2013 budget request with its strategic plan, DOE released a technology review in September 2011 that provided a framework for preparing budgets for some of its energy and science programs. Since then, according to these documents, DOE has worked closely with the Office of Management and Budget to develop, under its strategic plan, new priority goals—including maximizing the benefits of investments in scientific facilities—for fiscal year 2013. Through the Recovery Act, Congress provided approximately $8 billion for three existing DOE programs: (1) $0.4 billion in initial funding for the Advanced Research Projects Agency-Energy to support advanced energy research, (2) $2.5 billion for the Loan Guarantee Program to guarantee loans for innovative energy projects, and (3) $5 billion for the Weatherization Assistance Program to make energy efficiency improvements to the homes of low-income families. Since these funding increases were implemented, we reviewed the programs receiving the funds and made several recommendations intended to improve their management. In addition, under the Advanced Technology Vehicles Manufacturing loan program, which received some Recovery Act funds, DOE can provide up to $25 billion in loans for fuel-efficient vehicle projects, but at the time of our review, it could not be assured that projects would be delivered as agreed. We also recently reported that, among the 92 renewable energy-related initiatives DOE implemented in fiscal year 2010, the Recovery Act established 7 and increased funding for 36.
The America COMPETES Act of 2007 established the Advanced Research Projects Agency-Energy (ARPA-E) within DOE to overcome the long-term and high-risk technological barriers to the development of energy technologies. However, ARPA-E did not receive an appropriation until 2 years later, in 2009, in the Recovery Act. Including the Recovery Act funds and subsequent appropriations, ARPA-E has received about $855 million in appropriations. According to ARPA-E’s budget director, as of March 1, 2012, the program has awarded no more than the $521.7 million that, as we reported in January 2012, was provided to universities, public and private companies, and national laboratories to fund 181 projects that attempt to make transformational advances to a variety of energy technologies, including high-energy batteries and renewable fuels. This official told us that ARPA-E has not yet selected award recipients for fiscal year 2012. Award winners must meet cost-share requirements, through either in-kind contributions or outside nonfederal funding sources. ARPA-E is required by statute to achieve its goals through energy technology projects that, among other things, accelerate transformational technological advances in areas that industry by itself is not likely to undertake because of technical and financial uncertainty. At the same time, the agency’s director is required to ensure, to the maximum extent practicable, that its activities are coordinated with, and do not duplicate the efforts of, programs and laboratories within DOE and other relevant research agencies. Table 2 shows the program’s budget requests and appropriations since receiving an appropriation through the Recovery Act in fiscal year 2009. In January 2012, we reported that ARPA-E uses several selection criteria in making awards, although its requirements for information on private sector funding could be improved. 
For example, we reported that ARPA- E’s program directors spent time and resources to determine the extent of prior funding for proposed ARPA-E projects. Also, our review suggested that most ARPA-E projects could not have been funded solely by the private sector. Furthermore, according to ARPA-E officials and documents, agency officials have taken steps to coordinate with other DOE offices in advance of awarding ARPA-E funds to help avoid duplication of efforts. We recommended that ARPA-E consider providing applicants guidance with a sample response explaining prior sources of funding, requiring applicants to provide letters from investors explaining why they are not willing to fund proposed projects, and using third-party venture capital data to identify applicants’ prior funding. DOE agreed with our recommendations. Under the Energy Policy Act of 2005, the Loan Guarantee Program (LGP) was created to provide loan guarantees for innovative energy technologies. Until February 2009, the LGP was working exclusively under section 1703 of the act, which authorized loan guarantees for new or innovative energy technologies that had not yet been widely commercialized in the United States. At that time, Congress had authorized DOE to guarantee approximately $42.5 billion in section 1703 loans. Although Congress had provided funds to DOE to cover the program’s administrative costs, it had not appropriated funds to pay the “credit subsidy costs” of these guarantees. Credit subsidy costs are the government’s estimated net long-term cost, in present value terms, of direct or guaranteed loans over the entire period the loans are outstanding (not including administrative costs). In February 2009, the Recovery Act amended the Energy Policy Act of 2005, adding section 1705, which made certain commercial technologies eligible for loan guarantees if they could start construction by September 30, 2011. 
The Recovery Act also provided $6 billion in appropriations—later reduced by transfer and rescission to about $2.5 billion—to cover DOE’s credit subsidy costs for an estimated $18 billion in additional loan guarantees. In fiscal year 2011, Congress appropriated about $170 million to cover subsidy costs of section 1703 loan guarantees for the first time. Table 3 shows the program’s budget requests and appropriations since fiscal year 2008. In March 2012, we reported that DOE had made $15 billion in loan guarantees and conditionally committed to an additional $15 billion as of September 30, 2011. However, we also reported that the program does not have the consolidated data on application status needed to facilitate efficient management and program oversight. In addition, the program adhered to most of its established process for reviewing applications, but we reported that its actual process differed from its established process at least once on 11 of the 13 applications we reviewed. DOE agreed with our recommendations to (1) ensure that its records management system contains documents supporting past decisions, as well as those in the future, and (2) regularly update program policies and procedures. DOE disagreed with our recommendation to commit to a timetable to fully implement a consolidated system to provide information on program applications and measure overall program performance, stating that it did not agree to a hard timetable for implementing the recommendation. We continue to believe that DOE should commit to developing such a system in a timely fashion. The Recovery Act appropriated $5 billion for the Weatherization Assistance Program to help low-income families reduce their energy bills by making long-term energy efficiency improvements to their homes. This appropriation represented a significant funding increase for a program that had received about $225 million per year in recent years. 
As of February 28, 2012, we found that DOE had awarded 58 state-level grant recipients approximately $4.84 billion to implement the Weatherization Assistance Program under the Recovery Act, and these recipients reported spending about $4.22 billion and weatherizing 709,138 homes, exceeding the program’s production target of 607,000 homes. Table 4 shows the program’s budget requests and appropriations since fiscal year 2007. In December 2011, we reported that some grant recipients had been able to exceed their production targets because of a lower average cost of weatherizing homes and lower training and technical assistance expenses than anticipated. In addition, most recipients reported experiencing more implementation challenges in the first year of the Recovery Act than in the third year. We also reported that a long-term Weatherization Assistance Program goal is to increase energy efficiency through cost-effective weatherization work and that March 2010 cost- benefit estimates from an Oak Ridge National Laboratory study indicate that energy savings will likely exceed the program’s costs. That is, every $1 spent on the weatherization program for 2009 through 2011 would result in almost $2 in energy savings over the useful life of the investment; the laboratory plans to issue more definitive estimates in 2013. Also in our December 2011 report, we discussed actions DOE took in response to a recommendation we made in a May 2010 report, that DOE clarify production targets and funding deadlines, among other things; DOE officials provided documentation concerning targets but did not provide clarification of the consequences for not meeting the targets. In response to concerns about whether or not program requirements were being met, our May 2010 report included recommendations to DOE to clarify its guidance, production targets, funding deadlines, and associated consequences. 
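As a rough check on the production figures cited above, the spending and home counts imply an average cost per weatherized home. These derived numbers are our own back-of-the-envelope illustration, not figures from the report:

```python
spending = 4.22e9   # reported Recovery Act spending by recipients
homes = 709_138     # homes reported weatherized
target = 607_000    # program production target

avg_cost_per_home = spending / homes   # roughly $5,950 per home
pct_of_target = homes / target         # roughly 117% of the target
```

A lower-than-anticipated average cost per home is consistent with the explanation above for why some recipients exceeded their production targets.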
DOE’s program guidance stated that recipients could spend Recovery Act funds until March 31, 2012. According to DOE, several grant recipients had requested additional time to spend these funds. In September 2011, between the issuance of our two reports, the Office of Management and Budget released a memorandum stating that Recovery Act funds should be spent by September 2013. In our December 2011 report, we found that, as of November 2011, DOE had not determined if an extension would be available for grant recipients. In January 2012, DOE issued guidance stating that it was offering grant recipients an opportunity to modify the original March 31, 2012, funding deadline.

In December 2007, Congress enacted the Energy Independence and Security Act of 2007, which mandated more stringent average fuel economy standards for newly manufactured passenger vehicles sold in the United States by model year 2020 and established in DOE the Advanced Technology Vehicles Manufacturing (ATVM) loan program to provide loans for projects to produce more fuel-efficient passenger vehicles and their components. The ATVM loan program is authorized to provide up to $25 billion in such loans, and Congress also provided $7.5 billion to pay the required credit subsidy costs of the loans, as shown in table 5. In February 2011, we reported that DOE had used 44 percent of the $7.5 billion allocated to pay credit subsidy costs, which was more than initially anticipated (GAO-11-145). These higher credit subsidy costs were, in part, a reflection of the risky financial situation of the automotive industry at the time the loans were made. As a result of the higher credit subsidy costs, we reported that the program may be unable to loan the full $25 billion allowed by statute.
We also reported that the ATVM loan program had established procedures for overseeing the financial and technical performance of borrowers and had begun using the procedures to oversee the loans; at the time of our report, however, it had not yet engaged the engineering expertise needed for technical oversight, as called for by its procedures. As a result, we reported that without qualified oversight to analyze the information submitted by the borrowers and to provide technical monitoring, DOE could not be adequately assured that the borrowers were delivering the vehicle and component projects as required by the loan agreements. In addition, we reported that DOE had not developed sufficient performance measures that would enable it to fully assess progress toward achieving its program goals. DOE disagreed with our recommendations that the agency accelerate its efforts to engage the expertise needed for effective oversight and develop sufficient performance measures, although we continue to believe that the agency should take these actions.

In February 2012, we reported that DOE had implemented 92 renewable energy-related initiatives in fiscal year 2010. These initiatives supported every renewable energy source in our review, including bioenergy, solar, and wind, and most initiatives supported more than a single energy source. In addition, more than 70 percent of these initiatives supported both the public and private sectors. These initiatives were distributed across multiple federal responsibilities, with the largest percentage of DOE’s initiatives supporting research and development. Approximately one-third (36) of the 106 existing federal renewable energy-related initiatives that received additional funding under the Recovery Act were implemented by DOE, primarily involving research and development of new renewable energy technologies.
Overall, the Recovery Act affected 49 DOE initiatives: 7 were established, 36 received more funding, and 11 expanded or had their scope changed. Several of the renewable energy-related initiatives we reviewed have expired or will expire, in full or in part, because of the expiration of legislative authority, depletion of available appropriations, or some other expiration under the law as written as of fall 2011.

We have previously reported on several areas at DOE that may provide opportunities for achieving increased savings and enhancing government revenue. Areas that may provide opportunities for increased savings include (1) contractor support costs and (2) potential overlap of effort across certain activities for programs to reduce diesel emissions from mobile sources. An area that may provide an opportunity for enhanced government revenue concerns DOE’s uranium inventories, which are worth potentially billions of dollars to commercial nuclear power plants that can use the material as fuel in their reactors. DOE spends 90 percent of its annual budget—which totaled $27 billion for fiscal year 2011—on the contractors that carry out its diverse missions and operate its sites nationwide. In January 2012, we reported that DOE and contractors at some DOE sites, including the Office of Science, have been carrying out a variety of efforts since 2007 to streamline and reduce the costs of sites’ support functions. Such functions include procuring needed goods and services; recruiting and hiring workers; managing health and retirement benefits; maintaining facilities and infrastructure; and providing day-to-day accounting, information technology, and security. In addition, we found that contractors at sites have undertaken their own streamlining and cost-reduction efforts, ranging from automating hiring, training, or other human resources activities to reducing employee health care and pension costs.
Also in February 2012, in our annual report on overlap and duplication of federal programs that may result in inefficient use of taxpayer funds, we recommended that DOE assess whether further opportunities could be taken to streamline support functions, estimated to cost over $5 billion, at its contractor-managed laboratories and other sites, including Office of Science sites, in light of contractors’ historically fragmented approach to providing these functions. DOE agreed with the recommendation. Diesel engines play a vital role in public transportation, construction, agriculture, and shipping, largely because they are more durable and reliable than gasoline-powered engines, as well as 25 to 35 percent more energy efficient. However, exhaust from diesel engines is a pervasive and harmful form of air pollution that affects public health and the environment. Table 6 shows funding, by program, for DOE activities to reduce diesel emissions from mobile sources. In February 2012, we reported that federal grant and loan funding for activities that reduce mobile source diesel emissions is fragmented across 14 programs at DOE, the Department of Transportation (DOT), and the Environmental Protection Agency (EPA). Moreover, we reported that each of these programs overlaps with at least one other program in the specific activities they fund, the program goals, or the eligible recipients of funding. In addition, we found that these programs generally do not collaborate. We previously reported that uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. To help ensure the effectiveness and accountability of federal funding that reduces diesel emissions, we recommended that DOE, DOT, and EPA establish a strategy for collaboration in reducing mobile source diesel emissions. DOE agreed with our recommendation. Uranium is used in fuel for nuclear power plants. 
Twenty percent of our nation’s electricity comes from nuclear power, and concern over climate change driven by the ever-growing demand for fossil fuels has sparked interest in increasing the use of nuclear power, despite ongoing concerns about the safety of such power in light of the March 2011 nuclear accident in Japan. In September 2011, we reported that a healthy domestic uranium industry is considered essential to ensuring that commercial nuclear power remains a reliable option for supporting the nation’s energy needs. DOE maintains large inventories of uranium that it no longer requires for nuclear weapons or as fuel for naval nuclear propulsion reactors. A large portion of these inventories consists of depleted uranium hexafluoride, otherwise known as “tails”—a byproduct of the uranium enrichment process. Recent increases in uranium prices could transform these tails into a lucrative source of revenue for the government (GAO-08-606R). In addition, DOE maintains thousands of tons of natural uranium, which likewise could be sold to utilities or others for additional revenue. Our September 2011 report also examined DOE’s uranium barter transactions. We reported that while DOE received no cash from the transactions, our review found that the agency allowed a sales agent to keep cash from the sales, which DOE would otherwise have owed to the United States Treasury, thus violating the miscellaneous receipts statute. We therefore reported that Congress should consider providing DOE with explicit authority to barter excess uranium and to retain the proceeds from barters, transfers, and sales. Likewise, Congress could direct DOE to sell uranium for cash and make those proceeds available by appropriation for decontamination and decommissioning expenses at DOE’s uranium enrichment plants. Congress has taken some actions in response to our work.
Chairman Stearns, Ranking Member DeGette, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions you may have at this time. For further information regarding this testimony, please contact Frank Rusco at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Kim Gianopoulos, Chad M. Gorman, Carol Herrnstadt Shulman, Kiki Theodoropoulos, Jeremy Williams, Michelle R. Wong, and Arvin Wu made key contributions to this testimony. Department of Energy: Advanced Research Projects Agency-Energy Could Improve Its Collection of Information from Applications, GAO-12-407T (Washington, D.C.: Jan. 24, 2012). 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue, GAO-12-342SP (Washington, D.C.: Feb. 28, 2012). Diesel Pollution: Fragmented Federal Programs That Reduce Mobile Source Emissions Could Be Improved, GAO-12-261 (Washington, D.C.: Feb. 7, 2012). Renewable Energy: Federal Agencies Implement Hundreds of Initiatives, GAO-12-260 (Washington, D.C.: Feb. 27, 2012). Department of Energy: Additional Opportunities Exist to Streamline Support Functions at NNSA and Office of Science Sites, GAO-12-255 (Washington, D.C.: Jan. 31, 2012). Department of Energy: Advanced Research Projects Agency-Energy Could Benefit from Information on Applicants’ Prior Funding, GAO-12-112 (Washington, D.C.: Jan. 13, 2012). Recovery Act: Progress and Challenges in Spending Weatherization Funds, GAO-12-195 (Washington, D.C.: Dec. 16, 2011). DOE Loan Guarantees: Further Actions Are Needed to Improve Tracking and Review of Applications, GAO-12-157 (Washington, D.C.: Mar. 12, 2012). Excess Uranium Inventories: Clarifying DOE’s Disposition Options Could Help Avoid Further Legal Violations, GAO-11-846 (Washington, D.C.: Sept. 26, 2011).
Nuclear Material: DOE’s Depleted Uranium Tails Could be a Source of Revenue for the Government, GAO-11-752T (Washington, D.C.: June 13, 2011). Department of Energy: Advanced Technology Vehicle Loan Program Needs Enhanced Oversight and Performance Measures, GAO-11-745T (Washington, D.C.: June 9, 2011). Recovery Act: Status of Department of Energy’s Obligations and Spending, GAO-11-483T (Washington, D.C.: Mar. 17, 2011). Department of Energy: Advanced Technology Vehicle Loan Program Implementation Is Under Way, but Enhanced Technical Oversight and Performance Measures Are Needed, GAO-11-145 (Washington, D.C.: Feb. 28, 2011). This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Understanding the impact of budget-related considerations has become particularly important as Congress and the administration seek to decrease the cost of government while improving its performance. In recent years, Congress has authorized large increases in funding for DOE. For example, the Recovery Act, which Congress enacted to, among other things, preserve and create jobs and promote economic recovery, provided DOE with more than $41.7 billion in areas such as energy efficiency, renewable energy, and environmental cleanup. This testimony focuses on several key programs and related budget issues at DOE, including (1) the management of selected programs expanded or created by recent funding increases and (2) potential opportunities to achieve savings or enhance revenue. This testimony is based on prior GAO reports from February 2011 to March 2012, and updated with readily available data from DOE. Recent GAO work found that funding increases have expanded or created Department of Energy (DOE) programs with varying results. For example: Advanced Research Projects Agency-Energy (ARPA-E) awards grants to projects that help develop high-risk energy technologies. Since fiscal year 2009 the program has received $855 million to fund energy projects that industry by itself was not likely to undertake. GAO found that ARPA-E uses several selection criteria in awarding funds, but its requirements for information on private funding could be improved. The Loan Guarantee Program provides loan guarantees for innovative energy technologies. DOE has made about $15 billion in loan guarantees and is authorized to make up to $34 billion in additional loan guarantees. GAO found that the program does not have sufficient data to facilitate oversight, and its actual process for reviewing applications has differed from the established process. 
The Weatherization Assistance Program helps low-income families reduce their energy bills by making long-term energy efficiency improvements to their homes. The American Recovery and Reinvestment Act of 2009 (Recovery Act) provided $5 billion to enhance the program’s ability to make energy efficiency improvements to low-income family homes. GAO made recommendations to DOE to clarify the program’s production targets (e.g., the number of homes weatherized) and guidance. The Advanced Technology Vehicles Manufacturing Loan Program provides loans for projects to produce more fuel-efficient passenger vehicles and their components. DOE can make up to $25 billion in loans for fuel-efficient vehicles; at the time of GAO’s review, DOE could not be assured that projects would be delivered as agreed. GAO also reported that improvements at DOE may provide opportunities for increasing savings and enhancing revenue. For example: Contractor support costs. DOE’s management of contractors, who operate DOE sites and represent 90 percent of DOE’s budget, has historically been decentralized, or fragmented. This adds to inefficiencies in support functions. Since 2007, DOE and contractors at some DOE sites have undertaken efforts to streamline these functions. GAO recommended that DOE assess whether further opportunities could be taken to streamline such functions. Diesel emissions. DOE, the Department of Transportation, and the Environmental Protection Agency receive federal funding to reduce diesel emissions from mobile sources (14 programs in all), and these programs also overlap in certain activities. DOE received $572 million for its 3 programs. GAO recommended that the three agencies establish a strategy for collaboration to reduce diesel emissions from mobile sources. Excess uranium inventories. Uranium is used in fuel for nuclear power plants. GAO reported DOE’s excess uranium inventories could be worth billions of dollars in additional revenue as fuel for commercial nuclear power plants.
GAO is making no new recommendations in this testimony but continues to believe that implementing the recent recommendations made in the reports discussed should improve DOE program management, achieve savings, and enhance revenue. DOE has generally agreed with these recommendations but disagreed on certain points, such as the timing of their implementation.
Recent national policies and federal preparedness efforts have highlighted the importance of enhancing the resilience of the nation’s critical infrastructure, including the electricity grid. Presidential Policy Directive 21, issued in February 2013, established national policy on critical infrastructure security and resilience. The directive expanded the nation’s focus from protecting critical infrastructure against terrorism to protecting critical infrastructure and increasing its resilience against all hazards, including natural disasters, terrorism, and cyber incidents. In addition, the directive recognizes that proactive and coordinated efforts are necessary to strengthen and maintain critical infrastructure that is secure and resilient. It also identifies 16 critical infrastructure sectors, including the energy sector—which encompasses the electricity grid—and designates lead federal agencies to coordinate and prioritize security and resiliency activities in each sector. DOE was designated as the lead federal agency for the energy sector. Reflecting the shift in focus in Presidential Policy Directive 21, the December 2013 update to the National Infrastructure Protection Plan elevated security and resilience to be the primary aim of federal critical infrastructure planning efforts. The update established a set of broad national goals for critical infrastructure security and resilience and directed that each of the 16 critical infrastructure sectors update its sector-specific plan—a planning document that complements and tailors the application of the National Infrastructure Protection Plan to the specific characteristics and risks of each critical infrastructure sector. In response, DOE in 2015 led the development of an updated Energy Sector-Specific Plan to help guide and integrate efforts to improve the security and resilience of the energy sector’s critical infrastructure, including the electricity grid. 
The plan identified three federal priorities for enhancing the security and resilience of the grid: (1) developing and deploying tools and technologies to enhance awareness of potential disruptions, (2) planning and exercising coordinated responses to disruptive events, and (3) ensuring actionable intelligence on threats is communicated between government and industry in a time-sensitive manner. In addition, in 2016 the United States developed an action plan and issued a joint strategy with Canada for strengthening the security and resilience of the North American electricity grid. As table 1 shows, federal roles and responsibilities related to enhancing the resilience of the electricity grid are defined in policy and law, including Presidential Policy Directive 21 and certain provisions of the Fixing America’s Surface Transportation Act of 2015. States and industry also play key roles in enhancing the resilience of the electricity grid. States, through their public utility commissions, regulate retail electricity service and facility planning and siting. States also enact policies that can affect the resilience of the portion of the electricity grid that is within their borders. Industry owns and operates most of the electricity grid, so the actions that owners and operators take to protect and maintain their assets can contribute to grid resilience. In addition, owners and operators of the electricity grid are responsible for complying with mandatory reliability standards that can contribute to grid resilience. DOE, DHS, and FERC reported implementing 27 grid resiliency efforts since 2013 that supported a range of activities and that addressed multiple hazards and federal priorities for enhancing the resilience of the electricity grid. Agency officials reported a variety of results stemming from these efforts. 
In response to our questionnaire, DOE, DHS, and FERC officials reported implementing 27 efforts since 2013 aimed at enhancing the resilience of the electricity grid. Of these 27 efforts, 12 were FERC regulatory efforts tied to the agency’s role in reviewing and approving mandatory reliability standards for the bulk power system. FERC officials also reported that the agency oversaw another effort in which it acted on a petition by a private company to provide regulatory findings related to the company’s plan to establish a subscription service for spare critical transmission equipment, including transformers. The remaining 14 efforts—11 implemented by DOE and 3 by DHS—were programmatic in nature. Federal funding for the DOE and DHS grid resiliency activities from fiscal year 2013 through fiscal year 2015 totaled approximately $240 million. The 27 efforts that DOE, DHS, and FERC officials reported implementing supported a range of activities, addressed a variety of potential threats and hazards, and addressed federal priorities for enhancing the resilience of the electricity grid (see app. II for more information on each effort). As table 2 shows, the reported federal grid resiliency efforts supported a range of activities, with the most prevalent being: emergency preparedness and response activities (e.g., providing coordination, planning, training, and exercise programs to prepare for potential disaster operations; providing situational awareness during an event; coordinating response efforts; and helping facilitate system restoration); research and development activities (e.g., pursuing tools, technologies, and demonstrations aimed at bringing new and innovative technologies to maturity and helping them transition to industry); modeling, analytics, and risk assessment activities (e.g., modeling, simulation, and analysis of electricity grid risks and vulnerabilities); and standard-setting activities (e.g., the development or approval of standards for industry).
For example, DOE reported that its Strategic Transformer Reserve effort supported emergency preparedness and response activities through planning for the potential loss of large, high-power transformers by evaluating the feasibility of establishing a reserve of those transformers for use during an emergency. Similarly, FERC reported that its efforts supported emergency preparedness and response activities as well as activities to set standards by approving a reliability standard that requires owners and operators of the electricity grid to develop and implement procedures to mitigate the potential effects of geomagnetic disturbances on the bulk power system. In addition, DHS reported that its Recovery Transformer Program supported research and development activities by designing and demonstrating a type of rapidly deployable large, high-power transformer for use in the event of the unexpected loss of multiple large, high-power transformers. As table 3 shows, the agencies reported that their federal grid resiliency efforts addressed a range of threats and hazards, including cyberattacks (i.e., computer-related attacks); physical attacks (e.g., attacks on physical infrastructure such as targeted shooting of transformers or intentional downing of power lines); natural disasters (e.g., extreme weather events and geomagnetic disturbances); and operational accidents (e.g., unintentional equipment failures or operator error). For example, DOE and FERC reported implementing several grid resiliency efforts to address the threat of cyberattacks. DOE’s Electricity Subsector Cybersecurity Capability Maturity Model effort was a public-private partnership that developed a tool kit modeled on a common set of industry-vetted cybersecurity practices; the effort made this tool kit available to the electricity industry to help owners and operators evaluate, prioritize, and improve their cybersecurity capabilities.
Similarly, of the 12 reliability standards FERC approved, several require owners and operators of the electricity grid to take actions to mitigate the threat posed by cyberattacks on the bulk power system. As table 4 shows, the reported federal grid resiliency efforts collectively addressed each of the three federal priorities for enhancing the security and resilience of the electricity grid that were identified in the 2015 Energy Sector-Specific Plan. For example, DHS’s Solar Storm Mitigation effort addressed the federal priority of developing and deploying tools and technologies to enhance awareness of potential disruptions; the effort addressed this priority by providing owners and operators of the electricity grid with advanced and actionable information about anticipated impacts of a solar storm. Similarly, DOE’s Cybersecurity Risk Information Sharing Program addressed the federal priority to ensure actionable intelligence on threats is communicated between government and industry in a time-sensitive manner; the effort addressed this priority by facilitating the timely sharing of unclassified and classified cybersecurity threat information and developing situational awareness tools to better identify, prioritize, and coordinate the protection of critical electricity infrastructure. In their questionnaire responses, agency officials reported a variety of results from both ongoing and completed federal grid resiliency efforts. As shown in the selected examples in table 5, these results included the development, and in some cases the deployment, of new technologies and analytical tools; the planning and exercising of coordinated responses to disruptive events; and improved coordination and information sharing between the federal government and industry related to potential cyberattacks and other threats or hazards to the electricity grid.
We found that the 27 federal efforts to enhance the resilience of the electricity grid were fragmented across DOE, DHS, and FERC and overlapped to some degree, but we did not find any instances of duplication among these efforts. In their questionnaire responses, agency officials reported engaging in a number of activities and mechanisms to coordinate their efforts and avoid duplication. These activities and mechanisms include serving as members on formal coordinating bodies that bring together federal, state, and industry stakeholders in the energy sector to discuss resiliency issues on a regular basis; contributing to the development of federal plans and reviews that address grid resiliency gaps and priorities; and participating in direct coordination activities at the program level. According to our analysis of agency questionnaire responses, federal grid resiliency efforts were fragmented and overlapped to some degree, but none were duplicative. In addition, industry group representatives we interviewed did not identify any instances of duplication among federal grid resiliency efforts. Fragmentation. The 27 federal efforts to enhance the resilience of the electricity grid were fragmented in that they were implemented by three different agencies—DOE, DHS, and FERC—and addressed the same broad area of national need: enhancing the resilience of the electricity grid. We have previously reported that fragmentation has the potential to result in duplication of resources. For example, fragmentation can lead to technical or administrative functions being managed separately by individual agencies, when these functions could be shared among programs. However, we also have reported that fragmentation, by itself, is not an indication that unnecessary duplication of efforts or activities exists. 
There can be advantages to having multiple federal agencies involved in a broad area of national need; for example, agencies can tailor initiatives to suit their specific missions and needs, among other things. In the case of federal grid resiliency efforts, we found that DOE, DHS, and FERC generally have tailored their efforts to contribute to their specific missions and needs. For example, DOE’s 11 efforts related to its strategic goal to support a more secure and resilient U.S. energy infrastructure; DHS’s 3 efforts addressed its strategic priority to enhance critical infrastructure security and resilience by, among other things, promoting resilient critical infrastructure design; and FERC’s 13 efforts related to the agency’s roles in reviewing and approving reliability standards and regulating the interstate transmission of electricity. Moreover, fragmentation of federal grid resiliency efforts within agencies is limited—10 of the 11 DOE efforts, all 13 FERC efforts, and all 3 DHS efforts were implemented by one organization within each respective agency. Overlap. We found that 23 of the 27 federal grid resiliency efforts overlapped to some degree with at least one other effort in that they addressed similar goals. These overlaps included: 12 efforts with similar goals related to enhancing the cybersecurity of the grid; 4 with similar goals related to enhancing the resilience and availability of large, high-power transformers; 3 with similar goals related to enhancing the grid’s resilience to geomagnetic disturbances; 2 with similar goals related to enhancing energy storage technology; and 2 with similar goals related to enhancing the resilience of the grid’s distribution system. As figure 1 illustrates, we also found that all but one federal grid resiliency effort overlapped to some degree with at least one other effort by supporting similar types of activities to achieve their goals. Duplication.
We did not find any instances of duplication among the 27 federal grid resiliency efforts because none of the efforts had the same goals or engaged in the same activities. For example, although 4 efforts overlapped in that they had similar goals related to enhancing the resilience of large, high-power transformers and improving their availability, those efforts were not duplicative because their goals were not the same. Specifically, DHS’s Recovery Transformer Program, begun in 2008 and completed in 2014, aimed to design and demonstrate a rapidly deployable large, high-power transformer that could be used to enable rapid recovery of the grid in the event of multiple large, high-power transformer failures. In contrast, DOE’s Transformer Resilience and Advanced Components Program, launched in 2016, is focused on ensuring the resilience of aging transformers and accelerating the development, demonstration, and deployment of next-generation transformer components. Furthermore, DOE’s Strategic Transformer Reserve effort is an analytical and planning activity with a goal of developing a plan for Congress related to establishing a strategic transformer reserve. Similarly, a fourth effort led by FERC was distinct from the other three efforts in that its goal was to act on a petition from a private company for regulatory findings related to the company’s plan to establish a subscription service for spare critical transmission equipment, including transformers. In their questionnaire responses, DOE, DHS, and FERC reported coordinating with each other on their federal grid resiliency efforts through a variety of activities and mechanisms. In particular, agency officials associated with all of the programmatic efforts that we identified as having overlapping characteristics (in that they supported similar goals and types of activities) reported coordinating with other federal agencies. 
Furthermore, many reported coordinating their efforts with states, and most also reported coordinating their efforts with industry. Coordination is important because, as we have previously reported, it can preserve scarce funds and enhance the overall effectiveness of federal efforts. We also have previously reported that coordination across programs may help address fragmentation, overlap, and duplication. We found that coordination activities and mechanisms among DOE, DHS, and FERC were consistent with key practices we have previously identified that can help enhance and sustain federal agency coordination, such as (1) defining and articulating a common outcome; (2) establishing joint strategies, which helps align activities, core processes, and resources to accomplish a common outcome; (3) leveraging resources, which helps obtain additional benefits that would not be available if agencies or offices were working separately; and (4) developing mechanisms to monitor, evaluate, and report on results. We analyzed and grouped into seven categories the various coordination activities and mechanisms that agency officials reported in their questionnaire responses. These categories and examples of specific activities are: Participating in formal coordinating bodies. Agency officials reported participating in several formally established coordinating bodies. In particular, DOE and DHS officials identified the Electricity Subsector Coordinating Council and the Energy Sector Government Coordinating Council as key mechanisms that help coordinate grid resiliency efforts across federal agencies and with states and industry stakeholders. 
According to the Electricity Subsector Coordinating Council’s charter, the council’s purpose includes coordinating activities and initiatives designed to improve the reliability and resilience of the electricity subsector, including the electricity grid, and serving as the principal liaison between the council’s membership and the Energy Sector Government Coordinating Council. The Energy Sector Government Coordinating Council is the government counterpart of the Electricity Subsector Coordinating Council, and its purpose is to enable interagency and cross-jurisdictional coordination on planning, implementing, and executing resilience programs for the nation’s critical energy infrastructure. Agency officials told us that federal grid resiliency efforts and their results are discussed at meetings of these two councils as a way to share information, coordinate efforts, and avoid duplication. We have previously found that federal programs that contribute to the same or similar results should collaborate to ensure that goals are consistent and, as appropriate, program efforts are mutually reinforcing. Contributing to federal planning efforts. Agency officials reported contributing to federal plans and reviews that addressed grid resiliency gaps and priorities. For example, DOE and DHS officials said they contributed to the development of the 2015 Quadrennial Energy Review, which, among other things, assessed the vulnerabilities of the electricity grid and recommended ways to enhance its resilience. Agency officials also told us that they collaborated on the development of the 2015 Energy Sector-Specific Plan, which identified three federal priorities for enhancing the security and resilience of the electricity grid. 
We have previously reported that it is important for collaborating agencies to establish strategies that work in concert with those of their partners or that are joint in nature, because such strategies help align the agencies’ activities to accomplish a common outcome. Maintaining a record of federal, state, and industry efforts. DOE officials reported that the agency maintains a record of federal, state, and industry critical energy sector infrastructure programs and initiatives; this record includes federal grid resiliency efforts. Officials told us that they update the record, which was created in 2013, as new programs and initiatives are identified at meetings of the Electricity Subsector Coordinating Council and the Energy Sector Government Coordinating Council. DOE officials said that they use the record as an internal tool for tracking energy-sector programs and initiatives and as a means to share information about those efforts with federal, state, and industry stakeholders, as needed. We have previously found that it is important for federal agencies engaged in collaborative efforts to create the means to monitor and evaluate their efforts. Furthermore, we have concluded that developing and maintaining a record of federal efforts with similar goals can improve visibility over the full range of those efforts and reduce the potential for duplication. Participating in formal joint efforts. Some agency officials reported that their grid resiliency efforts were joint efforts with other federal agencies or industry partners. For example, DHS officials reported that both the Resilient Electric Grid Program and the Recovery Transformer Program were jointly funded by DHS and industry under formal agreements. Similarly, DOE officials reported that the Cybersecurity Risk Information Sharing Program was a formal joint effort of DOE, the federal intelligence community, and NERC.
We have previously found that by leveraging partner resources, agencies can obtain additional benefits that would not be available if they worked separately. Soliciting input from stakeholders. Some agency officials reported formally soliciting input on their grid resiliency efforts from federal, state, and industry stakeholders. For example, FERC officials reported that they formally seek comments on proposed reliability standards and routinely receive comments from federal, state, and industry stakeholders. FERC officials said that the agency considers these comments when determining whether to approve a reliability standard and, as a result of these comments, in some cases directs NERC to make changes in proposed standards. Similarly, DOE officials responsible for the Strategic Transformer Reserve effort reported seeking input from relevant federal agencies—including DHS, the Department of Defense (DOD), and FERC—states, industry, and others as they developed their analysis. Sponsoring and participating in conferences, webinars, and workshops. Agency officials reported sponsoring and participating in conferences, webinars, and workshops that included discussions about grid resiliency priorities and how to address those priorities among federal, state, and industry stakeholders. For example, officials who implement DOE’s Electric Distribution Grid Resilience Research and Development Program reported that they held a workshop with stakeholders to define in greater detail research and development needs related to the distribution grid’s resilience. We have previously found that collaboration can help agencies define and articulate the common federal outcome. Coordinating directly through agency staff. 
Agency officials also reported that agency staff responsible for grid resiliency efforts pursued a number of informal activities to directly coordinate these efforts with related federal and industry efforts; these activities included periodic meetings, telephone calls, and e-mails to coordinate and share information. We have previously reported that frequent communication among collaborating agencies is a means to facilitate working across agency boundaries. We provided a draft copy of this report to DOD, DOE, DHS, and FERC for review and comment. DOE, DHS, and FERC provided technical comments, which we incorporated as appropriate. DOD indicated it had no comments on the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Defense, Energy, and Homeland Security; the Chairman of FERC; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. We identified 27 efforts across three agencies—the Department of Energy (DOE), the Department of Homeland Security (DHS), and the Federal Energy Regulatory Commission (FERC)—that aimed to enhance the resilience of the electricity grid. Tables 6, 7, and 8 provide descriptions of the efforts at each agency as agency officials reported in their responses to our questionnaire and as we identified in agency documents. Other key contributors to this report were Jon Ludwigson, Assistant Director; Stephanie Gaines; and David Marroni. 
Important contributions were also made by Ben Atwater, Antoinette Capaccio, Nancy Crothers, Laura Durland, Philip Farah, Cindy Gilbert, Brian Lepore, Dan Royer, Stephen Sanford, Marylynn Sergent, Maria Stattel, Barbara Timmerman, and Greg Wilshusen.
|
In light of increasing threats to the nation's electricity grid, national policies have stressed the importance of enhancing the grid's resilience—its ability to adapt to changing conditions; to withstand potentially disruptive events, such as the loss of power lines; and, if disrupted, to rapidly recover. Most of the electricity grid is owned and operated by private industry, but the federal government has a significant role in promoting the grid's resilience. DOE is the lead agency for federal grid resiliency efforts and is responsible for coordinating with DHS and other relevant federal agencies on these efforts. GAO was asked to review federal efforts to enhance the resilience of the electricity grid. This report (1) identifies grid resiliency efforts implemented by federal agencies since 2013 and the results of these efforts and (2) examines the extent to which these efforts were fragmented, overlapping, or duplicative, and the extent to which agencies had coordinated the efforts. GAO reviewed relevant laws and guidance; identified a list of federal grid resiliency efforts; sent a questionnaire to officials at DOE, DHS, and FERC to collect information on each effort and its results; analyzed questionnaire responses and agency documents to assess whether federal efforts were fragmented, overlapping, or duplicative and how agencies coordinated those efforts; and interviewed agency officials and industry group representatives. This report contains no recommendations. DOE, DHS, and FERC provided technical comments, which GAO incorporated as appropriate. The Department of Energy (DOE), the Department of Homeland Security (DHS), and the Federal Energy Regulatory Commission (FERC) reported implementing 27 grid resiliency efforts since 2013 and identified a variety of results from these efforts. The efforts addressed a range of threats and hazards—including cyberattacks, physical attacks, and natural disasters—and supported different types of activities (see table).
These efforts also addressed each of the three federal priorities for enhancing the security and resilience of the electricity grid: (1) developing and deploying tools and technologies to enhance awareness of potential disruptions, (2) planning and exercising coordinated responses to disruptive events, and (3) ensuring actionable intelligence on threats is communicated between government and industry in a time-sensitive manner. Agency officials reported a variety of results from these efforts, including the development of new technologies—such as a rapidly-deployable large, high-power transformer—and improved coordination and information sharing between the federal government and industry related to potential cyberattacks. Federal grid resiliency efforts were fragmented across DOE, DHS, and FERC and overlapped to some degree but were not duplicative. GAO found that the 27 efforts were fragmented in that they were implemented by three agencies and addressed the same broad area of national need: enhancing the resilience of the electricity grid. However, DOE, DHS, and FERC generally tailored their efforts to contribute to their specific missions. For example, DOE's 11 efforts related to its strategic goal to support a more secure and resilient U.S. energy infrastructure. GAO also found that the federal efforts overlapped to some degree but were not duplicative because none had the same goals or engaged in the same activities. For example, three DOE and DHS efforts addressed resiliency issues related to large, high-power transformers, but the goals were distinct—one effort focused on developing a rapidly deployable transformer to use in the event of multiple large, high-power transformer failures; another focused on developing next-generation transformer components with more resilient features; and a third focused on developing a plan for a national transformer reserve. 
Moreover, officials from all three agencies reported taking actions to coordinate federal grid resiliency efforts, such as serving on formal coordinating bodies that bring together federal, state, and industry stakeholders to discuss resiliency issues on a regular basis, and contributing to the development of federal plans that address grid resiliency gaps and priorities. GAO found that these actions were consistent with key practices for enhancing and sustaining federal agency coordination.
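The distinction GAO draws in this report between fragmentation, overlap, and duplication reduces to a simple pairwise test: two efforts are duplicative only when they share both the same goals and the same activities, and they merely overlap when their goals or activity types are similar. A minimal sketch of that test; the effort records and category labels below are illustrative stand-ins, not GAO's actual inventory:

```python
# Sketch of the pairwise overlap/duplication test described in the report.
# Each effort is a dict with 'goals' and 'activities' as sets of labels;
# the records and labels are illustrative, not GAO's actual data.

def classify_pair(effort_a, effort_b):
    """Classify two efforts as 'duplicative', 'overlapping', or 'distinct'."""
    same_goals = effort_a["goals"] == effort_b["goals"]
    same_activities = effort_a["activities"] == effort_b["activities"]
    if same_goals and same_activities:
        # Same goals AND same activities: true duplication.
        return "duplicative"
    if effort_a["goals"] & effort_b["goals"] or effort_a["activities"] & effort_b["activities"]:
        # Shared goal or activity categories: overlap, not duplication.
        return "overlapping"
    return "distinct"

# The transformer example from the report: a shared goal area, but the
# specific goals differ, so the efforts overlap without duplicating.
recovery_transformer = {"goals": {"transformer resilience", "rapid recovery"},
                        "activities": {"research and development"}}
next_gen_components = {"goals": {"transformer resilience", "next-generation components"},
                       "activities": {"research and development"}}
print(classify_pair(recovery_transformer, next_gen_components))  # overlapping
```

Note that under this test a shared activity type alone (for example, two research and development programs) already produces overlap, which is why GAO found that all but one effort overlapped with at least one other.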
|
The Housing and Community Development Act of 1974 combined seven categorical programs to form the CDBG program. The objective of the program is to develop viable urban communities by providing decent housing and a suitable living environment and expanding economic opportunities, principally for persons of low and moderate income. Program funds can be used on housing, economic development, neighborhood revitalization, and other community development activities. As shown in figure 1, CDBG appropriations have fluctuated over time. After funds are set aside for special purposes such as the Indian CDBG program and allocated to insular areas, the annual appropriation for CDBG formula funding is split so that 70 percent is allocated among eligible metropolitan cities and counties (referred to as entitlement communities) and 30 percent among the states to serve nonentitlement communities. Entitlement communities are (1) principal cities of metropolitan areas; (2) other metropolitan cities with populations of at least 50,000; and (3) qualified urban counties with populations of at least 200,000 (excluding the population of entitled cities). Currently, 1,128 entitlement communities receive CDBG funds, up from 866 entitlement communities in fiscal year 1990; 50 states also receive CDBG funds. HUD distributes funds to entitlement communities and states based on the higher yield from one of two weighted formulas that consider factors such as population, poverty, housing overcrowding, the age of the housing, and any change in an area’s growth in comparison with that of other areas. HUD ensures that the total amount awarded is within the available appropriation by reducing the individual grants on a pro rata basis. Entitlement communities may carry out activities directly or may award funds to subrecipients to carry out agreed-upon activities.
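The allocation mechanics described above (the 70/30 split, each grantee receiving the higher yield of two weighted formulas, and a pro rata reduction to keep the total within the appropriation) can be sketched as follows. The factor weights and community figures are hypothetical placeholders, not HUD's actual formula parameters:

```python
# Hypothetical sketch of the CDBG entitlement allocation mechanics described
# above. The factor weights and community figures are placeholders, not
# HUD's actual formula parameters.

FACTORS = ("pop", "poverty", "overcrowding", "old_housing", "growth_lag")

def dual_formula_share(community, totals):
    """Return the higher of two weighted-formula shares for one community."""
    # Formula A (illustrative weights): population, poverty, overcrowding.
    a = (0.25 * community["pop"] / totals["pop"]
         + 0.50 * community["poverty"] / totals["poverty"]
         + 0.25 * community["overcrowding"] / totals["overcrowding"])
    # Formula B (illustrative weights): poverty, older housing, growth lag.
    b = (0.30 * community["poverty"] / totals["poverty"]
         + 0.50 * community["old_housing"] / totals["old_housing"]
         + 0.20 * community["growth_lag"] / totals["growth_lag"])
    return max(a, b)

def allocate(formula_appropriation, communities):
    """Split the appropriation 70/30, then size entitlement grants."""
    entitlement_pool = 0.70 * formula_appropriation
    state_pool = 0.30 * formula_appropriation
    totals = {k: sum(c[k] for c in communities) for k in FACTORS}
    raw = {c["name"]: dual_formula_share(c, totals) * entitlement_pool
           for c in communities}
    # Because every community takes the *higher* of the two formulas, the raw
    # grants oversubscribe the pool; HUD reduces each grant pro rata to fit.
    scale = entitlement_pool / sum(raw.values())
    return {name: amount * scale for name, amount in raw.items()}, state_pool
```

The pro rata step is what guarantees the grants sum exactly to the entitlement pool even though each grantee individually claims its more favorable formula.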
Subrecipients can be governmental agencies such as public housing authorities or park districts; private nonprofits such as private social service agencies, community development corporations, or operators of homeless shelters; and certain private, for-profit entities that facilitate economic development. Whenever an entitlement community uses a subrecipient, it must enter into a signed, written agreement with that subrecipient that includes a statement of work—which describes the work to be performed, the schedule for completing the work, and the budget—and the recipient’s recordkeeping and reporting requirements. Every activity funded by entitlement communities and states must meet one of three national program objectives. Activities undertaken must (1) principally benefit low- and moderate-income persons, (2) aid in the prevention or elimination of slums or blight, or (3) meet urgent community development needs. Recipients must use at least 70 percent of their funds for activities that principally benefit low- and moderate-income people over a period of 1, 2, or 3 years, as specified by the recipient. Generally, an activity is considered to principally benefit low- and moderate-income people if 51 percent or more of those benefiting meet the definition. However, the CDBG statute includes an exception that enables certain entitlement communities to utilize CDBG funds for “area benefit activities” in census tracts having a low- and moderate-income population of less than 51 percent. Area benefit activities are activities that benefit all of the residents in a particular geographic area, such as a park, community center, or streets. Entitlement communities that may utilize this exception are those that have a limited number of census tracts with a majority low- and moderate-income population, and the exception extends to the 25 percent of census tracts within the entitlement community’s boundaries having the highest percentages of low- and moderate-income persons. 
Recipients can only use their CDBG funds on 26 eligible activities. For reporting purposes, HUD classifies these eligible activities into eight broad categories, as defined in table 1. Some of the activities that can be funded, such as loans for housing rehabilitation, generate program income for recipients that must be used to fund additional activities. There are statutory limitations on the amounts that recipients may spend in two specific areas. Pursuant to provisions in annual appropriations laws, recipients may only use up to 20 percent of their annual grant plus program income on planning and administrative activities. Recipients may also only use up to 15 percent of their annual grant plus program income on public service activities. Entitlement communities comply with these requirements by limiting the amount of funds they obligate for these activities during the program year, while states limit the amount they spend on these activities over the life of the grant. Recipients must submit a strategic plan that addresses the housing, homeless, and community development needs in their jurisdictions at least once every 5 years. The plan covers CDBG and three other formula grant programs administered by HUD—the HOME Investment Partnerships (HOME) Program, the Emergency Shelter Grants (ESG) Program, and the Housing Opportunities for Persons with AIDS (HOPWA) Program. Annually, recipients must submit an action plan that identifies the activities they will undertake to meet the objectives in their strategic plans. At the end of each year, recipients must submit to HUD an annual performance report detailing progress they have made in meeting the goals and objectives outlined in their strategic and action plans. HUD staff use detailed checklists to review recipients’ strategic and annual actions plans as well as their annual performance reports. 
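The two statutory caps just described share the same base, the annual grant plus program income. A minimal sketch of the check; the dollar figures in the example and the function name are hypothetical:

```python
# Sketch of the two statutory spending caps described above. Both are
# measured against the annual grant plus program income; the default cap
# percentages are the statutory 15 and 20 percent, which some grandfathered
# communities are permitted to exceed.

def check_spending_caps(annual_grant, program_income,
                        public_service_spending, admin_planning_spending,
                        public_service_cap=0.15, admin_planning_cap=0.20):
    """Return (public_service_ok, admin_planning_ok) for one program year."""
    base = annual_grant + program_income
    return (public_service_spending <= public_service_cap * base,
            admin_planning_spending <= admin_planning_cap * base)

# Hypothetical grantee: $10M grant, $0.5M program income.
ps_ok, ap_ok = check_spending_caps(10_000_000, 500_000,
                                   1_500_000, 2_200_000)
print(ps_ok, ap_ok)  # True False -- admin/planning exceeds 20% of $10.5M
```

For entitlement communities the two spending inputs would be the amounts obligated during the program year; for states, as the report notes, the comparable test applies to amounts spent over the life of each grant.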
HUD’s Office of Community Planning and Development (CPD) administers the CDBG program through program offices at HUD headquarters and 42 field offices located throughout the United States. The headquarters offices set program policy, while staff in the 42 field offices monitor recipients. Each field office is headed by a CPD director. CPD has a total authorized staff of approximately 800—about 200 at headquarters and 600 in the field. CPD field offices are responsible for a broad range of grant management activities, including: annually reviewing and approving entitlement grantee action plans; preparing and executing grant agreements; reviewing entitlement grantee annual performance reports; managing the homeless program competition, which includes reviewing over 4,500 applications, preparing conditional award letters, and reviewing and approving technical submissions for conditionally approved grants; setting up budgets for each grant in the Line of Credit Control System; executing grant agreements and grant closeout activities; providing technical assistance to entitlement and competitive grantees; recapturing unobligated and unexpended grant funds; and monitoring activities. In September 2005, CPD issued a new monitoring handbook. The handbook states that monitoring is an integral management control technique and that the goal of monitoring is to determine compliance, prevent or identify deficiencies, and design corrective actions to improve or reinforce program participant performance. It contains two chapters on monitoring the CDBG program, and these chapters include 29 exhibits for field office staff to use when monitoring CDBG recipients. HUD staff use two major information systems to monitor the use of CDBG funds—IDIS and GMP. Developed in fiscal year 1996, IDIS is a management information system that consolidates planning and reporting processes across HUD’s four formula grant programs.
The recipients use this system to enter information on their plans, establish projects and activities to draw down funds, and report accomplishments. The GMP system, created in fiscal year 1997, records information such as HUD’s monitoring of recipients, provision of technical assistance, and review of recipients’ plans and performance reports. The system is designed for use by HUD staff to ensure that funds are being expended properly and to provide information on recipient progress. In April 1999, we issued a report on HUD’s oversight of CDBG and CPD’s three other formula grant programs. At that time, we found that HUD’s monitoring did not ensure that the programs’ objectives were being met or that recipients were managing their funds appropriately. We also noted that IDIS did not provide the information necessary to accurately assess recipients’ performance and thus did not compensate for HUD’s breakdowns in monitoring. Specifically, we reported that IDIS (1) provided ample opportunity for major problems with data entry and did not allow such problems to be corrected easily, (2) did not provide timely and accurate information, and (3) had difficulty producing reports. Because of the actions HUD took in response to our recommendations in this report, we removed CPD’s programs from our high-risk list in 2001. We have issued standards for internal control in government that agencies should follow. Internal control helps government program managers achieve desired results through effective stewardship of public resources. Internal control standards provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. 
Two of the standards are (1) risk assessment, where risks are identified and analyzed for their possible effect and (2) monitoring, which assesses the quality of performance over time and ensures that findings are promptly resolved. Another standard is control activities that help ensure that management’s directives are carried out. Examples of such activities include managing an organization’s workforce and establishing and reviewing performance measures and indicators. While the data that HUD collects on CDBG expenditures show that CDBG recipients fund a variety of activities, HUD does not centrally maintain the data needed to determine if recipients are complying with statutory spending limits. According to HUD’s data, CDBG recipients spend the largest percentage of their funding on public improvements and housing activities. Further, recipients report that the vast majority of activities they fund meet the national objective of principally benefiting low- and moderate-income persons; however, 359 recipients are currently eligible for an exception that allows them to expand the definition of low- and moderate-income areas. There are statutory spending limits on public services and administration and planning, but HUD’s information systems do not maintain all the data needed to determine the extent of compliance with these limits. Finally, HUD has implemented a new performance measurement system to improve its ability to obtain consistent data on accomplishments attained with CDBG funds. CDBG recipients spend the largest percentage of their funding on public improvements and housing activities. In fiscal year 2005, recipients spent about $4.8 billion in CDBG funds to address a wide range of local needs. Approximately $508 million (or 10 percent) of these total expenditures were from program income generated by previous CDBG activities. 
As shown in figure 2, CDBG recipients spent 32 percent of their total funds on public improvements and 25 percent on housing in fiscal year 2005. Within the category of public improvements, recipients spent the largest percentage of their funds on water and sewer improvements. Under the housing category, the single activity that received the most funding was single-unit residential rehabilitation. Although both entitlement communities and states devote large amounts of funding to public improvements, figure 3 shows some differences in how they use their CDBG funds. Entitlement communities spend the largest percentage of their CDBG funds on housing activities. In fiscal year 2005, entitlement communities spent 27 percent of their CDBG allocations on housing activities, followed by 24 percent on public improvements, and 17 percent on administration and planning activities. In contrast, states distribute over half of their CDBG funds to public improvements. In fiscal year 2005, states distributed 54 percent of their funds to public improvements, 17 percent to housing activities, and 15 percent to economic development. States and entitlement communities use a similar process to identify CDBG needs, but states also have to determine how they will distribute their funds to nonentitlement communities. How states choose to distribute their funds varies from state to state. For example, Georgia distributes most of its CDBG funding through a competitive process that funds the best projects regardless of activity type. Colorado also uses a competitive process, but it distributes a third of its CDBG funding to housing, a third to business financing, and a third to public facilities and community development. Pennsylvania distributes most of its CDBG funding using a formula method. For examples of how selected states distribute their CDBG funds, see appendix II. 
As a result of the flexibility inherent in the CDBG program, the types of activities that entitlement communities and states fund within each broad category vary considerably. During our site visits, many of the recipients we interviewed stated that the flexibility afforded by CDBG was one of the program’s strengths. Figure 4 illustrates the variety of activities funded by the recipients we visited. More detailed descriptions of various activities funded by the recipients we visited can be found in appendix III. Of the three national program objectives, recipients report that the vast majority of activities they fund under the CDBG program meet the objective of principally benefiting low- and moderate-income persons. However, some recipients can use different criteria when defining low- and moderate-income areas. As shown in figure 5, entitlement communities reported that 91 percent of the activities they funded in fiscal year 2005 principally benefited low- and moderate-income persons; states reported that 96 percent of their activities met this national objective. The remaining activities funded, excluding activities coded for administration and planning, sought to eliminate slums or blight, addressed urgent needs, or were missing a national objective code. For the small percentage of activities missing a code, we could not determine which national objective was met. According to a HUD official, activities missing a national objective code were also administration and planning activities; however, we could not verify this statement based on our analysis of HUD’s IDIS data. A special statutory exception allows certain entitlement communities to count activities benefiting areas in which fewer than 51 percent of residents are low- and moderate-income persons as meeting the corresponding national objective. Under this exception, the 25 percent of census tracts in the recipient’s jurisdiction with the highest percentages of low- and moderate-income persons qualify as meeting the national objective.
For example, if a city or county consists of 40 census tracts, only 4 of which contain 51 percent or more low- and moderate-income persons, that recipient can also consider the 6 census tracts with the next highest percentages of low- and moderate-income persons as low- and moderate-income census tracts. Currently, 359 of the 1,128 cities and urban counties that receive CDBG funds are eligible to use this exception. These recipients’ exception percentages range from a high of 50.9 percent to a low of 18.5 percent. The exception percentage indicates the minimum percentage of low- and moderate-income people that must live in an area for an activity funded in that area to meet the low- and moderate-income national objective. As shown in figure 6, the majority of recipients eligible for the exception had an exception percentage higher than 40 percent; 39 CDBG recipients had a percentage less than 30 percent. Although a recipient is eligible to use this exception, it may not take advantage of it for all of the activities it funds. First, the exception only applies when the activity—such as a park, community center, or streets—serves an identified geographic area. Many activities, such as public services, benefit low- and moderate-income people, not an area. Also, in cases where the recipient has both areas that contain a majority of low- and moderate-income people and areas that qualify for the exception, it may choose to fund only activities that are in its areas with a majority of low- and moderate-income people. HUD does not centrally maintain the data needed to determine if entitlement communities and states are complying with the statutory spending limits on public services and administration and planning. Information on manual adjustments needed to determine compliance can be obtained from the field offices but is not readily available. 
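The tract-counting arithmetic in the example above (40 tracts, the 4 majority low/mod tracts plus the 6 next-highest, for a top quartile of 10) can be sketched as follows. The rounding treatment for tract counts not divisible by four is an assumption here, not taken from the regulation:

```python
import math

# Sketch of the upper-quartile exception described above: the top 25 percent
# of a recipient's census tracts, ranked by low- and moderate-income share,
# qualify for area benefit activities. Rounding the quota up is an
# assumption, not taken from the regulation.

def exception_tracts(lowmod_shares):
    """Given each tract's low/mod-income percentage, return the qualifying
    tract shares and the exception percentage (the share of the last tract
    that still qualifies)."""
    quota = math.ceil(len(lowmod_shares) * 0.25)   # top quartile of tracts
    ranked = sorted(lowmod_shares, reverse=True)
    qualifying = ranked[:quota]
    return qualifying, qualifying[-1]

# The example from the text: 40 tracts, only 4 of them majority low/mod.
shares = [55, 53, 52, 51] + [48, 46, 45, 44, 43, 42] + [30] * 30
qualifying, exception_pct = exception_tracts(shares)
print(len(qualifying), exception_pct)  # 10 42
```

The returned exception percentage plays the role described in the text: it is the minimum low- and moderate-income share an area must have for an activity there to count toward the national objective.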
By law, CDBG recipients may only use up to 15 percent of their funds on public service activities and up to 20 percent on administration and planning. We attempted to use the data in IDIS to assess each entitlement community’s compliance with these spending limits but determined that certain manual adjustments needed to complete the calculations are not saved in IDIS. Entitlement communities enter these manual adjustments into IDIS at the end of each program year for the sole purpose of creating financial summary reports that show, among other things, the two spending limit calculations. Entitlement communities include these reports in the annual performance reports that they submit to HUD’s 42 field offices for review. After they are prepared, the reports are saved in HUD’s mainframe computer for only 5 days due to limited system capacity. With respect to determining state compliance, data are even more limited. IDIS does not currently generate reports that show the spending limit calculations for states. The calculations that are used to determine an entitlement community’s compliance do not work for states because a state’s compliance is determined based on the percentage of each grant that is spent on public services and administration and planning instead of the percentage of each program year’s obligations, as is the case for entitlement communities. Therefore, according to the HUD official who heads the state CDBG program, field staff currently determine compliance with the spending limits during on-site monitoring and when grants are fully spent. However, the official noted that future design enhancements to IDIS will allow HUD to more easily generate information on state compliance with these spending limits. Without a record of data adjustments needed to calculate entitlement community compliance and data on state compliance, HUD cannot provide timely assurance that recipients are adhering to the spending limits.
For example, when information on compliance with the administrative and planning spending limit was recently requested from HUD for a House report, HUD could not provide data that directly addressed the request. The agency provided the data that were readily available but noted that the data could not be used to determine compliance with the spending limit. HUD stated that it would have to collect additional information on certain manual adjustments to give the committee a more accurate picture of compliance with the limit. In the absence of centralized data on all recipients, we requested that HUD contact its field offices to provide data on the extent to which the 100 most populous entitlement communities had complied with the statutory spending limits in program year 2004. These entitlement communities received about one-third of the CDBG funds allocated in fiscal year 2006. Our analysis of the limited data showed that not all of these entitlement communities complied with the statutory spending limits. Of the 100 entitlement communities, 3 exceeded their public service spending limit, and 1 exceeded the administration and planning spending limit. HUD could not provide similar data on the extent to which individual states have complied with the spending limits because, as described earlier, IDIS does not generate reports that track state compliance with the limits. According to the head of the state CDBG program, compliance with the limits has never really been a concern for states because they collectively spend well below the statutory maximums. Some recipients are allowed, due to a special provision, to use more than 15 percent of their funds for public services. By law, entitlement communities that used in excess of 15 percent of CDBG funds received for public service activities in fiscal year 1982 or 1983 are allowed to continue to use the higher of the actual dollar amount or percentage of assistance in either of those years. 
Due to this provision, a total of 41 entitlement communities are allowed to use more than the 15 percent they would have been allowed if they were subject to the cap. For example, the city of Chicago, Illinois, is allowed to use $41 million (48 percent of its fiscal year 2006 allocation) for public services. The city of Seattle, Washington, is allowed to use about 36 percent of its CDBG funds for public services. Congress has also authorized temporary exceptions to the spending limit when warranted by events affecting a specific community. These temporary exceptions are for a limited period of time, such as 5 years, and a limited amount, such as up to 25 percent of the grant amount, unless extended by law. For instance, the city and county of Los Angeles were allowed to exceed the limit for a set period of time in the aftermath of the 1992 Los Angeles civil unrest. Also, in September 2005, HUD issued a suspension of the limit to enable CDBG recipients to use CDBG funds to address emergency expenses associated with the needs of Hurricane Katrina evacuees. The expenses subject to the spending limit on administration and planning do not reflect all of the staff and overhead costs being funded with CDBG. CDBG recipients are allowed by regulation to incorporate into individual activity budgets delivery costs such as architectural and engineering expenses, legal expenses, insurance, permit fees, taxes, and similar expenses if such expenses are directly attributable or integral to carrying out an eligible activity. These expenses are not counted toward the 20 percent administrative and planning spending limit. With the exception of housing rehabilitation administration and code enforcement, HUD does not track staff costs charged to various eligible activities. In fiscal year 2005, CDBG recipients spent $153 million on housing rehabilitation administration and $133 million on code enforcement (about 6 percent of total expenditures). 
While funds charged to planning and administration are presumed to meet the program's national objectives, HUD requires recipients to document that any staff or overhead costs charged to other eligible activities meet a national objective. HUD has established a new performance measurement system to better track accomplishments achieved with CDBG funds. IDIS currently contains data on CDBG-funded accomplishments, but the data are incomplete and inconsistent. First, HUD has not always required recipients to enter accomplishment data; therefore, data on older projects are incomplete. Second, recipients report data differently. For example, some CDBG recipients report the number of persons served by a CDBG-funded activity, while other recipients report the number of times a service is provided. In an effort to address these problems, HUD began verifying the accuracy of CDBG accomplishment data in 2004. To ensure complete and accurate data, HUD periodically reviews the data that recipients enter into IDIS for inconsistencies, inaccuracies, and omissions. HUD then gives the recipients feedback by placing spreadsheets on the Web for each recipient that indicate the fields in IDIS that need correction. To further track program accomplishments, HUD has developed a new performance measurement system for the CDBG program. In March 2006, HUD published performance measures developed in conjunction with a working group composed of community development organizations. HUD and the working group undertook this effort in reaction to an OMB finding that the CDBG program was unable to demonstrate results at the national level. HUD's new outcome performance measurement system has three objectives: (1) creating suitable living environments, (2) providing decent affordable housing, and (3) creating economic opportunities. Under these broad objectives, there are three outcomes: (1) availability and accessibility, (2) affordability, and (3) sustainability. 
The specific outcome indicators that HUD will track include the number of persons assisted by a public service activity, number of housing units rehabilitated, and number and types of jobs created. Recipients could start entering the new performance measurement data in May 2006. To help recipients implement the new performance measurement system, HUD has scheduled 15 regional training sessions that will provide information to recipients on performance measurement principles and the new outcome framework. The first session was held in May 2006, and the last session is scheduled for August 2006. According to HUD, the training sessions will (1) provide information about how recipients can implement the outcome indicators through their local and state procedures for data collection and reporting and (2) discuss entry of the performance data into IDIS. The agenda topics include data quality and how to measure the outcome of various activities such as housing and economic development. For this training, HUD has developed a training manual and guidebook that contains information on measuring outcomes achieved with CDBG funds. The department has made these materials available to all recipients on its Web site. At the close of our review, these activities were too new to assess their effectiveness. While HUD has implemented a risk-based monitoring strategy for the CDBG program, it has not developed a plan to ensure that it has enough staff with the skills needed to conduct monitoring or fully involved its field staff in plans to redesign IDIS, an information system they use to monitor recipients. Consistent with our internal control standards, HUD has established a risk assessment process to identify CDBG recipients for review. HUD’s monitoring strategy calls for its field offices to consider various risk factors when determining which recipients to review because it has limited monitoring resources, and its workload has increased as its staffing levels have decreased. 
For example, 13 of the 42 field offices overseeing CDBG recipients do not have a financial specialist, and 39 percent of CPD field staff is eligible to retire within the next 3 years. Despite these statistics, HUD has not developed a plan to hire staff with needed skills or help manage upcoming retirements. Finally, although IDIS is one of the tools that HUD field staff use to monitor recipients, HUD headquarters has solicited little input from them on efforts to redesign IDIS. HUD's monitoring of the CDBG program focuses on high-risk recipients. Each year, CPD sets a formal monitoring goal. Its goal in fiscal year 2005 was for CPD as a whole and each of its field offices to monitor a minimum of 20 percent of their formula and competitive recipients. The HUD official who set the goal told us that he chose 20 percent to balance government stewardship with available resources, including staff and travel funds. He noted that, with a 20 percent goal, every recipient could conceivably be monitored over a period of 5 years. Overall, CPD met its monitoring goal for fiscal year 2005. CPD's goal was to review 942 recipients, and it completed 977 reviews. Of the 977 reviews, 349 were CDBG reviews. However, two individual field offices did not monitor 20 percent of their recipients. As shown in table 2, CPD has monitored more than 20 percent of its CDBG recipients in recent years. HUD's monitoring policy calls for HUD staff to focus on high-risk recipients when selecting CDBG recipients for review. Consistent with our internal control standards, HUD has developed a formal risk analysis process for its field offices to use when determining which recipients to review. Field office staff rate recipients on various factors that fall under the following four categories: financial, management, satisfaction, and services. The staff total the scores from each factor and assign recipients a final score on a 100-point scale. 
At each field office, a CPD management representative then conducts a review to ensure the validity and consistency of the scores. HUD considers recipients that receive a score of 51 or greater to be high risk, those with a score of 30 to 50 to be medium risk, and those with a score below 30 to be low risk. Recipients that receive a high-risk rating are subject to monitoring, unless a management representative approves an exception. CPD management representatives can approve an exception if (1) the HUD Office of Inspector General is auditing the recipient; (2) they determine that monitoring is administratively infeasible in the current year, given other monitoring actions; or (3) they have other reasons, such as HUD having recently monitored the recipient or having monitored another program administered by the recipient. Field office staff must review high-risk recipients on site unless they reviewed them on site in the last 2 years and the purpose of the monitoring is to validate the implementation of corrective actions. Medium- and low-risk recipients can be reviewed using remote, or off-site, monitoring. Our review of data from GMP, the system that field staff use to record the results of the risk analysis process and any monitoring performed, showed that HUD's field offices followed the risk analysis process in all but 16 cases in fiscal year 2005. For fiscal year 2005, HUD designated 164 recipients as high risk. Of these 164 recipients, GMP data showed that 107 were monitored, 41 were granted an exception, and 16 were neither monitored nor granted an exception. The risk scores assigned to the 16 high-risk recipients that HUD did not monitor or grant an exception ranged from a low of 51 to a high of 80. These 16 recipients received allocations totaling about $145 million in fiscal year 2005. They included Detroit, Michigan ($43 million), Oregon ($16 million), and Honolulu, Hawaii ($11 million). 
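The scoring cutoffs described above map directly to the three risk tiers. The short sketch below (the function name is ours) encodes the 100-point scale as the report describes it: 51 or greater is high risk, 30 to 50 is medium risk, and below 30 is low risk.

```python
def risk_tier(score):
    """Risk tiers as described in HUD's risk analysis process:
    51-100 high risk, 30-50 medium risk, below 30 low risk."""
    if not 0 <= score <= 100:
        raise ValueError("score must be on the 100-point scale")
    if score >= 51:
        return "high"
    if score >= 30:
        return "medium"
    return "low"

# The 16 unmonitored high-risk recipients scored between 51 and 80,
# so every one of them fell in the top tier.
[risk_tier(s) for s in (80, 51, 50, 30, 29)]
# ['high', 'high', 'medium', 'medium', 'low']
```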
According to HUD, these recipients were not monitored or granted an exception either because its field staff misunderstood the exception requirements or the field office responsible for monitoring the recipient experienced a staffing shortfall. However, despite not monitoring these 16 high-risk recipients, 8 of the 12 responsible field offices monitored recipients that they did not consider high risk. Further, we found that HUD reviewed most, but not all, CDBG recipients at least once in the 5-year period from fiscal year 2001 through fiscal year 2005. As shown in table 3, our analysis of GMP data showed that HUD monitored all but 255 recipients in fiscal years 2003 through 2005. These 255 recipients received about $525 million in fiscal year 2005 funding. When we expanded our analysis to 4 years (fiscal years 2002 through 2005), we determined that HUD had monitored all but 140 recipients that received a total of about $239 million in fiscal year 2005. During the 5-year period from fiscal year 2001 through 2005, HUD did not monitor 84 recipients that received a total of about $132 million in fiscal year 2005. Monitoring recipients is critical because it often results in findings. During our site visits, we reviewed 144 recipient files and found documentation problems. For example, 24 of the 144 files we reviewed did not contain sufficient documentation to show that the activity met one of the three national objectives, as required by the program. Another 14 files did not note which national objective the activity was supposed to meet. Additionally, 46 files we reviewed showed no evidence of monitoring by the recipient. In contrast, recipients we visited tended to have signed agreements with their subrecipients as required by program regulations. Of the 90 cases that involved a subrecipient, 87 files contained a signed subrecipient agreement, and 76 of the 87 agreements contained the five required elements we tested. 
When HUD reviews files during its monitoring, it finds similar occurrences. Fifty-seven percent of HUD’s fiscal year 2005 reviews resulted in at least one finding. In total, HUD’s fiscal year 2005 monitoring resulted in 581 findings and 447 concerns. Examples of cited findings included not documenting a national objective, funding an ineligible activity, poor recordkeeping, and incomplete subrecipient requirements. HUD employs a risk-based monitoring approach because it has limited staff and travel funds to devote to CDBG monitoring. CPD’s staffing levels have decreased nationwide, as its CDBG workload has increased. From fiscal year 1993 to the beginning of fiscal year 2006, the number of CPD field office staff decreased from 751 to 599, a decline of 20 percent. During the same time period, the number of entitlement communities grew from 889 to 1,128, an increase of 27 percent. This increase in workload has had a greater effect on certain CPD field offices. As of February 2006, the average number of CDBG recipients per program representative was nearly four. The number of recipients per representative exceeded this average at 20 of the 42 CPD field offices and was six or more at three offices. The HUD official responsible for CPD field office staff told us he would like to have more staff but has to get the work done with what he has. Additionally, CPD program representatives in their role as program monitors oversee other HUD programs, including three other formula grant programs, homeless programs, and a number of smaller competitive grant programs. For example, at the Chicago CPD office, each representative monitors four to six formula grants, as well as approximately 100 competitive grants. Although they represent fewer dollars than the formula grant programs, the competitive grant programs require more monitoring, according to CPD program managers. 
The programs are generally administered by small nonprofit organizations that experience a large amount of staff turnover. Further, there were 9,705 active competitive grants in fiscal year 2005. A number of CPD field offices also do not have a financial analyst. Financial analysts are important because they evaluate the financial operations of each recipient and ensure that CPD’s monitoring activities adequately address any financial vulnerabilities in CPD programs and related capacity concerns. They help field offices review budget submissions, financial report submissions, independent audit reports, and drawdown requests. As of late April 2006, 13 of the 42 CPD field offices did not have a financial analyst. These 13 field offices averaged a CDBG portfolio of $60 million. In offices we visited that did not have a financial analyst, other staff assumed some of the responsibilities of a financial analyst, but these staff had other responsibilities as well and lacked the specialized skills of a financial analyst. Staffing shortages may worsen in the future because many current CPD field staff are eligible to retire. As of February 2006, 39 percent of CPD field staff was eligible to retire within the next 3 years. If we include those eligible for early retirement, the percentage increases to 59 percent within the next 3 years. For example, the four officials we interviewed in the Milwaukee field office told us that they were all currently eligible to retire, including the CPD Director. Denver field office officials told us that the office could lose all but one of its program representatives to retirement in the next 5 years. HUD has not developed a plan to hire staff with needed skills, such as financial analysts, or to help CPD manage upcoming retirements. 
Our internal control standards state that agencies, as part of their human capital planning, should consider how to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities. According to a HUD official, HUD has taken a number of steps to manage its CPD workforce, such as hiring interns and implementing a leadership development program. However, these efforts do not specifically address the need to hire financial analysts and replace the staff that will become eligible for retirement in the next few years. According to internal control standards, an agency should have a specific and explicit workforce planning strategy that allows for identification of current and future human capital needs and a formal recruiting and hiring plan with explicit links to skill needs the agency has identified. HUD internal reviews and the HUD Inspector General have also noted that limited staffing has negatively impacted CPD’s monitoring. In fiscal year 2004, 11 of the 12 internal management reviews, known as Quality Management Reviews, performed at CPD field offices noted staffing issues. For example, the reports noted that one office might not meet its monitoring goals due to significant loss of staff and that staff at two offices had an unbalanced workload. Also, one report noted that the field office needed a financial analyst for oversight of $113 million in CPD program funds. Furthermore, in a June 2004 report on CPD management controls, the HUD Inspector General observed that reductions in field office staffing levels had impacted CPD’s monitoring capabilities. The report noted that, between 1993 and 2003, CPD had been negatively impacted by staffing challenges that had plagued all of HUD. Further, according to HUD field office staff, limited travel budgets have affected their ability to monitor CDBG recipients. For fiscal year 2005, the travel budget for all 42 CPD field offices was about $392,000. 
The travel budgets for the six field offices we visited ranged from $2,528 in Baltimore (11 CDBG recipients) to $20,691 in Los Angeles (105 CDBG recipients). Some field office staff told us their travel budgets affect which recipients they select for on-site monitoring during the risk assessment process. For example, they will limit their monitoring of recipients that are costly to reach. They will either conduct off-site monitoring or document an exception, which allows them to monitor these recipients less often despite their risk analysis scores. Additionally, when monitoring recipients, field office staff sometimes shorten their visits to fit within their travel budget. In its June 2004 report, the HUD Inspector General reported similar findings and added that field offices will also reduce the number of staff participating in a monitoring visit in order to reduce travel costs. The headquarters official who manages CPD's field offices told us that he must balance the travel needs of all 42 field offices when allocating limited travel funds. To help the field offices better plan their travel, he has begun providing them with quarterly, rather than monthly, allocations of funds. HUD is currently redesigning IDIS but has solicited limited input from its field staff. IDIS, a tool that HUD field staff use to conduct on-site and off-site monitoring, has shortcomings that limit its usefulness as a monitoring tool. IDIS was designed to be a real-time information system providing financial disbursement, tracking, and reporting functions for CPD. In our April 1999 report, we noted that the system was not providing needed information, and our current work indicates that, despite HUD improvements to the system, it still is not providing all the information needed to monitor recipients' performance. 
During our site visits, field office staff noted that (1) the data in IDIS are not always current because some recipients do not update them quarterly, as HUD recommends, and (2) the accomplishment data in IDIS are not as reliable as the financial data. As previously noted, HUD is currently working with recipients to improve the quality of the accomplishment data. Also, a HUD headquarters official noted that HUD plans to add reports that will better assist field staff with their monitoring. Similarly, in a report on how to incorporate performance measures into IDIS, the National Academy of Public Administration found that (1) IDIS allows data input errors and omissions, (2) the ability to manipulate data for reporting purposes is limited, and (3) HUD staff and recipients have expressed frustration with using the system. To improve the usefulness of IDIS, HUD is currently reengineering the system. The department has obligated $9.4 million for development of the new system. One problem with the initial development of IDIS was that HUD did not adequately consider input from end users. HUD has attempted to address this problem in the statement of work for the new system by stating that the contractor should gather requirements from HUD stakeholders and recipients. Specifically, the contractor was to work with HUD's field offices to identify issues with the current accomplishment reporting, hold sessions with both field office staff and recipients to solicit user requirements pertaining to reports, and develop a draft prototype to solicit HUD stakeholder and recipient feedback on proposed navigation approaches. Soliciting input from end users on their requirements is consistent with best practices for system development. 
Our guidance on information technology investment management states that (1) investment control processes should ensure that key customers and business needs for each project are identified and that the users are engaged in this process and (2) users should participate in project management throughout a project's or system's life cycle to ensure that it supports the organization's business needs and meets users' needs. Contrary to the IDIS statement of work and our guidance on information technology investment management, HUD headquarters and its contractor have solicited little input from field staff. As the HUD staff tasked with monitoring CDBG recipients, field staff are the users that rely most heavily on IDIS as a monitoring tool. Although HUD headquarters and the contractor have held only one session with field staff, they have already drafted a document outlining the system's functional requirements. According to the HUD official who is overseeing development of the new system, the one session held with field staff was unproductive; therefore, HUD plans to wait until it is making decisions regarding the standard reports that the system will generate before soliciting additional input from field office staff. If HUD's plans to involve its field staff in efforts to improve IDIS are limited to soliciting input regarding the new system's reporting capabilities, the other factors that have limited IDIS' effectiveness as a monitoring tool may not be addressed. Although HUD has issued a clear policy stating what actions it will take when entitlement communities fail to meet the statutory requirement that funds be spent in a timely manner, it has not developed similar guidance establishing a consistent framework for holding CDBG recipients accountable for deficiencies identified during monitoring. 
Because federal law requires HUD to ensure timely expenditure of entitlement funds, HUD has set a timeliness standard for entitlement communities and established a grant reduction policy for recipients that exceed the standard. As it monitors CDBG recipients, however, HUD has the flexibility to assess other sanctions ranging from issuing a warning letter to advising the recipient to pay back CDBG funds. HUD headquarters has not issued guidance that describes the conditions under which each type of sanction should be taken, and we found instances in fiscal year 2005 where findings that appeared to be similar were associated with different enforcement actions. By implementing a timeliness standard, HUD has reduced the number of entitlement communities that are slow to expend funds. Federal law requires HUD to review CDBG entitlement communities to determine if they have carried out their CDBG-assisted activities in a timely manner. It considers an entitlement community to be timely if, 60 days prior to the end of the recipient’s current program year, the amount of entitlement grant funds available under grant agreements but undisbursed by the U.S. Treasury was not more than 1.5 times the entitlement grant amount for its current program year. To ensure that entitlement communities comply with this standard, HUD established a grant reduction policy for untimely recipients in November 2001. The new policy stated that an untimely recipient had 1 year to become timely. If it still did not meet the 1.5 standard at the end of its next program year, HUD would reduce its next grant by how much it exceeded the standard, unless HUD determined that the lack of timely spending was due to factors beyond the recipient’s control. For example, if a recipient’s annual grant was $1 million and its 60-day ratio was 1.57, the maximum amount of the reduction would be $70,000 (0.07 times $1 million). 
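The grant-reduction arithmetic in the example above can be written out directly. In this sketch the function name is ours, and the policy's exceptions (such as factors beyond the recipient's control, and the 1-year grace period for newly untimely recipients) are not modeled.

```python
def max_grant_reduction(annual_grant, sixty_day_ratio, standard=1.5):
    """Maximum reduction under HUD's timeliness policy: the amount by
    which the recipient's 60-day ratio of undisbursed funds exceeds
    the 1.5 standard, multiplied by its annual grant.  Exceptions
    (e.g., circumstances beyond the recipient's control) are not
    modeled in this sketch."""
    excess = max(0.0, sixty_day_ratio - standard)
    return excess * annual_grant

# The report's example: a $1 million grant with a 60-day ratio of
# 1.57 is subject to a reduction of up to 0.07 x $1 million = $70,000.
# A timely recipient (ratio at or below 1.5) faces no reduction.
max_grant_reduction(1_000_000, 1.57)
```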
Since the implementation of this grant reduction policy, the number of untimely entitlement communities has gone down from 140 in November 2001 to 65 as of April 2006. Of the 65 recipients that were untimely as of April 2006, 8 had a 60-day ratio above 2.0; the remaining recipients had 60-day ratios between 1.51 and 2.0. Although HUD could not provide a list showing the total number of recipients that have been untimely for only a year since the inception of the standard, it has tracked the total number that were untimely for 2 consecutive years and, therefore, subject to grant reduction. As of April 2006, 14 recipients had been subject to grant reduction. Of these 14, HUD reduced the funding of only three. It granted exceptions to six recipients due to factors such as natural disasters that triggered Presidential disaster designations and did not take action against three because HUD failed to provide proper notice to the recipients when they first became untimely. The remaining two had moved back under the 1.5 standard quickly, and HUD decided not to reduce their grants. When they identify deficiencies other than failing to meet the timeliness standard, HUD's field offices have the flexibility to determine which sanctions are warranted based on the conditions identified. As shown in table 4, HUD's monitoring of CDBG recipients during fiscal years 2003 to 2005 resulted in approximately 1,900 findings and about 350 sanctions. In fiscal year 2005, HUD assessed 95 sanctions, including about $1.6 million in financial sanctions. The sanctions that HUD may take against recipients range from issuing a letter of warning to advising recipients to reimburse their lines of credit. 
Beyond the program regulations that describe the purpose of taking corrective actions and the various actions that can be taken, HUD has issued no guidance to its field offices describing what conditions its field staff should consider when taking corrective actions and what specific conditions warrant different types of corrective actions. Instead, its 42 field offices have the flexibility to determine the types of sanctions for findings that they identify. According to HUD headquarters officials, field offices may call HUD headquarters for advice before taking sanctions against a recipient. Figure 7 shows that the action taken most often during fiscal year 2005 was advising the recipient to alter or end an activity. Our internal control standards state that agencies should implement control activities, which are policies and procedures that enforce management's directives and ensure accountability. One such strategy is to document the steps taken to implement internal controls. Such documentation should be clear and readily available. Contrary to these standards, HUD has not clearly documented the steps that its field offices take to determine the appropriate sanctions when deficiencies are identified during monitoring. Such guidance could establish the parameters within which field offices should operate, while still allowing for consideration of individual situations. By establishing a framework within which field offices should operate, HUD headquarters could instill accountability while still allowing field staff to make individual judgments based on factors such as a recipient's past performance and the frequency and severity of findings. In the absence of guidance, HUD's field offices have treated recipients that committed similar infractions differently. 
In our meetings with several national organizations that represent CDBG recipients, representatives noted that their members have observed inconsistent interpretation of program regulations across HUD field offices. Further, we found instances in fiscal year 2005 where seemingly similar deficiencies resulted in different corrective actions. Inability to support meeting a national objective: When one field office found that a recipient could not support that an activity met a national objective, it asked the recipient to provide either more documentation or a written assurance that it would not fund that type of activity in the future. In contrast, another field office advised a recipient that could not document that an activity met a national objective to reimburse its line of credit. In another instance, a field office stated that it might disallow expenditures if a recipient could not document that an activity met a national objective. Documenting environmental reviews: When one field office determined that a recipient had not documented any follow-up compliance actions for projects where mitigating measures for environmental compliance were identified, even after the office had previously identified the lack of follow-up as a concern, it advised the recipient to submit documentation showing that follow-up actions had been taken. In another case, where a field office determined that a recipient had failed to fully document its environmental reviews, that field office advised the recipient to suspend disbursement of funds for all activities until it had put revised environmental review procedures in place and the appropriate level of environmental review had been carried out. Guidance providing HUD's 42 field offices with a range of appropriate actions for identified deficiencies could provide greater transparency and accountability, and it could better ensure consistency of sanctions for similar infractions. 
Many communities use their CDBG funds to benefit their residents and increase the economic health of the community. One of the cited strengths of the program is its flexibility, which allows communities to make decisions locally about the best use of the funds in their community. Given the program’s flexibility, it is critical that HUD ensure that recipients use funds in a manner that is consistent with the purposes of the program. While there are statutory spending limits on public services and planning and administration, HUD does not centrally maintain the data needed to determine compliance with these spending limits in a timely manner. Entitlement communities collectively spend at or close to the limits on public services and planning and administration. Therefore, it is important for HUD to be able to report on the extent of entitlement community compliance with these limits. Without these data readily available, HUD cannot provide timely assurance that recipients are adhering to these limits. With program funding being cut as the number of grant recipients increases, it is essential for HUD to ensure that recipients use funds properly. Because it has limited monitoring resources, HUD has implemented a risk-based process to identify recipients for review. However, HUD faces challenges as it carries out these responsibilities. First, a large percentage of the field staff responsible for monitoring CDBG recipients will be eligible for retirement within the next 3 years. HUD has not developed a plan for replacing this vital program expertise. HUD has established an internship program and other initiatives to develop senior leaders, but such activities will not, in themselves, replace experienced professionals. Without such a plan, HUD has no way to ensure continuity of needed skills and abilities. 
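Once the underlying expenditure data are centrally available, the compliance check described above is straightforward arithmetic. The sketch below illustrates such a check; the cap percentages (15 percent for public services, 20 percent for planning and administration) are the commonly cited statutory limits for entitlement communities, but the exact expenditure base HUD uses in its calculation is a simplifying assumption here, not HUD's actual rule.

```python
# Hypothetical sketch: flag CDBG recipients that appear to exceed the
# statutory spending caps. The cap percentages and the expenditure base
# are simplifying assumptions, not HUD's actual calculation rules.

PUBLIC_SERVICES_CAP = 0.15   # assumed cap on public services spending
PLANNING_ADMIN_CAP = 0.20    # assumed cap on planning and administration

def check_spending_limits(total_expended, public_services, planning_admin):
    """Return a list of cap categories exceeded in one program year."""
    violations = []
    if public_services > PUBLIC_SERVICES_CAP * total_expended:
        violations.append("public services")
    if planning_admin > PLANNING_ADMIN_CAP * total_expended:
        violations.append("planning and administration")
    return violations

# Illustrative (invented) recipient figures:
# (total expended, public services spending, planning/admin spending)
recipients = {
    "City A": (1_000_000, 140_000, 250_000),
    "City B": (2_000_000, 400_000, 300_000),
    "City C": (500_000, 70_000, 90_000),
}

for name, figures in recipients.items():
    flags = check_spending_limits(*figures)
    print(f"{name}: {', '.join(flags) if flags else 'compliant'}")
```

A report like the one GAO requested from HUD amounts to running this comparison across all entitlement communities for a program year, which is only possible if the expenditure components are retained centrally.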
Second, HUD is reengineering IDIS—the system that it relies on to monitor recipients it cannot review on-site—to address a number of shortcomings in the system, but its plans to involve HUD field staff in these efforts are limited to soliciting input regarding the new system’s reporting capabilities. If it does not fully involve all of the system’s stakeholders in the reengineering process, as it failed to do when initially developing the system, HUD runs the risk of repeating past development mistakes and having to live with a flawed system that limits its monitoring abilities. Developing a system that better meets the monitoring needs of HUD field staff has increased in importance in an environment where the number of monitoring staff is declining as the workload is increasing. While allowing for judgment and flexibility, an effective monitoring program should also make it transparent to recipients what actions may be taken if deficiencies are found. HUD has established a clear policy stating that it will reduce an entitlement community’s grant funds if it fails to spend its funds in a timely manner, and, as a result, the number of untimely recipients has dropped. However, HUD has not developed similar guidance laying out a framework of enforcement actions that may be taken when certain deficiencies are identified during monitoring, and we found instances where findings that appeared to be similar were associated with different enforcement actions. Such guidance could establish the parameters within which field offices should operate, while still allowing for flexibility to address individual situations. Issuing guidance could also help HUD’s management provide greater transparency and accountability to the sanctioning process. 
In order to improve HUD’s oversight of the CDBG program, we recommend that the Secretary of Housing and Urban Development direct the Assistant Secretary for Community Planning and Development to take the following four actions:

Maintain in IDIS the data needed to determine compliance with the statutory limitations on expenditures for public service activities and administration and planning.

Develop a plan for ensuring the proper mix of skills and abilities and replacing an aging CPD workforce.

Look for additional opportunities to solicit field staff input on IDIS user requirements.

Consider developing guidance for the CDBG program that details what conditions should be considered when taking corrective actions and what specific conditions warrant different types of corrective actions.

We provided HUD with a draft of this report for review and comment. We received oral comments from the Office of Community Planning and Development’s Comptroller on July 12, 2006, addressing our key findings, conclusions, and recommendations. He stated that, overall, HUD agrees with our findings, conclusions, and recommendations. In addition, HUD provided a letter from the General Deputy Assistant Secretary for Community Planning and Development with comments that were technical in nature. This letter and our response to each of the comments appear in appendix IV. HUD also provided other oral technical comments that were incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. 
At that time, we will send copies of this report to the Ranking Minority Member, Subcommittee on Housing and Community Opportunity, House Committee on Financial Services; Ranking Minority Member, Subcommittee on Federalism and the Census, House Committee on Government Reform; Ranking Minority Member, Subcommittee on Federal Financial Management, Government Information, and International Security, Senate Committee on Homeland Security and Governmental Affairs; and the Chairman and Ranking Minority Member, Subcommittee on Housing and Transportation, Senate Committee on Banking, Housing, and Urban Affairs. We will also send copies to the Secretary of Housing and Urban Development. Copies of this report will also be available to other interested parties upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The Chairman of the Subcommittee on Housing and Community Opportunity, House Committee on Financial Services; the Chairman of the Subcommittee on Federalism and the Census, House Committee on Government Reform; and the Chairman, Subcommittee on Federal Financial Management, Government Information, and International Security, Senate Committee on Homeland Security and Governmental Affairs requested that we review the use of Community Development Block Grant (CDBG) funds and how the Department of Housing and Urban Development (HUD) oversees the program. 
In particular, we examined (1) how recipients have used CDBG funds, including the extent to which they have funded activities that meet national program objectives, complied with spending limits, and reported accomplishments achieved with funds; (2) how HUD has monitored recipients’ use of CDBG funds; and (3) how HUD has held recipients that have not complied with CDBG program requirements accountable for their actions. To accomplish these objectives, we analyzed fiscal year 2005 data from HUD’s Integrated Disbursement and Information System (IDIS) and fiscal year 2001 through 2005 data from the Grants Management Process (GMP) System on all CDBG recipients. We assessed the reliability of the HUD data we used by reviewing information about the systems, performing electronic data testing to detect errors in completeness and reasonableness, and discussing the data with knowledgeable agency officials. We determined that the data were sufficiently reliable for the purposes of this report. In addition to analyzing HUD data on all CDBG recipients, we visited 20 recipients. As shown in table 5, we visited 17 recipients in six large metropolitan areas as well as 3 smaller recipients outside large metropolitan areas. In selecting the recipients located in large metropolitan areas, we considered geographic dispersion, funding level, need, and proximity to a HUD field office and state capital. We selected the smaller recipients outside large metropolitan areas based on their population and location. Of the 20 recipients we visited, 4 were states, 2 were urban counties, and 14 were cities. We also visited four nonentitlement communities funded by the states of Georgia and Maryland. We interviewed the eight HUD field offices that monitor the grantees we visited and interviewed staff at HUD headquarters. 
Finally, we interviewed representatives of four national organizations that represent CDBG recipients—the Council of State Community Development Agencies, the National Association for County Community and Economic Development, the National Association of Housing and Redevelopment Officials, and the National Community Development Association—to obtain their views on the use of CDBG funds and HUD’s oversight of the program. To determine how the communities that receive CDBG funds use those funds, we reviewed CDBG program regulations to determine how recipients are allowed to use their funds. We then analyzed IDIS data on activities funded as of September 30, 2005, to determine (1) the activities most often funded by recipients in fiscal year 2005, (2) any differences between the activities most often funded by entitlement and state recipients in fiscal year 2005, and (3) the percentage of activities funded in fiscal year 2005 that met each of the three national program objectives. For examples of how communities use their funds, we relied on documentation provided by the 20 recipients we visited and pictures we took during our site visits. We had planned to use IDIS data to examine the extent to which CDBG recipients were complying with the statutory spending limits on public services and planning and administration but determined that (1) IDIS did not save some of the data needed to determine compliance by entitlement communities and (2) IDIS data cannot be used to determine states’ compliance with the limits. Therefore, we requested data from HUD showing the percentage of funds spent by selected recipients on public services as well as on planning and administration. We initially requested data on the 200 entitlement communities that received the most funding, but HUD could only provide data on the 100 most populous entitlement communities within our time frames. 
We then analyzed those data to determine how many had exceeded the two spending limits in program year 2004. We also analyzed HUD data to determine the number of recipients eligible for the special exception that allows certain recipients to count activities that benefit fewer than 51 percent low- and moderate-income persons as meeting the low- and moderate-income national objective in fiscal year 2006. To determine the status of HUD’s efforts to implement a performance measurement system, we reviewed the notices published in the Federal Register and guidance on HUD’s Web site as well as interviewed HUD officials. To identify how HUD monitors communities’ use of CDBG funds, we reviewed HUD’s monitoring guidance to determine which tools it uses to monitor recipients. To gain an understanding of HUD’s formal monitoring, we reviewed documentation on its risk analysis process and interviewed the HUD headquarters officials responsible for setting monitoring policy as well as HUD field staff responsible for performing the monitoring. We analyzed data from HUD’s Integrated Performance Reporting System (HIPRS) to determine whether the Office of Community Planning and Development (CPD) met its monitoring goal in fiscal year 2005. We interviewed a knowledgeable agency official regarding the data and determined that they were sufficiently reliable for the purposes of this report. We also analyzed GMP data to determine (1) whether HUD’s field offices complied with its risk analysis process in fiscal year 2005, (2) the extent to which HUD monitored CDBG recipients in fiscal years 2001 to 2005, and (3) the types of monitoring findings HUD made in fiscal year 2005. To determine the adequacy of HUD’s monitoring resources, we reviewed information on CPD staffing and travel budgets. To assess the usefulness of IDIS as a monitoring tool, we reviewed reports on the system and interviewed HUD field staff regarding their experiences with using the system. 
We also reviewed HUD’s plans for reengineering IDIS and discussed them with the responsible HUD official. To assess the extent to which the recipients we visited have complied with CDBG program regulations, we reviewed 144 project files. To identify projects for review, we requested that each recipient provide a list of the projects that it had awarded in calendar year 2003. We used the calendar year because recipients’ fiscal years vary, and we chose 2003 because we anticipated that projects would be well under way or complete by the time of our review. From the list that each recipient provided, we selected a stratified random sample of 6 to 10 projects; the number of files selected depended on the funding level of the recipient—more files were selected for recipients with larger grants. If we determined that a project selected for review was terminated after it was awarded, we selected a replacement project. When reviewing the files, we looked for (1) documentation showing that the activity funded met a national objective, (2) a subrecipient agreement that included the information required in the program regulations (if applicable), and (3) evidence that the recipient had monitored the activity. To determine the extent to which HUD has held recipients that have not complied with CDBG program requirements accountable for their actions, we reviewed the CDBG program regulations to determine what sanctions HUD can take against recipients. We reviewed HUD’s policy on timely expenditure of funds and analyzed data on the number of untimely recipients as of April 2006. We also analyzed GMP data to determine the number of sanctions that HUD had taken in fiscal years 2003 to 2005 and the specific types of sanctions it took in fiscal year 2005. In addition, we interviewed HUD field and headquarters staff to determine how they decide which sanctions to take against recipients. 
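The file-selection approach described above can be illustrated with a short sketch. The dollar thresholds and the mapping from grant size to sample size below are hypothetical simplifications for illustration; the appendix does not specify GAO's actual strata or cutoffs.

```python
import random

# Hypothetical sketch of the file-selection approach: draw a random sample
# of project files per recipient, with larger grants yielding larger
# samples (6 to 10 files). Thresholds and sizes are illustrative only.

def sample_size_for(grant_amount):
    """Map a recipient's grant amount to a sample size between 6 and 10."""
    if grant_amount >= 10_000_000:
        return 10
    if grant_amount >= 5_000_000:
        return 8
    return 6

def select_projects(projects, grant_amount, seed=None):
    """Randomly select project files for review. In GAO's process, a
    project found to be terminated would be swapped for a replacement
    after the draw; that step is omitted here."""
    rng = random.Random(seed)
    n = min(sample_size_for(grant_amount), len(projects))
    return rng.sample(projects, n)

projects_2003 = [f"Project {i:03d}" for i in range(1, 41)]
selected = select_projects(projects_2003, grant_amount=6_500_000, seed=42)
print(f"Selected {len(selected)} of {len(projects_2003)} files for review")
```

Seeding the random number generator, as shown, makes a particular draw reproducible for documentation purposes while keeping the selection itself random.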
We performed our work from July 2005 to July 2006 in accordance with generally accepted government auditing standards. The methods that states use to distribute CDBG funds vary. To demonstrate the variety of methods used, we examined the approach that the following 10 states take when distributing their funds: Georgia, Colorado, Maryland, Massachusetts, North Carolina, New York, Ohio, Pennsylvania, Puerto Rico, and Texas. We selected these states because we visited the first four and the remaining six received the largest funding allocations in federal fiscal year 2005. To determine the method of distribution used, we reviewed each state’s fiscal year 2005 action plan. Table 6 provides information such as how each state allocates its funds among various activities, the evaluation criteria used to select applications, and incentives or application bonuses offered. During our review of the CDBG program, we visited 16 entitlement communities and four states. Additionally, we visited four nonentitlement communities funded by two of the states we visited (Georgia and Maryland). These recipients funded the following examples of public improvement, housing, public service, economic development, and acquisition activities. West Point, Georgia (a nonentitlement community) used $500,000 in CDBG funds awarded by the state of Georgia to build a new Boys and Girls Club (see fig. 8). According to the city’s application for funds, the old Boys and Girls Club did not have an accessible entry, had several leaks in the roof that could only be temporarily repaired, did not have load-bearing walls, and had a mechanical system that appeared to be well beyond its reasonable life expectancy. The total budget for the project was $721,500, including operating costs. According to the application, the club plans to serve 200 children, 180 of whom are from low- and moderate-income families. 
Poulan, Georgia (a nonentitlement community) utilized a $499,081 grant from the state of Georgia to replace a portion of the city’s corroding water pipes. At the time of the grant, the corroding water pipes restricted the amount of water that flowed through the water lines and caused the water to become discolored and rusty. According to local officials, the water that was fed through these water lines was not suitable for drinking, bathing, or cleaning clothes. In addition to the money provided by the state, the city of Poulan provided $40,000. Over 70 percent of the residents who benefited from the new water lines had low and moderate incomes. The state of Maryland awarded the town of Denton (a nonentitlement community) $600,000 to make improvements to city streets. Specifically, the funding was used to install a new storm water management system, curbs, gutters, sidewalks, and paving. The CDBG grant provided $431,913 for construction, $148,087 for project administration and contingency, and $20,000 for general administration. The project was matched with a $566,950 loan and a $4,920 grant from the U.S. Department of Agriculture and $7,841 from the town of Denton. Attleboro, Massachusetts, used $222,267 in CDBG funds to finance, in part, the reconstruction of the Fred E. Briggs Playground municipal pool and bathhouse, which is located in a census tract where 59 percent of the households are of low and moderate income. The city demolished the old pool, the bathhouse, the building that housed the filtration system, the walkways, and the fencing and constructed a brand-new municipal pool and bathhouse facility (see fig. 9). The capital project was necessary to bring both the pool and bathhouse into compliance with federal, state, and local building and health codes and to provide accessibility for persons with disabilities. The total project cost was $537,849. 
Kane County, Illinois used $28,592 in CDBG funds to finance the rehabilitation of the Corron Farm Park (see fig. 10), located in and owned by Campton Township. The structure was listed in the Kane County Register of Historic Places and was vacant and badly deteriorated when rehabilitation work began. The building will house a local history museum upon completion. Additionally, local officials told us that the investment of CDBG funds helped reinforce local efforts to protect open space in an area facing rapid growth and development. The overall funding for the project was $64,936, with Campton Township investing $36,344 in the project. Greeley, Colorado spent $236,000 in 2003 CDBG funds to continue its single-family housing rehabilitation program and provide emergency assistance to the elderly and persons with disabilities. Efforts were concentrated in areas targeted for urban renewal. Activities included housing rehabilitation and weatherization, housing replacement, property acquisition, ramps for elderly persons and persons with disabilities, a first-time home buyer’s program, and urban renewal (see fig. 11). In 2003, Atlanta, Georgia provided $350,000 in CDBG funds to Southeast Energy Assistance (SEA) for energy-related repairs to 225 homes owned by low-income residents. These repairs eliminate air leaks to make homes more energy efficient and reduce heating and cooling costs. SEA is a nonprofit organization that is a service provider for the federally funded Weatherization Assistance Program (WAP). WAP services include adding insulation to floors, walls, and attics; replacing or repairing damaged exterior doors and windows; and installing weather-stripping and caulking. In fiscal year 2003, the Commonwealth of Massachusetts awarded $1,118,125 in funding to the town of Oak Bluffs to rehabilitate 40 units of substandard housing in the towns of Oak Bluffs, Aquinnah, Chilmark, Edgartown, Tisbury, and West Tisbury. 
Low- and moderate-income persons residing in substandard housing were eligible to participate. Upon completion of the grant, a total of 47 units had been rehabilitated. The project utilized three loan options: a deferred payment loan, a deferral agreement loan, and a direct reduction loan. Ten loans were issued at or under $30,000, 23 loans were issued at or under $25,000, and 14 loans were issued at or under $20,000. Chicago, Illinois provided $5.9 million in CDBG funds to build Wentworth Commons. Wentworth Commons provides affordable housing to families and individuals who were formerly homeless or at risk of homelessness. To qualify to live at Wentworth, applicants must make 60 percent or less of the area median income. Overall, there are 51 units at the site: 24 efficiency apartments, 15 three-bedroom apartments, 9 two-bedroom apartments, and 3 four-bedroom apartments. The site also features supportive services such as case management, employment training, and leadership development. The building is environmentally friendly and energy efficient. It uses solar energy to generate electricity for the building’s electrical distribution system, offsetting electrical use. The total cost of the project was $13 million. Los Angeles, California runs a “Handyworker” Program that provides minor home repair services to low-income senior citizens or homeowners with disabilities. The program helps keep housing from deteriorating by funding repairs that homeowners could not otherwise afford. In program year 2004, the city budgeted $2,000,000 in CDBG funds for the program. Grants of up to $5,000 per client were available for repairs or home improvements that address home safety, accessibility, and security issues. Improvements include exterior and interior painting, minor finish work, the installation of disability grab bars and accessibility ramps, minor plumbing, and other repairs. 
Through this program, the city is working to preserve the existing stock of affordable housing. The city’s goal for program year 2004 was to provide “Handyworker” services to 1,552 households. Caroline County, Maryland (a nonentitlement community) began receiving state of Maryland CDBG funds in 2002 to rehabilitate housing for low- and moderate-income households. Since 2002, the county has received $575,000 in CDBG funds to rehabilitate 51 homes. The county also received $17,250 in CDBG funds in 2003 to complete a housing study. County officials told us that the CDBG funds have also helped the county leverage $10,250 from the U.S. Department of Agriculture for housing rehabilitation. Baltimore County, Maryland conducts a Single Family Rehabilitation and Emergency Repair Program. Since the inception of the program, the county has assisted nearly 1,850 income-eligible households. In fiscal year 2005, the county spent $1 million in CDBG funds to assist 93 households. The program provides loans of up to $25,000 per home. The loans are then deferred until the sale, refinance, or transfer of the property. During ownership, the county allows homeowners to make certain repairs and home improvements. Naperville, Illinois provided $19,223 in program year 2005 funding for the Loaves and Fishes Community Food Pantry (see fig. 12). The food pantry provides groceries that ensure a healthy diet to Naperville’s low-income and homeless clients. According to Loaves and Fishes, 3,000 Naperville residents live in poverty. On a weekly basis, the food pantry provides 250 families with the equivalent of three bags of groceries to last for a 2-week period. In 2005, the food pantry provided services to over 1,500 families, home delivery to over 100 seniors and individuals with disabilities, and over 1,800 holiday food distributions. 
Denver, Colorado provided $50,000 in 2003 funding to Brothers Redevelopment Incorporated to provide the salaries and benefits for a director and two part-time counselors. The director and part-time counselors provided information, referrals, and mortgage counseling for low- and moderate-income households in the Denver community. Santa Monica, California provided $242,442 in program year 2005 CDBG funding toward the SAMOSHEL homeless shelter. SAMOSHEL provides 110 shelter beds to homeless adults and expects to serve up to 500 persons annually with its emergency shelter. Additionally, the shelter provides services such as access to medical and mental health services, permanent and transitional housing programs, domestic violence intervention, counseling and case management, and substance abuse recovery support and employment services. In fiscal year 2003, Warner Robins, Georgia provided $41,000 in CDBG funds to the Gateway Cottage. The Gateway Cottage program targets young homeless mothers recovering from substance abuse. The cottage provides housing and resources for 1 year while providing training in hygiene, personal finance, substance abuse, parenting, and daily living skills. The program networks with other service providers to link clients with job training, educational opportunities, and physical and mental health services. Upon graduation from the program, clients are eligible to apply for the aftercare component of the program, which is supportive housing in conjunction with supportive services. Beloit, Wisconsin provided $7,068 for the Beloit Chore Service Program in 2005. The program provides senior citizens with screened, qualified workers who will do home maintenance and repairs at affordable prices. The program staff screen workers and verify that they are qualified to perform the repair and maintenance work. The workers provide inexpensive home repairs, which allow seniors to remain independent and in their own homes. 
Baltimore, Maryland provided $80,700 in 2003 CDBG funds to Belair-Edison Neighborhoods Incorporated. The funds were used to undertake several activities, including prepurchase, default, and delinquency counseling; fair housing counseling and education; homeownership workshops; and public information and technical assistance to businesses in the Belair-Edison area of Baltimore. In fiscal year 2005, Boston, Massachusetts designated $856,697 in CDBG funds for its Boston Main Streets program. The city of Boston provided funding and technical assistance to 19 neighborhood-based Main Streets districts throughout the city. The program helps the local districts capitalize on their unique cultural and historical assets while focusing on the community's economic development needs. Examples of activities funded under the program include small business recruitment, business retention, and addressing competition from shopping malls and discount retailers. From 1995 to December 2005, the city created 540 new businesses and 3,643 new jobs, and leveraged $9,645,644 in additional private investment through the program. Dubuque, Iowa provided a $500,000 CDBG loan to Heartland Financial in April 2003 as an incentive to select a downtown location for the company's expansion, which created 47 new jobs (see fig. 13). The $4.5 million project provided for the renovation of two downtown buildings, both of which are on the National Register of Historic Places. In addition, it provided for the reuse of the vacant buildings, retained a workforce in the downtown, and created new jobs for low- and moderate-income persons. As of the 2003/2004 fiscal year, the city of Gardena, California had expended $490,755 in CDBG funds revitalizing its Van Ness Corridor. The goal of the revitalization was to strengthen the economic vitality of the city, provide employment opportunities, stimulate quality retail development, and create a sustainable economic base for the city. 
The city provided funds to businesses along the corridor to eliminate slum and blight. CDBG assistance has included financial assistance for facade and exterior improvements, providing block wall and infrastructure improvements along the corridor, conducting a business survey to develop and implement a business outreach program, and providing an ongoing graffiti abatement and removal program. The state of Colorado provided $250,000 in CDBG funds to help a health clinic in Lafayette, Colorado acquire property to build a new facility. Clinica Campesina is a community health center serving the needs of the low-income, uninsured residents of Southeastern Boulder, Broomfield, and Western Adams Counties. Ninety-six percent of the patients that the clinic serves are at or below 200 percent of the federal poverty line. The clinic’s patients are predominantly children under the age of 13 (38 percent) and women of childbearing age (28 percent). The total project budget was $1.3 million. The following are GAO’s comments on the Department of Housing and Urban Development’s letter dated July 11, 2006. 1. We agree that it is important to monitor compliance with administration and planning and public service spending caps. However, our report emphasizes HUD’s need to centrally maintain data on compliance with statutory spending limits so that it can report on the extent of compliance; therefore, we made no change to the report in response to this comment. 2. The CPD staffing/hiring plan was approved in June 2006 and was provided to us along with HUD’s written agency comments. Because the plan was provided at the close of this engagement, this report does not evaluate the extent to which the plan addresses identified workforce needs. 3. The guidance that HUD references was issued in the 1990s. 
When we interviewed the Director of CPD’s Office of Field Management and field office staff regarding the monitoring of CDBG recipients, they stated that they were following the new CPD Monitoring Handbook, which was issued in September 2005. The introduction to this handbook states that it establishes standards and provides guidance for monitoring CPD programs, including CDBG. Beyond referring field staff to various sections of the program regulations, the new handbook does not describe what conditions its field staff should consider when taking corrective actions and what specific conditions warrant different types of corrective actions. Because we believe that HUD needs a consistent framework for holding CDBG recipients accountable for deficiencies identified during monitoring, we made no change to the report. 4. Our report acknowledges that any additional guidance that HUD develops for its field staff taking sanctions could allow for the consideration of individual situations. Because individual situations may vary, we stated that such guidance could establish a framework, or parameters, within which field offices should operate. Although HUD points to several forms of guidance in its comment, none of them specifically addresses the concerns raised in this report. The regulatory language in 24 C.F.R. 570.910(a) states that corrective actions should be designed to (1) prevent a continuation of the performance deficiency; (2) mitigate, to the extent possible, the adverse effects or consequences of the deficiency; and (3) prevent a recurrence of the deficiency. While this language establishes the purpose of taking sanctions, it does not provide parameters that help field staff determine which specific corrective sanction is appropriate to address the deficiency identified. Section 2-8.B. 
of the CPD Monitoring Handbook describes HUD’s basis for determining whether a deficiency should result in a finding or concern, but it does not help field office staff determine which sanction may be appropriate if the deficiency results in a finding. Finally, as we mentioned in our response to the previous comment, the additional handbook HUD referenced was issued in 1992, while the CPD Monitoring Handbook was issued in 2005. Given the great flexibility that exists when taking sanctions, we believe it would be useful to provide field office staff further guidance to ensure they are treating recipients that commit similar infractions equitably. 5. We revised the report to include the suggested text. 6. We revised the text to make it clear that the 29 exhibits we mention are in the two handbook chapters that are specific to the CDBG program. 7. We agree that the meetings referenced by HUD can be helpful in sharing information on current operational issues with IDIS. However, the meetings that HUD has referenced are either regularly scheduled management meetings or training on HUD’s new performance measurement system. None of these meetings are the field office sessions that are specifically mentioned in the statement of work for the reengineered IDIS system. When we asked about the status of sessions that the statement of work said would be held with field staff regarding user requirements, accomplishment reporting, and proposed navigation approaches, the HUD official who is overseeing development of the new system stated that these sessions would not be held until late summer 2006 at the earliest, although a functional requirements document had already been drafted. Further, additional statements made by that official and HUD’s written comments indicate that the focus of future meetings with field staff will be only on reporting requirements. 
We continue to believe that soliciting input from end users on system requirements is consistent with best practices for system development and recommend that field office staff participate in project management throughout the system’s life cycle to ensure that the completed system supports both HUD’s business needs and the needs of end users in the field offices. 8. We agree that monitoring low- or medium-risk grantees can serve a useful and valid program purpose, especially considering the large number of grantees designated as such. The report acknowledges that HUD policy permits the monitoring of medium- and low-risk recipients by noting that they can be reviewed using remote, or off-site, monitoring. Therefore, we made no change to the report. 9. We agree that monitoring recipients is critical to fulfill statutory and regulatory responsibilities to assess compliance as well as carry out stewardship responsibilities. In our report, we are providing one reason why monitoring is critical, not an all-inclusive list, so we did not change the report. 10. We revised the text as suggested. 11. We agree that grant monitoring is a critical stewardship responsibility. This section of our report is highlighting the fact that program funding cuts are being made at the same time as the number of grant recipients is increasing, which creates challenges as HUD carries out its stewardship responsibilities. 12. We agree that it is easier to develop policies and procedures to address timeliness deficiencies than it is to develop guidance that addresses the myriad deficiencies identified during monitoring. However, given the importance of holding CDBG recipients accountable for how they use their funds, we recommend that HUD consider issuing additional guidance for field staff that establishes the parameters within which field offices should operate and provides greater transparency to the sanctioning process.
In addition, Paul Schmidt, Assistant Director; Nima Patel Edwards; Cynthia Grant; Curtis Groves; Alison Martin; John McGrail; Marc Molino; David Noguera; David Pittman; Nitin Rao; and Paige Smith made key contributions to this report.
|
The Community Development Block Grant (CDBG) program provides funding for housing, economic development, and other community development activities. In fiscal year 2006, Congress appropriated about $4.2 billion for the program. Administered by the Department of Housing and Urban Development (HUD), the CDBG program provides funding to metropolitan cities and urban counties, known as entitlement communities, and to states for distribution to nonentitlement communities. This report discusses (1) how recipients use CDBG funds, including the extent to which they comply with spending limits, (2) how HUD monitors recipients' use of CDBG funds, and (3) how HUD holds recipients that have not complied with CDBG program requirements accountable. To address these objectives, we visited 20 recipients, analyzed HUD data, and interviewed HUD staff. HUD data show that CDBG recipients spend the largest percentage of their grants on public improvements (such as water lines and streets) and housing, but HUD does not centrally maintain the data needed to determine compliance with statutory spending limits. Due to the lack of centralized data, GAO was not able to determine the extent to which all recipients have complied with statutory spending limits on public services (such as health and senior services) and administration and planning. However, data provided by HUD for the 100 most populous entitlement communities, which received about one-third of the CDBG funds allocated in fiscal year 2006, showed that not all of these entitlement communities complied with the limits. Of the 100 communities, 3 exceeded their public service spending limit, and 1 exceeded the administration and planning spending limit. Given that entitlement communities collectively spend at or close to the limits, it is important for HUD to be able to report on the extent of their individual compliance with these limits. 
HUD uses a risk-based approach to monitor CDBG recipients; however, it has not developed a plan to replace monitoring staff or fully involved its field staff in plans to redesign an information system they use to monitor recipients. HUD's monitoring strategy calls for its field offices to consider various risk factors when determining which recipients to review because it has limited monitoring resources, and its workload has increased as its staffing levels have decreased. For example, 13 of the 42 field offices that oversee CDBG recipients do not have a financial specialist to evaluate the financial operations of each recipient, and 39 percent of CDBG monitoring staff are eligible to retire within the next 3 years. Despite these statistics, HUD has not developed a plan to hire staff with needed skills or manage upcoming retirements. Finally, although the Integrated Disbursement and Information System (IDIS) is a tool that HUD field staff use to monitor recipients, HUD headquarters has solicited little input from them on efforts to redesign IDIS. Although it has issued a clear policy stating what actions it will take when entitlement communities fail to meet the statutory requirement that funds be spent in a timely manner, HUD has not developed similar guidance establishing a consistent framework for holding CDBG recipients accountable for deficiencies identified during monitoring. For deficiencies other than being slow to expend funds, HUD has the flexibility to institute sanctions ranging from issuing a warning letter to advising the recipient to return funds. Although its field offices have great flexibility when taking sanctions, HUD has not issued guidance establishing a framework to ensure that they are treating recipients that commit similar infractions equitably. We found instances in fiscal year 2005 where treatment seemed inconsistent.
For example, several field offices found that recipients had not documented that a funded activity met any one of the program's three national objectives, but took different actions. In the continued absence of guidance, HUD lacks a means to better ensure consistency in the sanctioning process.
|
Reading First, which was enacted as part of NCLBA, aims to assist states and local school districts in establishing reading programs for students in kindergarten through third grade by providing funding through 6-year formula grants. The goal of the program is to ensure that every student can read at grade level or above by the end of third grade. To that end, Reading First provides funds and technical assistance to states and school districts to implement programs supported by scientifically-based reading research (SBRR), increase teacher professional development based on this research, and select and administer reading assessments to screen, diagnose, and monitor the progress of all students. NCLBA defines SBRR as research that (1) uses systematic, empirical methods that draw on observation or experiment; (2) involves rigorous data analyses that test stated hypotheses and justify general conclusions; (3) relies on measurements or observational methods that are valid; and (4) has been accepted by a peer-reviewed journal or approved by a panel of independent experts. Further, NCLBA requires states to adopt reading programs that contain the five essential components of reading-- (1) phonemic awareness; (2) phonics; (3) vocabulary development; (4) reading fluency, including oral reading skills; and (5) reading comprehension strategies. While Education has responsibility for overseeing the Reading First program and states’ implementation and compliance with statutory and program requirements, NCLBA places restrictions on what Education officials can require states to do. Specifically, Education is not authorized to mandate, direct, control, or endorse any curriculum designed to be used in elementary or secondary schools. 
Further, when Education was formed in 1979, Congress was concerned about protecting state and local responsibility for education and, therefore, placed limits in Education’s authorizing statute on the ability of Education officials to exercise any direction, supervision, or control over the curriculum or program of instruction, or the selection of textbooks or personnel, of any school or school system. Every state could apply for Reading First funds, and states were required to submit for approval a state plan demonstrating how they would ensure that statutory requirements would be met by districts. Education, working in consultation with the National Institute for Literacy (NIFL), as required in NCLBA, established an expert review panel composed of a variety of reading experts to evaluate state plans and recommend which plans should be approved. In these plans, states were required to describe how they would assist districts in selecting reading curricula supported by SBRR, valid and reliable reading assessments, and professional development programs for K-3rd grade teachers based on SBRR. The law does not call for Education to approve or disapprove particular reading programs or curricula identified in state plans. When appropriate, the peer review panel was also to recommend clarifications or identify changes it deemed necessary to improve the likelihood of a state plan’s success. NCLBA requires that Education approve each state’s application only if it meets the requirements set forth in the law. Reading First allows states to reserve up to 20 percent of their funds for professional development; technical assistance; and planning, administrative, and reporting activities. For example, states can use their funds to develop and implement a professional development program to prepare K-3rd teachers in all essential components of reading instruction.
One model for supporting teachers’ reading instruction involves hiring a Reading Coach who works with teachers to implement reading activities aligned with SBRR. Almost all states require Reading First schools to have a Reading Coach tasked with supporting teachers and principals with instruction, administering assessments, and interpreting assessment data. States that receive Reading First grants are required to conduct a competitive sub-grant process for eligible school districts and must distribute at least 80 percent of the federal Reading First grants they receive to districts. NCLBA and Education guidance provide states with flexibility to set eligibility criteria for school districts so that eligible districts are among those in the state that have the highest number or percentage of K-3rd grade students reading below grade level and (1) have jurisdiction over an empowerment zone or enterprise community, (2) have a significant number or percentage of schools identified as in need of improvement, or (3) are among the districts in the state that have the highest numbers or percentages of children counted as poor and school-aged for the purposes of Title I. NCLBA establishes priorities that states must consider when awarding a Reading First sub-grant, while also allowing states to establish other priority areas. For instance, NCLBA requires that the state sub-grant process give priority to districts with at least 15 percent of students or 6,500 children from families with incomes below the poverty line, but states also have some flexibility to establish additional priorities, such as a demonstrated commitment to improving reading achievement. The sub-grant process, along with the criteria at each stage, is summarized in figure 1. Districts are required to use their sub-grant funds to carry out certain activities identified in NCLBA.
For example, districts must use these funds to select and implement reading programs based on SBRR that include the essential components of reading instruction, to select and implement diagnostic reading assessment tools, and to provide professional development opportunities for teachers. Additionally, districts are permitted to use Reading First funds in support of other activities, such as training parents and tutors in the essential components of reading instruction. States are required to report to Education annually on the implementation of Reading First, including their progress in reducing the number of students who are reading below grade level. Additionally, states are required to submit a mid-point progress report to Education at the end of the third year of the grant period. These mid-point progress reports are subject to review by the same expert peer review panel that evaluated state applications. If Education determines, after submission and panel review of a state’s mid-point progress report and on the basis of ongoing Education monitoring, that a state is not making significant progress, Education has the discretion to withhold further Reading First grant payments from that state. While these state reports to Education are intended to provide information on the effectiveness of Reading First, Education is also required to contract with an independent organization outside Education for a rigorous and scientifically-valid, 5-year, national evaluation of the program, with a final report scheduled to be issued in 2007. The Reading First program has relied on several key contractors to perform a number of program functions. For example, Education officials hired RMC Research Corporation, a company that provides research, evaluation, and related services to educational and human services clients, to provide technical assistance to states and districts that have received Reading First funding. 
According to Education officials, RMC contractors were tasked initially with providing specific, individualized guidance on the application process to state officials who requested it. RMC later became the national coordinator for the contract overseeing the National Center for Reading First Technical Assistance and its three regional subsidiaries: the Eastern Regional Reading First Technical Assistance Center (ERRFTAC) in Tallahassee, Florida; the Central Regional Reading First Technical Assistance Center (CRRFTAC) in Austin, Texas; and the Western Regional Reading First Technical Assistance Center (WRRFTAC) in Eugene, Oregon. In this role, RMC staff provide support to the TACs and their employees, coordinate weekly among the TACs, and conduct regular training seminars. Operated out of universities recognized by Education officials for their expertise in SBRR and related areas, the centers began operations in 2003 and are responsible for providing an array of technical assistance activities to states, including national and regional conferences, training and professional development, products and materials, and liaisons to national reading experts. Education officials also contracted with Learning Point Associates to provide technical assistance to states as they launched their sub-grant competitions. Once Reading First sub-grants had been awarded to local districts, Education contracted with the American Institutes for Research (AIR), a behavioral and social science research organization, to conduct annual monitoring visits to each state. These visits incorporate sessions with state officials, as well as visits to a few districts in each state, and are designed to assess states’ and districts’ compliance with their approved plans. After each monitoring visit, AIR representatives submit a report, including any findings of non-compliance, to Reading First officials. Reading First officials are to forward these reports to the cognizant state officials.
States reported that there have been a number of changes and improvements in reading instruction since the implementation of Reading First. There has been an increased emphasis on the five key components of reading, assessments, and professional development with more classroom time being devoted to reading activities. However, according to publishers we interviewed, there have been limited changes to instructional material. Similarly, many states that approved reading programs for districts to choose from report few changes to their lists of approved programs. In responding to our survey, 69 percent of all states reported great or very great improvement in reading instruction since inception of Reading First. One area in which states reported a change that may have contributed to improvement of reading was the degree to which classroom instruction explicitly incorporated the five key components. In our survey, at least 39 states reported that Reading First schools had incorporated each of the five required components of reading into curriculum to a great or very great degree as a result of Reading First. State and local officials we talked to during some of our site visits reinforced this opinion and in particular noted that Reading First teachers had awareness of and were more focused on the five components. In addition, the increased time devoted to reading activities under Reading First may have contributed to improvement. Several district officials we met with told us they were including a protected, uninterrupted block of time for reading instruction of 90 minutes or more per day—which the department’s Guidance for the Reading First Program lists as a key element of an effective reading program. Education’s Reading First Implementation Evaluation: Interim Report (The Interim Report) also found that Reading First teachers reported allocating over 90 minutes per day, on average, for a designated reading block. 
State officials reported improvement in reading instruction resulting from the use of assessments. In responding to our survey, one state official said, “One of the strengths of the Reading First program has been its strong adherence to SBRR and to the use of valid and reliable assessments in guiding instruction and program evaluation.” A number of state and local officials we interviewed reported that the use of assessments changed after Reading First, especially in the way that teachers use data from these assessments to better inform reading instruction. Specifically, district officials we talked to during our site visits reported that teachers review students’ assessment results to determine the areas in which they need more targeted instruction. One official also reported that assessment data can sometimes be used to identify successful teachers from whom other teachers can learn teaching techniques, with one official asserting that “Reading First has and is making a great impact on teachers’ instructional practices, techniques, and strategies.” Also, according to Education’s Interim Report, researchers estimated that 83 percent of Reading First teachers cited assessment results as essential to organizing instructional groups, 85 percent cited the results as essential to determining progress on skills, and 75 percent cited the results as essential to identifying students who need reading intervention. According to our survey, most states also reported that the assessments they used differed greatly or very greatly from the ones they used prior to Reading First. States reported a wide variety of reading assessments on their state-approved lists, with over 40 different assessments listed. By far, the most frequently approved assessment was Dynamic Indicators of Basic Early Literacy Skills (DIBELS), approved by 45 states.
Also, a few states reported to us that they were moving toward a more uniform systematic assessment system for the first time, whereas previously each school could choose which assessment it would use. Some state and district officials told us that having a more uniform and systematic assessment was beneficial, because, for instance, it allowed the officials to track and compare reading scores more easily. Professional development is another area in which state officials noted improvement. All states reported improvement in professional development as a result of Reading First, with at least 41 states reporting that professional development for reading teachers improved greatly or very greatly in each of five key instructional areas. Further, a considerable majority of states reported great or very great increases in the frequency of professional development and the resources devoted to it, 45 and 39 states, respectively. One state reported, “The provision of funding to be used to support statewide professional development efforts for K-3 reading has been an important aspect of the program.” The Interim Report on the Reading First program highlights that a vast majority of Reading First teachers had received training on the five key components of reading. In our site visits, district officials confirmed that, for the most part, teachers in their Reading First classrooms had received training. However, in responding to our survey, 19 states did report some challenges in training 100 percent of Reading First teachers, with teacher turnover cited by 12 states as the reason some Reading First teachers might not have taken any type of Reading First training. Figure 2 summarizes reported improvements in professional development for teachers. Professional development was provided by a variety of federal, state, and private sources.
Staff from the TACs and officials from at least one state reported providing professional development to districts customized to the individual district’s needs and perceived future needs. Education’s Interim Report on Reading First implementation noted that state Reading First coordinators in 33 states reported that state staff chose and organized all statewide professional development efforts and played a key role in selecting professional development topics for districts and schools. In addition, publishers we spoke with told us they often provide training to acclimate teachers to their products. Certain publishers of major commercial reading programs and assessments told us that since the implementation of Reading First, districts demand much more training. Specifically, according to some of the publishers and TAC staff we spoke with, districts have been interested in more in-depth workshops on particular topics such as teaching techniques and using and interpreting assessments. Finally, another aspect of professional development pertinent to Reading First is the presence of a Reading Coach. State and district officials reported that Reading Coaches receive training that better enables them to assist schools. Education’s Interim Report found that each Reading Coach worked with an average of 1.2 schools and with 21 teachers to help implement activities aligned with SBRR. Three of the four major publishers of reading programs we spoke with reported that they had not made significant changes to the content of their reading programs as a result of Reading First. Two publishers stated that they made minor changes to their reading materials to make more explicit how the content of the existing programs align with the five components emphasized in Reading First. Two of them reported that they made changes to their programs based on the National Reading Panel’s findings, which was prior to the enactment of Reading First. 
For example, representatives of one company stated that they launched a new reading program based on the findings of the National Reading Panel that takes into account the requirements of Reading First. Despite limited changes to the actual instructional material, all the publishers noted a greater emphasis on assessing the efficacy of their reading programs as a result of Reading First. In an effort to measure the effectiveness of their programs, the publishers reported devoting more effort to research and to evaluating how effective their reading programs were at raising reading assessment scores. States followed two main approaches in selecting reading programs for districts—22 identified a state-approved list of programs for districts to select from, while the other 29 did not have a state-approved list, thereby requiring districts in those states to self-select reading programs and determine, with some state oversight and subject to state approval, whether they satisfy the requirements of SBRR. Of the 22 states with approved lists, the reading program publishers most frequently represented on the lists were Houghton Mifflin, McGraw-Hill, and Harcourt (see table 1). At the school level, Education found in its Interim Report that these three reading program publishers were also the most frequently used, estimating that between 11 and 23 percent of schools used programs from one of them. Additionally, of the 22 states that identified a list of approved core reading programs for Reading First, 8 already had a list of approved core reading programs for adoption by all schools in their state prior to Reading First. Only two of these states reported removing reading programs—a total of six—from their lists because they did not meet Reading First requirements.
According to Education’s Interim Report, an estimated 39 percent of Reading First schools reported adopting a new core reading program at the beginning of the 2004-2005 school year in which they received their Reading First grant, in contrast with an estimated 16 percent of non-Reading First Title I schools. States used a variety of sources to help them identify and select reading programs that met Reading First’s criteria. For example, 15 of the 22 states with state-approved lists reported using the Consumer’s Guide to Evaluating A Core Reading Program Grades K-3: A Critical Elements Analysis to make this decision. Other frequently used resources include criteria in the state’s application for Reading First, information obtained at Reading First Leadership Academies provided by Education, and other states’ approved lists. Based on responses to our survey, the table below summarizes approaches states used to develop their approved lists (see table 2). Based on our survey results, 25 of the 29 states reporting that they did not have a list of approved core reading programs said they provided guidance for districts and schools to identify core reading programs. Fifteen of these states reported directing districts and schools to conduct a review of reading programs using A Consumer’s Guide to Evaluating a Core Reading Program. Other states reported providing a variety of guidance to districts to help them select reading programs supported by SBRR, including referring them to the approved lists of other states and reviews conducted by academic experts. States varied in how they exercised their flexibility to set additional eligibility and award criteria as allowed by the Reading First program, and some states reported difficulty with implementing key aspects of the Reading First program while other states did not. 
In the areas in which they were given flexibility, states used a variety of criteria for determining eligibility and awarding sub-grants to eligible districts, such as awarding grants to districts that had previously received federal reading dollars. Education reported that over 3,400 school districts were eligible to apply for Reading First sub-grants in the states’ first school year of funding. Of these districts, nearly 2,100 applied for and nearly 1,200 received Reading First sub-grants in the states’ first school year of funding. In addition, 22 states reported that it was difficult or very difficult to help districts with reading scores that had not improved sufficiently. On the other hand, 28 states reported that it was easy or very easy to determine whether districts’ applications met criteria for awarding sub-grants. States varied in how they exercised their flexibility to set school district eligibility criteria for sub-grants. The Reading First program provides states with some flexibility to define eligibility criteria within the statutory guidelines. For instance, while Reading First requires that states target districts with students in kindergarten through third grade reading below grade level, states have flexibility to set eligibility criteria based on the percentage and/or number of these students within districts. While 34 states reported electing to base eligibility on a percentage of schools with students reading below grade level, 18 states reported electing to base eligibility on a number of students reading below grade level. After applying eligibility criteria, Education reported that states determined that over 3,400 school districts were eligible to apply for Reading First sub-grants for states’ first school year of funding, or about 20 percent of all school districts nationwide. However, the percentage of eligible districts varied greatly across the states, ranging from about 3 to 93 percent.
Of those districts eligible to apply, 62 percent, or nearly 2,100 districts, did so, as summarized in figure 3 below. States reported a variety of reasons why eligible school districts did not apply, such as the prescriptive nature of the program, differences in educational philosophy, and inadequate resources for the application process. For example, officials from a few states reported that some districts did not have the capacity to write the grant application. An official from one state reported that some districts did not have the time and the staff to complete the sub-grant process. Furthermore, an official from another state reported that the application process was too lengthy and time-consuming to complete. Nineteen states reported in our survey that they exercised flexibility in establishing priorities when awarding Reading First sub-grants. States set a variety of additional priorities for awarding grants to school districts. For instance, six states reported that they gave priority to districts that already had other grants, such as Early Reading First grants, or indicated that they could somehow use their Reading First funds in combination with other resources to maximize the number of students reading at grade level. In contrast, two states gave priority to districts that had not received other grant funding. In addition, two states gave priority to districts based on the population of Native Americans or students with limited English proficiency. After applying selection criteria, states awarded Reading First sub-grants to about 34 percent of eligible districts, or nearly 1,200 school districts, for states’ first school year of funding. This represented about 56 percent of the 2,100 eligible districts that applied and nearly 7 percent of all school districts nationwide for states’ first school year of funding (see fig. 3). Some states reported difficulty in implementing key aspects of the Reading First program.
Twenty-two states reported that it was either difficult or very difficult to help districts with reading scores that had not improved sufficiently. Officials from one state said that this was difficult because it requires close examination of students’ reading deficiencies and the commitment of school leadership. Officials from another state reported some difficulty in improving selected reading skills of students with limited English proficiency, who are concentrated in pockets around the state. Seventeen states reported that it was either difficult or very difficult to assess how districts applied SBRR in choosing their reading program. Finally, seven states reported difficulty implementing four or more of six key program aspects listed in our survey and shown in figure 4. Officials from one of these states told us that the difficulty with implementation was due to the newness of the program, for which everything had to be developed from scratch. On the other hand, states reported ease in implementing other key aspects. In particular, 28 states reported that it was easy or very easy to determine whether districts’ applications met criteria for awarding sub-grants. For example, states are required to determine whether districts will adhere to the key components of the program, such as developing a professional development program or using reading assessments to gauge performance. Several states we interviewed suggested that it was easy to make this determination because some of the Reading First requirements were already in place in their states before Reading First was implemented. For example, some state officials we interviewed mentioned using reading assessments prior to Reading First. In addition, officials in one state told us that they already had a professional development program in place to train teachers on the state’s reading program. Twenty-four states reported that it was easy or very easy to identify reading programs based on SBRR.
Education officials provided states with a wide variety of guidance, assistance, and oversight, but Education lacked written procedures to guide its interactions with the states and provided limited information on its monitoring procedures. Education’s guidance and assistance included written guidance, preparatory workshops, feedback during the application process, and feedback from monitoring visits. Additionally, guidance and assistance were provided by Education’s contractors, including the regional technical assistance centers. For the most part, state officials characterized the guidance and assistance they received from Education officials and contractors, especially the regional technical assistance centers, as helpful or very helpful, and many also reported relying on the expertise of Reading First officials in other states. However, Education lacked controls to ensure that its officials did not endorse or otherwise mandate or direct states to adopt particular reading curricula. For example, according to state officials, Education officials and contractors made suggestions to some states to adopt or eliminate certain reading programs, assessments, or professional development providers. In addition, some state officials reported a lack of clarity about key aspects of the annual monitoring process, including time frames and expectations of states in responding to monitoring findings. Education provided a variety of written and informal guidance and assistance to states to help them prepare their applications. For example, in April 2002, three months after the enactment of NCLBA in January 2002, Education issued two key pieces of written guidance to states pertaining to the Reading First program and grant application process: the Guidance for the Reading First Program and the Criteria for Review of State Applications. Education officials also sponsored three Reading Leadership Academies in the early part of 2002.
The Academies were forums for state education officials to obtain information and build their capacity to implement key aspects of the Reading First program, including professional development and the application of SBRR. Education contracted with RMC Research Corporation to provide technical assistance to states related to the grant application process. States reported seeking guidance from RMC on various aspects of the Reading First application, in particular the use of instructional assessments (17 states) and instructional strategies and programs (14 states). Throughout the application process, both Education and RMC officials were available to address states’ questions. In particular, Education officials provided feedback to states on the results of expert review panel evaluations of their applications. Consequently, a large number of states reported that Education required them to address issues in their applications, most commonly related to the use of instructional assessments (33 states) and instructional strategies and programs (25 states). See figure 5 for issues raised about state applications. Forty-eight states reported that they needed to modify their applications at least once, and 27 reported doing so three or more times. Once grants were awarded, Education continued to provide assistance and contracted with RMC Research to oversee three regional TACs to help states implement Reading First. RMC established three TACs affiliated with state university centers in Florida, Texas, and Oregon, which RMC and TAC officials told us were selected based on their expertise in one or more areas central to the success of the Reading First program, such as professional development or reading assessment. Each technical assistance center was responsible for providing comprehensive support to each of the states in its geographic region (see fig. 6).
States reported that they looked to these centers for guidance on a variety of issues, especially creating professional development criteria, using reading assessments, and helping districts with reading scores that had not improved sufficiently. According to TAC staff, some of the most common requests they received pertained to the use and interpretation of assessment data and the use of Reading Coaches. TAC staff also told us that they cataloged recurring issues and problems. In addition, according to one RMC official and some state officials, the TACs provided support to states during implementation to help them supplement their capacity and expertise in evaluating whether reading programs proposed by districts were based on SBRR. For instance, staff from the TAC in Florida explained that some states in their region had asked for assistance in evaluating reading programs that had been in use prior to Reading First to gauge their compliance with the requirements of Reading First. Staff from the TAC emphasized that, in reviewing these reading programs, they used each state’s approved state plan as the criteria for determining compliance with Reading First requirements. Officials in one state explained that while the staff at their state educational agency (SEA) possessed the knowledge necessary to conduct reviews of reading programs, scarce state staff resources would have made it difficult to complete the reviews in the short time frame available. Though Education officials were aware of and initially condoned the TAC review process, Education officials advised all TACs to discontinue reviews of programs, to avoid the appearance of impropriety, after allegations were raised about Reading First officials expressing preference for specific reading programs. (Table 3 provides a summary of the types of guidance and assistance provided by Education and its contractors.)
During the application and implementation phases of the Reading First program, many states came to rely on other unofficial sources of guidance, including other states’ Reading First officials, in addition to the written guidance provided by Education. For example, as noted earlier, among the 22 states that had an approved list of reading programs for Reading First districts, 15 reported using A Consumer’s Guide to Evaluating a Core Reading Program to assist them in reviewing potential reading programs. In addition, officials from 21 states reported that other states’ Reading First Coordinators provided great or very great help during the Reading First state grant application process. Further, a number of state officials reported using information from other states’ websites, such as approved reading programs, to help inform their own decisions pertaining to the selection of reading programs. One state official explained, “With our limited infrastructure and dollars, we were never able to muster the resources needed to run an in-house programs review,” adding, “It worked well for us to use the programs and materials review results from larger states that ran rigorous review processes.” Another state official reported that the state did not feel equipped to apply the principles of SBRR in evaluating reading programs and responded by comparing one state’s review and subsequent list of reading programs to those of a few other states to make judgments about allowable programs. Most states reported making use of and being satisfied with the primary sources of guidance available to them over the course of the Reading First application and implementation processes. For example, 46 states reported making use of the two key pieces of Education’s written guidance in preparing their Reading First applications.
A majority of states also reported that these pieces of guidance provided them with the information needed to adequately address each of the key application components. For example, over 40 states reported that the guidance related to the definition of sub-grant eligibility and selection criteria for awarding sub-grants helped them adequately address these areas in their application. However, officials in eight states reported that the guidance on the use of instructional assessments did not provide them with the information needed to adequately address this area. (See fig. 7.) Overall, most state officials were also satisfied with the level of assistance they received from Education staff and their contractors in addressing issues related to the Reading First application and implementation processes. For example, state officials in 39 states reported that Education staff were of great or very great help during the application or implementation process. Additionally, officials from 48 states reported that Education officials were helpful or very helpful in addressing states’ implementation-related questions, which frequently dealt with using reading assessments and helping districts with reading scores that had not improved sufficiently. A number of state officials reported to us that they appreciated the guidance and attention they received from Reading First officials at Education. For example, one state Reading First Coordinator reported, “the U.S. Department of Education personnel have been wonderful through the process of implementing Reading First. 
I can’t say enough about how accessible and supportive their assistance has been.” Another state official remarked that the state’s efforts to make reading improvements “would have been impossible without their [Education officials and contractors] guidance and support.” Even officials from one state who had a disagreement with Education over its suggestion to eliminate a certain reading program characterized most of the guidance they received from Reading First officials as “excellent.” However, one state official reported feeling that the technical assistance workshops had served as conduits for Education officials to send messages about the specific reading programs and assessments they preferred. Another state official reported that “core programs and significant progress have not been defined” and that “SBRR programs are not clearly designated.” According to responses to our survey, the three TACs also provided a resource for states seeking advice on issues pertaining to the implementation of their Reading First programs. Specifically, 41 states cited the Centers as helpful or very helpful in addressing states’ inquiries related to the implementation of Reading First. In addition, on a variety of key implementation components, more state officials reported seeking information from their regional TACs than from Education officials (see table 4). We found that Education developed no written guidance, policies, or procedures to direct or train Education officials or contractors regarding their interactions with the states. Federal agencies are required under the Federal Managers’ Financial Integrity Act of 1982 to establish and maintain internal controls to provide reasonable assurance that agencies achieve objectives of effective and efficient operations, reliable financial reporting, and compliance with applicable laws and regulations.
When executed effectively, internal controls help ensure compliance with applicable laws and regulations by putting in place an effective set of policies, procedures, and related training. We found that Education had not developed written guidance or training to guide managers on how to implement and comply with statutory provisions prohibiting Education officials from directing or endorsing state and local curricular decisions. Department officials told us that it was their practice that program managers should consult the Office of General Counsel if they had questions regarding interactions with grantees. Reading First officials told us that their approach was to approve each state’s method and rationale for reviewing or selecting reading programs as outlined in each state’s plan, and that state compliance with program requirements, including adherence to the principles of SBRR, would then be assessed using the provisions of these plans as the criteria. Similarly, officials from Education’s contractors responsible for conducting monitoring visits told us that they were instructed by Education to use state plans as the criteria for gauging states’ compliance with Reading First reading program requirements, but that they were provided no formal written guidance or training. A senior Education attorney currently working with Reading First program officials told us that he had not been aware of this approach and believed that the statutory requirements should also play an important role in the monitoring process. Following the publication of the IG’s report in September 2006, Education’s Office of General Counsel has provided training to senior management on internal control requirements and has begun working with the Reading First office to develop procedures to guide the department’s activities.
Despite the statutory prohibition against mandating or endorsing curricula, and despite the department’s stated approach of relying on state plans, and the processes articulated in them, to assess compliance, states reported to us several instances in which Reading First officials or contractors appeared to intervene to influence their selection of reading programs and assessments. For example, officials from four states reported receiving suggestions from Education or its contractors to adopt specific reading programs or assessments. Specifically, two states reported that it was suggested that they adopt a particular reading assessment. Similarly, Education’s first IG report documented one instance in which Reading First officials at Education worked in concert with state consultants to ensure that a particular reading program was included on that state’s list of approved reading programs. In addition, states reported that Education officials or contractors suggested that they eliminate specific reading programs or assessments related to Reading First. Specifically, according to our survey results, officials from 10 states reported receiving suggestions that they eliminate specific programs or assessments. In some cases, the same program was cited by officials from more than one state. In one instance, state officials reported that Education officials alerted them that expert reviewers objected to a reading program that was under consideration but not named explicitly in the state’s application. An official from a different state reported receiving suggestions from Education officials to eliminate a certain reading program, adding that Education’s justification was that it was not aligned with SBRR.
In another instance, state officials pointed out that they had adopted a program that was approved by other states, according to the procedures in their approved state plan, but were told by Education officials that it should be removed from their list and that Education would subsequently take similar action with regard to those other states as well. Also, Education officials did not always rely on the criteria found in state plans as the basis for assessing compliance. We found, for example, one summary letter of findings from a monitoring report in which Education officials wrote that “Two of the monitored districts were implementing reading programs that did not appear to be aligned with scientifically based reading research.” Officials we spoke to in that state told us that they did not feel they had been assessed on the basis of the procedures outlined in the state’s plan, but rather that the reading program itself was being called into question. The IG also found that Reading First officials advised several states against the use of certain reading programs or assessments, including Rigby and Reading Recovery. Officials from a few states also reported being contacted by Education regarding district Reading First applications or reading programs. For example, officials from four states reported being contacted by an Education official about a district application under consideration, and one of those states also reported being approached by staff from one of the regional technical assistance centers or another contractor for the same reason. Officials from each of these states indicated that the reason they were contacted stemmed from the reading programs being used by the districts in question. In a few cases, state officials reported being contacted by Education officials regarding the state’s acceptance of a reading program or assessment that was not in compliance with Reading First.
In one instance, state officials reported that Education contacted them outside of the normal monitoring process after obtaining information from a national Reading First database, maintained by a nonprofit research organization, indicating that districts in the state were using a specific reading program. Five states also reported receiving recommendations from Reading First officials or contractors to change some of the professional development providers proposed in their original grant applications. When asked about the specific providers identified for elimination, three of the states indicated that these providers were in-state experts. In one case, a state was told that the review panel cited a lack of detail about the qualifications of the state’s proposed professional development consultants. We also found that while Education officials laid out an ambitious plan to annually monitor every state, they failed to develop written procedures guiding monitoring visits. For example, Education did not establish timelines for submitting final reports to states following monitoring visits, nor did it specify how and when state officials were expected to follow up with Education officials regarding findings from those visits. As a result, states did not always understand monitoring response procedures, timelines, and expectations. While most state officials we spoke with understood that they were to be monitored with their state plans as the criteria, they did not always understand what was required of them when responding to monitoring findings. For example, one state official reported being unaware that the state was supposed to respond to Education officials about findings from its monitoring report. An official from another state maintained that the process the state was to follow in responding to findings was unclear and that no timeline for responding had been provided.
Furthermore, one state reported that findings were not delivered in a timely manner, and another state reported that Education did not address the state’s responses to the monitoring findings. Key aspects of an effective monitoring program include communicating to the individuals responsible for the function any deficiencies found during monitoring. The Reading First program, according to state coordinators, has brought about changes and improvements to the way teachers, administrators, and other education professionals approach reading instruction for children in at-risk, low-performing schools during the critical years between kindergarten and third grade. To assist states in implementing this large, new federal reading initiative, Education has provided a wide range of guidance, assistance, and oversight that, for the most part, states have found helpful. However, Education failed to develop comprehensive written guidance and procedures to ensure that its interactions with states complied with statutory provisions. Specifically, Education lacked an adequate set of controls to ensure that Reading First’s requirements were followed while, at the same time, ensuring that it did not intervene in state and local curricular decisions. We concur with the Education IG’s recommendations that the department develop a set of internal procedures to ensure that federal statutes and regulations are followed, and we believe it is important for the Secretary to follow up on these recommendations to ensure that they are properly implemented. Additionally, we believe it is important for the department to have clear procedures in place to guide departmental officials in their dealings with state and local officials. While Education’s stated approach was to rely on state plans as its criteria for enforcing Reading First’s requirements, states reported several instances in which it appears that Education officials attempted to direct or endorse state and local curricular decisions.
Such actions would prevent states from exercising their full authority under the law and would violate current statutory restrictions. Balancing Reading First’s requirements and the limits placed on the department requires Education to have clear, explicit, and well-documented procedures to guide its interactions with the states. Failure to do so places the department at risk of violating the law and leaves it vulnerable to allegations of favoritism. Additionally, while Education’s annual monitoring effort for Reading First is ambitious, the department did not provide clear guidelines and procedures to states. As a result, states were not always aware of their roles and responsibilities in responding to findings of non-compliance, and Education was not always consistent in its procedures to follow up with states to resolve findings and let states know if they had taken proper actions. Key aspects of an effective monitoring program include transparency and consistency. Letting all states know in a timely manner whether their plans to address deficiencies are adequate is important to ensure that findings are dealt with in an appropriate, timely, and clear manner. In addition to addressing the IG’s recommendations to develop internal (1) policies and procedures to guide program managers on when to solicit advice from General Counsel and (2) guidance on the prohibitions imposed by section 103(b) of the DEOA, we recommend that, in order to ensure that the department complies with statutory prohibitions against directing, mandating, or endorsing state and local curricular decisions, the Secretary of Education also establish control procedures to guide departmental officials and contractors in their interactions with states, districts, and schools. In addition, to help the department conduct effective monitoring of the Reading First program, we recommend that the Secretary of Education establish and disseminate clear procedures governing the Reading First monitoring process.
In particular, Education should delineate states’ rights and responsibilities and establish timelines and procedures for addressing findings. We provided a draft of this report to the Department of Education and received written comments from the agency. In its comments, included as appendix III of this report, Education agreed with our recommendations and indicated that it will take actions to address them. Specifically, Education said it will provide written guidance to all departmental staff to remind them of the importance of impartiality in carrying out their duties and not construing program statutes to authorize the department to mandate, direct, or control curriculum and instruction, except to the extent authorized by law. On February 7, 2007, the Secretary of Education issued a memorandum to senior officers reminding them that it is important to maintain objectivity, fairness, and professionalism when carrying out their duties. The Secretary’s memorandum also emphasizes the importance of adhering to the statutory prohibitions against mandating, directing, and controlling curriculum and instruction, and strongly encourages managers to consult with Education’s Office of General Counsel early on to identify and resolve potential legal issues. Also, according to Education’s written comments on our draft report and the Secretary’s February 7, 2007, memorandum to senior officers, annual training will be required on internal controls and this training will address statutory prohibitions against mandating, directing or controlling local curriculum and instruction decisions. Regarding its monitoring process for Reading First, in its comments, Education said that it will develop and disseminate guidelines to states outlining the goals and purposes of its monitoring efforts, revise the monitoring protocols, and develop timelines and procedures on states’ rights and responsibilities for addressing monitoring findings. 
Education also included in its response a summary of its actions and planned actions to address recommendations from the department’s Office of Inspector General’s recent report on the implementation of the Reading First program. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. Copies will also be made available upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about the report, please contact me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objective was to answer the following questions: (1) What changes have occurred to reading instruction since the inception of Reading First? (2) What criteria have states used to award Reading First sub-grants to districts, and what, if any, difficulty did states face in implementing the program? (3) What guidance, assistance, and oversight did Education provide states related to the Reading First program? To answer these questions, we collected both qualitative and quantitative information about the Reading First program from a variety of sources. We conducted a Web-based survey of the Reading First Directors in all 50 states and the District of Columbia. We also obtained and analyzed data from the Department of Education for each state on Reading First districts’ eligibility, applications, and awards for states’ first school year of funding. The first school year of funding varied across states. Twenty-five states received their first year of funding in the 2002-2003 school year.
Twenty-five states received their first year of funding in the 2003-2004 school year. To assess the reliability of these data, we talked to agency officials about data quality control procedures and reviewed relevant documentation. We excluded two states because of reporting inconsistencies but determined that the data for the other states were sufficiently reliable for the purposes of this report. We also conducted semi-structured follow-up interviews with Reading First Directors in 12 states, mostly over the telephone. We conducted site visits to 4 of the 12 states. During the site visits, we met with state officials, local program administrators, and state-level technical assistance providers, as well as school officials from individual schools, including teachers, principals, and Reading First coaches. In identifying local sub-grant recipients to meet with in each state, we sought to incorporate the perspectives of urban, rural, and suburban school districts. We selected the 12 states to have diversity in a variety of factors, including geographic distribution, grant size, poverty rate, percentage of students reading at or below grade level, urban and rural distinctions, the presence of a statewide list of approved reading programs, and whether states had reported that they received guidance from Education officials advocating for or against particular reading programs or assessments. For both the survey and follow-up interviews, to encourage candid responses, we promised respondents confidentiality. As a result, state survey responses are provided primarily in summary form or credited to unnamed states, and the states selected for follow-up interviews are not specifically identified. Furthermore, in order to adequately protect state identities, we are unable to provide the names of particular reading programs or assessments Education officials or contractors suggested a state use or not use.
We did not attempt to verify allegations made by state or local officials in their survey responses or during interviews or otherwise make any factual findings about Education’s conduct. We also visited or talked with administrators from each of the three regional Reading First Technical Assistance Centers, located in Florida, Texas, and Oregon, as well as RMC Research, the federal contractor tasked with administering the contract with the technical assistance centers. We also interviewed several publishers and other providers of reading curricula and assessments to obtain their views about changes Reading First has prompted in states, districts, and schools. We chose these providers to reflect the perspectives of large, commercial reading textbook programs that are widely represented nationwide on states’ lists of approved programs, as well as some other selected providers of reading curricula, including some that have filed complaints related to Reading First. We also interviewed Education officials about the implementation of the Reading First program. To obtain a better understanding of state program structure, as well as the nature of interactions between Education officials and state grantees, we reviewed state grant files, monitoring reports, and related correspondence for the 12 states where we conducted follow-up interviews. In addition, we reviewed NCLBA language authorizing Reading First, as well as statements of work articulating the responsibilities of the regional technical assistance centers and the contractor tasked with providing assistance to states in conducting local sub-grant competitions. We conducted our work from December 2005 through January 2007 in accordance with generally accepted government auditing standards. To better understand state implementation of the Reading First program, we designed and administered a Web-based survey of the Reading First Directors in all 50 states and the District of Columbia.
The survey was conducted between June and July 2006 with 100 percent of state Reading First Directors responding. The survey included questions about curriculum; professional development; and state Reading First grant eligibility, application, award, and implementation processes. The survey contained both closed- and open-ended questions. For the open-ended questions, we used content analysis to classify and code the responses from the states, such as the publishers on states’ approved lists. We had two people independently code the material, then reconciled any differences in coding. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and in their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pre-testing draft instruments and using a Web-based administration system. Specifically, during survey development, we pre-tested draft instruments with one expert reviewer and Reading First Directors in four states during April and May 2006. In the pre-tests, we were generally interested in the clarity of the questions and the flow and layout of the survey. For example, we wanted to ensure that definitions used in the survey were clear and known to the respondents, that categories provided in closed-ended questions were complete and exclusive, and that the ordering of survey sections and the questions within each section was appropriate. On the basis of the pre-tests, the Web instrument underwent some slight revisions. A second step we took to minimize nonsampling errors was using a Web-based survey. By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the need for, and the errors and costs associated with, a manual data entry process.
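The two-coder content-analysis step described above can be sketched as follows. The responses and code categories here are invented for illustration and are not GAO’s actual data or tooling:

```python
# Hypothetical sketch of dual independent coding with reconciliation:
# two coders label each open-ended survey response, and any response
# they label differently is flagged for discussion until they agree.
responses = [
    "We adopted a core program from a commercial publisher.",
    "Our main investment was in professional development.",
    "Coaches reviewed assessment data with teachers monthly.",
]
coder_a = ["curriculum", "prof_dev", "assessment"]  # invented codes
coder_b = ["curriculum", "prof_dev", "coaching"]

# Indexes where the two coders disagree and must reconcile.
flagged = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
for i in flagged:
    print(f"Reconcile response {i}: {coder_a[i]!r} vs {coder_b[i]!r}")
```

The reconciled codes would then become the categories tallied in the survey results, such as counts of states citing each type of guidance.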
To further minimize errors, programs used to analyze the survey data were independently verified to ensure the accuracy of this work.

Reading programs under Reading First must include rigorous assessments with proven validity and reliability. Assessments must measure progress in the five essential components of reading instruction and identify students who may be at risk for reading failure or who are already experiencing reading difficulty. Reading programs under Reading First must include screening assessments, diagnostic assessments, and classroom-based instructional assessments of progress.

Bryon Gordon, Assistant Director, and Tiffany Boiman, Analyst-in-Charge, managed this engagement and made significant contributions to all aspects of this report. Sonya Phillips, Sheranda Campbell, Janice Ceperich, and Andrew Huddleston also made significant contributions. Jean McSween provided methodological expertise and assistance. Sheila McCoy and Richard Burkard delivered legal counsel and analysis. Susannah Compton, Charlie Willson, and Scott Heacock assisted with message and report development.
|
The Reading First program was designed to help students in kindergarten through third grade develop stronger reading skills. This report examines the implementation of the Reading First program, including (1) changes that have occurred to reading instruction; (2) criteria states have used to award sub-grants to districts, and the difficulties, if any, states faced during implementation; and (3) the guidance, assistance, and oversight the Department of Education (Education) provides states. GAO's study is designed to complement several studies by Education's Inspector General (IG) by providing a national perspective on some of the specific issues the IG is studying. For this report, GAO administered a Web-based survey to 50 states and the District of Columbia, and conducted site visits and interviews with federal, state, and local education officials and providers of reading programs and assessments. States reported a number of changes to, as well as improvements in, reading instruction since the implementation of Reading First. These included an increased emphasis on the five key components of reading (awareness of individual sounds, phonics, vocabulary development, reading fluency, and reading comprehension), assessments, and professional development, with more classroom time devoted to reading activities. However, according to publishers we interviewed, there have been limited changes to instructional material. Similarly, states reported that few changes occurred with regard to their approved reading lists. States awarded Reading First sub-grants using a variety of eligibility and award criteria, and some states reported difficulties with implementing key aspects of the program. After applying federal and state eligibility and award criteria, Education reported that over 3,400 districts were eligible to apply for sub-grants in the states' first school year of funding. 
Of these districts, nearly 2,100 applied for and nearly 1,200 districts received Reading First funding. Education officials made a variety of resources available to states during the application and implementation processes, and states were generally satisfied with the guidance and assistance they received. However, Education developed no written policies and procedures to guide Education officials and contractors in their interactions with state officials and guard against officials mandating or directing states' decisions about reading programs or assessments, which is prohibited by the No Child Left Behind Act (NCLBA) and other laws. Based on survey results, some state officials reported receiving suggestions from Education officials or contractors to adopt or eliminate certain reading programs or assessments. Similarly, the IG reported in September 2006 that the Department intervened to influence a state's and several school districts' selection of reading programs. In addition, while Education officials laid out an ambitious plan for annual monitoring of every state's implementation, they did not develop written procedures guiding monitoring visits and, as a result, states did not always understand monitoring procedures, timelines, and expectations for taking corrective actions.
|
The broad objectives of PFM are to achieve overall fiscal discipline, allocation of resources to priority needs, and efficient and effective delivery of public services, according to OECD. While donors may use different definitions of PFM, most definitions focus on a country’s budget cycle, the process used to manage public resources. The budget cycle centers on four main phases: (1) budget formulation, (2) budget execution, (3) accounting and reporting, and (4) external oversight (see fig. 1). OECD states that PFM includes all components of a country’s budget cycle. A budget cycle starts with the budget formulation process, in which the government, often with legislative oversight, plans for the use of the coming year’s resources in accordance with policy priorities. After the government approves the budget and the new fiscal year begins, programming agencies and the ministry of finance, or appropriate entity, are responsible for executing the budget. They use the resources allocated to them for items such as salaries for public servants, operating costs for their offices, and goods and services delivered to their beneficiaries. The ministry of finance, or equivalent, manages the flow of funds and monitors spending and makes in-year adjustments to help ensure compliance with the budget and PFM rules. Throughout the fiscal year, each programming agency is to account for and record its expenditures. The ministry of finance centrally consolidates these accounts. At the end of the fiscal year, the ministry of finance, or equivalent, issues an accounting report that demonstrates how the budget was implemented. External or independent entities, such as a country’s supreme audit institution, may review this report. The audit institution reviews the government’s revenue collection and spending and issues its own statement on the execution of the budget and the strength of the PFM systems. 
In many countries, the institution presents this audit report to an appropriate government entity for further scrutiny and action. PFM processes involve a number of governmental entities. While the ministry of finance is generally the focus of a country’s PFM system, PFM extends to all public ministries that are charged with delivering services or have spending authority. Civil society, donors, and oversight institutions, such as a country’s supreme audit institution, also help to ensure the proper management of public funds through external scrutiny and review. Broad PFM assessments conducted by international organizations report improvements in countries’ PFM systems, as well as continued weaknesses. Several international organizations, including the Public Expenditure and Financial Accountability (PEFA) program, World Bank, International Budget Partnership, and Transparency International, have developed tools to assess broad aspects of PFM systems. These assessments have highlighted varying levels of progress in improving PFM systems around the world, and the last two have identified relatively low levels of transparency in many countries’ budgets and high levels of corruption, respectively. More specifically: PEFA: A 2010 monitoring report on the PEFA program, released by the PEFA Secretariat, found that while PFM systems were improving overall, progress varied among elements of countries’ systems. The PEFA Secretariat’s analysis, based on a comparison of 33 repeat PEFA assessments from 2005 through 2010, showed that in most of these countries improved scores outnumbered worsened scores, indicating a broad trend of PFM improvement across the countries surveyed. According to the analysis, country actions taken to strengthen PFM features in the earlier stages of the budget cycle, such as planning, are more likely to improve or maintain a high score than actions taken later in the budget cycle, such as control and oversight of actual spending. 
World Bank: According to our analysis of the World Bank’s quality of budgetary and financial management indicator scores from 2005 through 2010, slightly more than one-third of the countries assessed showed improvements in the quality of their PFM systems, while one-quarter of the countries showed a worsening in the quality of their PFM systems. International Budget Partnership: In its 2010 open budget survey of 94 countries, the International Budget Partnership concluded that the state of budget transparency was poor. Only about 21 percent of the countries surveyed had open budgets, while 44 percent of countries provided limited to no budget information. Nonetheless, the survey found the trend toward open budgets was favorable, based on substantial improvements in budget transparency, especially among countries that had provided little information in the past. Some of these governments achieved improvements by simply making budget documents available on their websites. Transparency International: The organization’s Corruption Perceptions Index showed that nearly three-quarters of the 178 countries in the index scored below 5 on a scale of 10 (very clean) to 0 (highly corrupt), indicating what it categorizes as a serious corruption problem. For further information on each organization’s assessment tools and selected results, including the percentile rankings of our six case study countries in selected PFM diagnostic tools, see appendix II. Donors and recipient governments have increased their attention to strengthening PFM systems, recognizing that strong and effective PFM systems underpin fiscal and macroeconomic stability, guide the allocation of public resources to national priorities, support the efficient delivery of services for poverty reduction and economic development, and make possible the transparency and scrutiny of public funds. 
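The repeat-assessment comparisons described above (counting improved versus worsened indicator scores between a baseline and a follow-up assessment) can be illustrated with a short sketch. This is not the PEFA Secretariat's actual methodology; it is a simplified, hypothetical example using PEFA-style letter scores, where A is strongest and D weakest, and invented indicator names.

```python
# Illustrative sketch (not the PEFA Secretariat's actual method) of
# comparing a country's baseline and repeat assessment scores to count
# improved, worsened, and unchanged indicators. Indicator names and
# scores below are invented for demonstration.

RANK = {"A": 4, "B": 3, "C": 2, "D": 1}  # simplified; real PEFA scores also use "+" modifiers

def score_changes(baseline, repeat):
    """Count indicators that improved, worsened, or stayed the same."""
    improved = worsened = unchanged = 0
    for indicator, old in baseline.items():
        new = repeat[indicator]
        if RANK[new] > RANK[old]:
            improved += 1
        elif RANK[new] < RANK[old]:
            worsened += 1
        else:
            unchanged += 1
    return improved, worsened, unchanged

baseline = {"budget_credibility": "C", "transparency": "D", "external_audit": "B"}
repeat = {"budget_credibility": "B", "transparency": "C", "external_audit": "B"}
print(score_changes(baseline, repeat))
```

A country with more improved than worsened counts would register as improving overall, which is the kind of tally the Secretariat's 33-country comparison aggregated.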
In 2003, OECD’s Development Assistance Committee established a Working Party on Aid Effectiveness, which has played a critical role in establishing an initial international donor coordination framework and setting goals for donors and aid recipient countries to strengthen PFM. The working party has sponsored four global development forums since 2003. The second forum, held in Paris in 2005, resulted in the Paris Declaration on Aid Effectiveness, which included broad commitments by recipient governments and donors to strengthen PFM systems and use those systems as appropriate. More than 100 countries and aid agencies, including the United States, endorsed the declaration. By signing the declaration, recipient governments made clear commitments to strengthen their systems to the maximum extent possible, and donor governments made clear commitments to use those systems wherever possible. These commitments were renewed and refined in the subsequent forum in Accra, Ghana, in 2008. The most recent forum was held in Busan, South Korea, in 2011. (See table 1.) USAID and Treasury are the two main U.S. agencies providing PFM-related assistance. Combined, the two agencies have PFM-related projects in 70 countries in all regions of the world, as shown in figure 2. State also conducts some PFM-related activities, although it has not funded programs. See appendix III for details on State’s PFM activities. USAID provides PFM capacity-building activities through its development programs and is seeking to provide more of its assistance through recipient countries’ financial systems. Capacity-building activities to strengthen PFM systems are typically part of broader democracy and governance (DG) or economic growth (EG) programs. PFM activities that are included as components of DG programs typically address the areas of legislative function and processes, public sector executive function, local government decentralization, and anticorruption reforms. 
PFM activities included as components of EG programs typically address fiscal and monetary policy issues. USAID has identified DG and EG programs with PFM components in over 60 countries since 2007. However, according to a USAID official, USAID cannot determine the total funding for PFM activities because it does not collect data at a sufficiently detailed level to precisely identify PFM activities. Therefore, the official reported that USAID is unable to separate PFM assistance from other assistance in larger DG and EG development programs. In 2011, total funding for DG programs was $1.7 billion, and for EG programs, $4.2 billion. For illustrative examples of USAID PFM-related projects, see appendix IV. In addition to PFM capacity-building programs, USAID is seeking to increase the use of recipient country PFM systems to deliver assistance. As part of its implementation and procurement reforms under its institutional reform agenda, known as USAID Forward, USAID has announced an agency target of obligating 30 percent of its annual assistance through local systems, including both partner country PFM systems and local not-for-profit and for-profit organizations, by fiscal year 2015. Treasury’s OTA provides technical assistance through advisors who work in-country within the finance ministry, the central bank, or other government entities. OTA’s program consists of five core areas, as follows: Budget and Financial Accountability helps foreign governments reform and strengthen their PFM systems in order to promote control, accountability, and transparency over resources, and to improve a country’s overall financial condition. Banking and Financial Services promotes strong financial sectors in which institutions are well regulated, stable, and accessible; serve as efficient intermediaries between savers and investors; and are resistant to criminal activity. 
Government Debt Issuance and Management helps host countries implement sound public debt management practices and develop markets through which the government can finance itself. Economic Crimes helps counterpart governments build their capacity to prevent, detect, investigate, and prosecute complex financial crimes. Revenue Policy and Administration provides assistance to ministries of finance and other relevant organizations that strengthens their ability to serve the country and its people through the efficient and responsible collection of revenues. OTA provides both long-term resident and intermittent advisors. Long-term resident advisors provide advice and training to ministers of finance, central bank governors, and other government officials. Country engagement typically lasts between 2 and 10 years, according to an OTA official. Intermittent advisors provide highly specialized technical assistance in support of long-term projects, projects requiring several different specialties, and projects of short duration. According to an OTA official, OTA currently has about 70 advisors in roughly 50 countries. In 2011, OTA’s total funding was $44.5 million, with $25.5 million in direct OTA appropriations and $19 million in transfers from other agencies, including State, USAID, and the Millennium Challenge Corporation. For illustrative examples of OTA technical assistance projects, see appendix IV. USAID is implementing new processes for developing programs that reflect new agency reform priorities to increase the use of country systems to deliver U.S. assistance. USAID’s work in PFM has traditionally involved capacity building under broader programs designed to improve fiscal management and promote good governance. According to agency officials, USAID has increased its attention to PFM issues. 
USAID’s new country strategy development and project design processes include various analyses and assessments that may identify opportunities to strengthen and use countries’ PFM systems, as prioritized by USAID’s reform agenda. In these new processes, USAID may identify the need for PFM assistance during countrywide development assessments or through other required assessments. Furthermore, according to USAID guidance, country offices are to develop these assistance programs in collaboration with country stakeholders throughout the program development process. USAID’s new processes are similar in structure and approach to prior processes, but, according to USAID, aim to incorporate more analytical rigor at all stages of the strategic planning framework so that USAID’s efforts are better aligned with the recipient country’s development efforts. USAID may identify the need for PFM assistance during the country strategy development process. According to USAID, most country offices are required to develop a Country Development Cooperation Strategy, a 5-year strategy document, by the end of fiscal year 2013. USAID’s latest draft of the guidance for developing the country strategy, released in September 2011, states that the strategy should demonstrate how it is integrating the goals of USAID’s assistance reform effort, USAID Forward, such as working through host-country systems and developing the capacity of civil society and private sector partners. In developing the country strategy document, the USAID country office is to consult with various country stakeholders and conduct several assessments to understand the development context and develop goals and objectives. According to the guidance, the country office is required to develop the strategy document with a focus on maximizing the impact of USAID resources and building the capacity of specific institutions and related governance systems at the national, regional, and local levels. 
For example, if a USAID country office determines that a nontransparent and inefficient financial system is a key obstacle to economic growth, the country office is to work with the host government to improve its capacity for sound financial management and equitable allocation of resources. According to USAID officials, one indicator of need for a PFM project would be an assessment that the country has a major fiscal imbalance that needs to be corrected by a combination of increased revenue mobilization or reduction of budget expenditures. Another indicator would be a determination, arrived at through repeated reporting on corruption in the media, or internal and external publications, and supported by stakeholder interviews, that anticorruption programs would improve PFM. The goals and objectives outlined in the country strategy are to provide the basis for project design, monitoring, and evaluation. As of June 2012, USAID stated it had approved 15 country strategies, and 73 USAID country offices are scheduled to complete a strategy by October 2013. See figure 3 for highlights of the elements of USAID’s new strategy and project design processes that could identify the need for PFM capacity building. According to the new USAID guidance on project design, after USAID approves the strategy document and identifies the need for PFM-related assistance, USAID country offices are to begin the project design phase. This phase is to begin with the formation of a design team that is responsible for the project’s development from planning to implementation. Overall, the project design phase is to comprise three stages, as follows: Conceptual stage. During the conceptual stage, the project design team is to conduct stakeholder outreach, assessments, analysis, and implementation planning. 
This stage results in a concept paper, which provides a summary of a proposed project that country-office management can review to assess how the project aligns with the country strategy, the likelihood of success, and the assumptions underlying project success. After USAID management approves the concept paper, the design team transitions to the analytical stage. Analytical stage. During the analytical stage, the design team seeks to understand the problem or constraints identified during the conceptual stage, and identify and assess critical assumptions. The team is to conduct a series of targeted assessments, including required gender, environmental, and sustainability analyses and other social, political, and institutional analyses. Of these analyses, we identified two that pertain directly to PFM systems, as follows: Sustainability analysis. The sustainability analysis assesses the partner government’s ability to manage the continuation of the project after the project has concluded. According to USAID guidance, to build sustainability into a project, the design should consider how the country office will increase the skills and capacity of local stakeholders involved in maintaining development gains after the project ends—as well as how USAID will ensure that relevant activities or services are gradually tied to sustainable financing models through private sector participation or through sustainable, publicly managed arrangements and government processes. Institutional analysis. An institutional analysis is an in-depth assessment of the local institutions and systems most critical to the implementation of the project’s development interventions, including an assessment of the quality of their leadership, structure, and staff; and identification of their administrative and financial management strengths and weaknesses. 
Using the analysis, USAID is to develop a plan for project activities that are necessary and sufficient to bring these institutions up to the level of performance or engagement as partners appropriate for their roles in the project’s implementation and their eligibility for direct USAID funding. According to USAID guidance, the analytical stage results in a project authorization document that will be the basis for project implementation, adaptation, and evaluation, which includes a summary of the analyses that underlie the rationale for the project design. Approval stage. A successful project design results in an approved project authorization, which enables a project to move from the planning stage to implementation. The project authorization sets out the purpose and duration of the project, defines fundamental terms and conditions of the assistance when a partner country agreement is anticipated, and approves an overall total budget level, subject to the availability of funds. In addition to the above analyses, USAID has developed a PFM risk-assessment framework (PFMRAF) to measure the fiduciary risk, or the risk of funds being misspent or mismanaged, when USAID plans to provide aid through the country’s finance systems. USAID guidance commits country offices to offer, if appropriate, a USAID assessment of partner country PFM systems with the goal of providing funding for project implementation through the use of those systems. If the offer is accepted, the assessment must be carried out using the PFMRAF. Whenever possible, USAID should begin conducting the PFMRAF during the conceptual stage of the project design process. The PFMRAF consists of the following five stages:

1. The Rapid Appraisal provides a high-level snapshot of country-level fiduciary risks associated with the use of partner country PFM systems and helps inform decisions on whether to undertake a more rigorous, formal risk assessment.

2. The Risk Assessment, Analysis, Mitigation, and the Approval for Use of Partner Country Systems establishes the baseline level of risk corresponding to contemplated funding levels and identifies vulnerabilities of the partner country PFM sector in which USAID is considering use of the system for project implementation. USAID determines whether any systemic risk can be reasonably mitigated and, if so, what kind of mitigating measures might be introduced to reduce the risk. USAID establishes an accountability framework with a set of conditions that would, if complied with, constitute formal approval for the use of a partner country PFM system.

3. The Project Design, Approval, Designation of Responsibilities, and Selection of the Funding Mechanism incorporates the approval for use of country systems into the project design, along with risk mitigation measures (such as capacity-building technical assistance, concurrent audits, and disbursements in tranches, as appropriate) and the appropriate funding mechanism.

4. Negotiating and Preparing the Bilateral Project Agreement with the Partner Country Government involves the preparation of the bilateral agreement in accordance with the risk mitigation measures outlined in the approval for use of partner country systems and in collaboration with the partner country government.

5. Implementation, Monitoring, and Evaluation occurs once both countries agree to the bilateral agreement and as outlined in the project design.

According to USAID, the country office is to incorporate, into various stages of the project design, recommended actions to mitigate the risks identified in providing assistance directly to the host government. According to one USAID official, the country office’s decision to implement the recommended steps for mitigating risk may depend on the availability of funds. According to USAID, if risks cannot be mitigated, USAID is not to deliver assistance through the partner country’s financial systems. 
Mitigation steps may include adding a PFM component to a project, such as technical assistance to improve an aspect of the financial system. As of June 2012, USAID had completed the rapid assessment in 20 countries and was in the process of finalizing the report for an additional 12. For 4 of the countries for which USAID had completed the rapid assessment, USAID had decided to either delay or not to proceed to stage 2 for various reasons, including funding cuts and the political situation in the country. For the second stage of the PFMRAF, USAID had completed the risk assessment for 3 countries and was in the process of conducting the assessment for 11 others. USAID had completed four stages of the PFMRAF for 3 countries. Table 2 contains a summary of the status of countries where USAID has begun the PFMRAF process. Treasury OTA’s processes for developing PFM programs involve collaboration with country government officials whom Treasury has determined have a strong commitment to reform and are seeking to develop strong PFM, as well as OTA’s assessment of the country system. OTA receives requests for assistance from foreign governments, other U.S. government agencies, U.S. embassies, international organizations, and OTA advisors and other donors already working in countries on other projects. According to OTA officials, OTA receives two to three requests for assistance per month, but due to the agency’s limited financial resources and the financial commitment required for each project, it selectively chooses its projects. 
OTA’s principles for providing assistance are as follows: work with partners committed to ownership of their reform; advocate self-reliance by providing countries with knowledge and skills required to generate and manage their own resources and reduce dependence on international aid; and work with ministry staff daily to build capacity through mentoring and on-the-job training on PFM practices. After receiving the request, OTA begins a preliminary assessment process to identify weaknesses and areas for assistance, according to OTA officials. The preliminary assessment is primarily a review of available information on the country’s PFM system, which may include information from the U.S. embassy, as well as relevant country reports and assessments prepared by international financial institutions, other bilateral assistance providers, and nongovernmental organizations. After reviewing documents, if it decides to proceed with the assessment, OTA completes a more in-depth, in-country assessment of PFM, which consists of meetings with U.S. embassy staff, other donors, and relevant ministry officials, including the minister of finance or the head of the central bank, as well as high-level policy staff and mid-level supervisors. According to OTA officials, the purpose of the in-country assessment is to review the structural issues related to the country system and determine whether the country has dedicated staff committed to working toward its reform efforts. At any point during the assessment process, OTA may determine that OTA assistance is not suitable for the country’s needs. OTA officials told us that in one such case, government officials requested that OTA manage their budget for them, which is against OTA’s principles because doing so would not promote self-reliance or help the country develop the capacity to generate and better manage its own revenues. 
Finally, OTA officials told us that their decision on whether to provide assistance depends on whether sufficient funding is available. The in-country assessment typically results in a draft of official terms of reference, which identifies weaknesses to be addressed during OTA’s engagement. According to OTA officials, the terms of reference describes the broad goals of the project and represents agreement between Treasury and the host-country counterpart, the ministry staff with whom the advisor will work on a daily basis. For example, the 2010 terms of reference for a budget project in Honduras identified the following four agreed-upon areas for assistance: improve operational efficiency and enhance accountability by strengthening the organization of the ministry of finance; move toward compliance with International Public Sector Accounting Standards; enhance capacity to conduct fiscal analysis and produce more reliable macroeconomic estimates and revenue and expenditure projections; and conduct workshops on basic finance methods and terminology. The terms of reference also identifies the advisor’s host-country counterparts and agreements regarding each party’s responsibilities. According to OTA officials, after OTA fully vets a project and secures funding, the parties finalize and sign the terms of reference. However, OTA and its counterparts may seek to revise the terms of reference if the central purpose of the technical assistance project changes or during times of transition, such as when a finance minister changes. In Honduras, the 2010 terms of reference replaced an existing OTA project following a gap in OTA assistance during a political crisis in the Honduran government in 2009. The OTA advisor assigned to the country uses the broad goals expressed in the terms of reference to develop a work plan. 
The work plan contains specific objectives, milestones, and planned completion dates designed to work toward goals agreed upon in the terms of reference, and the plan must be developed within 60 days of the advisor’s arrival in country. Each work plan covers a 1-year period and is the primary basis for regular monthly progress reports to OTA headquarters. OTA considers work plans, and the monthly reports based on them, to be dynamic documents that reflect the project’s progress in real time, and advisors can and should change or modify the work plan during the course of a project as appropriate and in consultation with the ministry staff and OTA management. Moreover, according to OTA officials, a 1-week in-country assessment may not provide all of the information needed and may require that the advisor rewrite the plan after a few months in the country. While the work plan is considered a management tool to monitor projects, OTA officials told us that in recent years it has also become a joint document between the OTA advisor and ministry staff as a shared tool to monitor and discuss progress. To monitor PFM-related programs, USAID develops performance management plans, reviews periodic progress reports, and conducts site visits, among other activities, but reviews of USAID programs have identified agency-wide weaknesses in implementation, including using unreliable baseline data and inaccurate reporting of results. USAID develops the monitoring and evaluation framework during the project design, and it generally is to include the following: A performance management plan: This is a tool to plan and manage the process of monitoring, evaluating, and reporting progress toward achieving the project’s development objectives. The performance management plan includes performance indicators and targets that link to the project objectives. 
A work plan: This specifies activities to be undertaken and the proposed schedule for these activities during the life of the project. Periodic progress reports: USAID assistance agreements with nongovernmental organizations generally also require implementing partners to submit periodic progress reports, the frequency of which varies depending on the assistance agreement, but these reports may not be required more frequently than quarterly or less frequently than annually. The progress reports should generally contain a comparison of actual accomplishments with the goals and objectives established for the period and reasons why established goals were not met, if appropriate. Implementing partners should immediately notify USAID of developments that have a significant impact on the award-supported activities. Site visits: USAID guidance states that site visits are conducted as needed by the technical officers, who are responsible for monitoring. USAID works with the implementing partner to produce and approve the work plan and the performance management plan. USAID’s technical officer approves the plans. USAID guidance also requires staff to conduct at least one portfolio review each year that covers all activities included in their various programs. These reviews determine, among other things, whether the activities are achieving the desired results. Prior reviews of USAID programs have identified challenges in the agency’s implementation of its monitoring processes across many types of programs, including DG and EG programs. In its fiscal year 2011 memo on management and performance challenges, USAID’s Inspector General identified specific monitoring-related weaknesses as one of USAID’s most serious management and performance challenges. The USAID Inspector General reported problems with assistance planning in 25 out of 80 performance audits conducted in fiscal year 2011.
Assistance planning is important because it provides the means for program implementers to track progress toward program objectives and helps to ensure that USAID assistance programs achieve planned results. In addition, 37 of the 80 audit reports the Office of the Inspector General issued in fiscal year 2011 identified cases in which USAID operating units or their implementing partners reported misstated, unsupported, or unvalidated results. In recent reports, we also identified deficiencies in USAID’s monitoring practices, including the lack of an integrated plan for monitoring and evaluating nonemergency food aid, monitoring practices that do not correspond to agency performance guidelines, difficulties in developing meaningful outcome indicators related to building trade capacity, undocumented site visits for assistance programs in Burma, and lack of performance targets and baseline data for indicators related to PFM efforts in Afghanistan. Moreover, in our review of the fiscal year 2011 and 2012 Inspector General’s audits of USAID’s DG and EG programs, we found monitoring weaknesses cited in 20 of the 32 audit reports. These audits identified a range of monitoring weaknesses, including unreliable or nonexistent baseline data; performance data weaknesses, such as results that were not reported, lack of data verification, or inaccurate reporting of data; lack of a current performance management plan; and inadequate monitoring of program activities, including lack of regular submission of progress reports. With regard to our three case study countries, we found that the country offices generally applied USAID’s monitoring processes in all three PFM-related programs; however, we did find some conditions that could make monitoring more difficult, as described below. Our review was not intended to be comprehensive or applicable to all USAID PFM-related programs. USAID’s DG and EG offices are the primary offices providing PFM-related assistance.
Performance management plans: All three programs were clearly defined in terms of overall objectives, project objectives, tasks, expected results or outcomes, and required plans and reports. All three programs—the Kosovo Growth and Fiscal Stability Initiative, the Peru ProDecentralization Project, and Liberia’s Governance and Economic Management Support Program—had performance management plans containing objectives and indicators. Two of the three performance management plans also included targets. However, the targets for Liberia’s program had not been determined 9 months into the program. Work plans: All three programs also had work plans that specified tasks and timelines. However, Liberia’s Governance and Economic Management Support Program initially operated without an approved work plan, although it did establish an action plan to guide its activities. Periodic progress reports: Assistance agreements for all three USAID programs required quarterly progress reports, and the implementing partners submitted all reports according to agreed-upon time frames. In our review of the three programs, we found that the language used to describe the objectives and results in the quarterly progress reports did not correspond to the language in the work plans, which could make it difficult to track progress against the work plans’ objectives. For example, the quarterly reports for the Kosovo Growth and Fiscal Stability Initiative Program describe progress on five separate lines of effort under the objective to improve fiscal stewardship, while the statement of work and work plan discuss only three lines of effort under the same objective. Site visits: USAID officials responsible for monitoring progress on all three projects said that they do not rely solely on progress reports to track progress, and that additional monitoring tools included weekly progress meetings, frequent site visits, and telephone and e-mail communications with implementers. 
For example, the USAID official in Kosovo reported that he works on a daily basis with the implementing partners who produce the quarterly and annual reports and that they exchange e-mails and phone conversations daily. See appendix IV for descriptions of USAID PFM-related assistance programs from our case study countries. In addition to USAID’s reported program monitoring weaknesses, we found that the agency does not have a process to fully identify and measure its use of country systems. USAID has set an agency target of obligating 30 percent of the agency’s annual assistance through local systems, including both partner country PFM systems and local not-for-profit and for-profit organizations, by fiscal year 2015. However, the agency currently cannot track progress toward its 30 percent target because its accounting system cannot identify the full range of such assistance, which includes a variety of implementing mechanisms from host-country contracting to direct cash transfers. The project data in USAID’s accounting system does not provide sufficiently detailed information, such as the location of the organization receiving assistance, to identify qualifying assistance. According to a senior USAID official, the agency is working on a system whereby it will tag each entity receiving assistance with an identifier, such as government or not-for-profit organization, as well as a vendor location signifying U.S.-based or non-U.S. based. According to the official, this tagging process will facilitate identifying use of country systems. However, USAID’s financial system currently cannot distinguish whether non-U.S. based not-for-profit and for-profit organizations are located in the host country or in a third-party country. USAID’s process to assess the effectiveness of its PFM-related programs involves independent evaluations, but weaknesses in the agency’s overall evaluation practices have been reported.
USAID may use independent external evaluations in the middle or at the end of a program based on the need to inform program decisions. USAID adopted a new evaluation policy in January 2011 and updated its guidance in February 2012 to reflect it. The new evaluation policy requires that all large projects have at least one evaluation and that evaluations use methods that generate the highest-quality and most credible evidence, including experimental methods. The policy also states that 3 percent of program budgets should be devoted to external evaluation and classifies two types of evaluations: performance evaluations, which focus on descriptive and normative questions and other questions pertinent to program design, management, and operational decision making; and impact evaluations, which define a counterfactual (what would have happened had the program not been implemented) to control for external factors and measure the change in a development outcome that is attributable to a defined intervention. USAID reported that experimental methods generate the strongest evidence for impact evaluations; however, experimental methods and impact evaluations may be difficult to apply to PFM capacity-building efforts. For example, USAID evaluations of PFM programs have highlighted difficulties associated with conducting impact evaluations, including lack of data or resources, and too short a time period to identify impact. A February 2012 USAID report on the first year of implementation of the new evaluation policy noted that USAID had not yet completed an evaluation under the new policy. USAID adopted the new policy to address a number of weaknesses it had identified in its evaluation practices. USAID reported that the number of evaluations submitted to USAID’s Development Experience Clearinghouse—its main repository for agency evaluations—decreased from nearly 500 in 1994 to approximately 170 in 2009 despite an almost three-fold increase in program dollars managed.
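The distinction above between performance and impact evaluations turns on the counterfactual. A standard way to operationalize a counterfactual is a difference-in-differences calculation, which nets out the change a comparison group experienced. The sketch below is purely illustrative, in Python, with invented numbers that do not come from any actual USAID program:

```python
# Illustrative difference-in-differences calculation for an impact
# evaluation. All figures are hypothetical, not drawn from any
# USAID or Treasury program.

def impact_estimate(treated_pre, treated_post, control_pre, control_post):
    """Change attributable to the intervention, using the control
    group's change over the same period as the counterfactual."""
    observed_change = treated_post - treated_pre
    counterfactual_change = control_post - control_pre
    return observed_change - counterfactual_change

# Hypothetical PFM indicator (say, a 0-100 budget transparency score):
# assisted ministries rose from 40 to 55, while comparable unassisted
# ministries rose from 42 to 47 over the same period.
print(impact_estimate(40, 55, 42, 47))  # 10: a 15-point rise, minus 5 from trend
```

The point of the subtraction is exactly the policy's counterfactual requirement: without the control group, the full 15-point rise would be misattributed to the intervention.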
Furthermore, according to USAID, the majority of evaluations in recent years relied heavily on anecdotal information and expert opinion, rather than systematically collected evidence. We have also reported weaknesses in USAID evaluation practices in several areas in recent years, including not planning for evaluations or not using evaluation results. Furthermore, the weaknesses in performance indicators that the USAID IG identified in DG and EG programs could create difficulties for evaluating PFM programs. For example, weaknesses in baseline data or performance indicators would make it difficult to use quantitative measures to evaluate these programs. One evaluation, of the Economic Management for Stability and Growth program, found that the program improved the government’s institutional capacity in fiscal and monetary policy but did not document its evaluation methodology. These evaluations were not subject to USAID’s new evaluation policy, which was issued in January 2011. Treasury’s OTA uses various processes to monitor PFM-related programs, including narrative monthly and trip reports, annual quantitative performance measures, and customer feedback surveys, but weaknesses exist in some of its evaluation processes. The work plan is the key document OTA uses to monitor a project. OTA uses monthly reports to document and monitor progress toward the milestones identified in the annual work plan, including changes to milestones or timelines, and to facilitate communication between the resident advisors and headquarters. OTA advisors may also share monthly reports with host-country counterpart institutions, U.S. embassy staff, other Treasury bureaus and offices, and other interested partners such as USAID, the Millennium Challenge Corporation, and relevant international financial institutions to monitor progress and coordinate donor assistance as relevant.
In addition to updates on work-plan progress, the monthly reports may also include other information such as activities completed; significant meetings held; and important political, social, and economic developments. OTA also uses monthly reports to track changes made to the work plan agreed to by the advisor and his or her counterpart. For example, an OTA budget project in Cambodia listed implementation of program budgeting as one of its primary objectives and improving budget classification as one of its secondary objectives. In the monthly reports, the resident advisor documented the need to refocus the reform efforts on budget classification to better align with the Cambodian government’s ongoing PFM reform efforts. Further, in the case of OTA’s 2011 Honduras budget project to implement international public sector accounting standards, the objectives remained unchanged, but milestones were changed in the monthly reports. According to the advisor, some milestones had to be extended due to overly ambitious initial expectations that were purposely established to encourage the ministry staff to undertake reforms. See appendix IV for descriptions of OTA-funded PFM projects in our case study countries. OTA headquarters also maintains regular communication with its advisors. Senior OTA officials monitor project performance through regular contact with advisors in the field by e-mail, telephone, and site visits. One advisor told us that their director would already be aware of any issues raised in the monthly reports, given the frequency of their communications. Resident advisors also told us that senior Treasury staff visited Cambodia and Honduras to review OTA projects and talk with counterparts. Following site visits, OTA officials are required to prepare trip reports.
For example, in a trip report from November 2011, a senior OTA official reported that he met with host-country counterparts to discuss the four current OTA engagements in Cambodia, including the budget project. The visit confirmed the reported difficulties in implementing the government financial management information system as a result of a procurement delay, but also affirmed OTA’s role in its eventual implementation. OTA also uses voluntary customer surveys to collect feedback on OTA projects from the host-country counterpart. For example, a Honduran government official who completed the voluntary customer feedback survey for the budget project indicated that the project met expectations and made a significant contribution by strengthening the host-country staff’s technical capacity through the training sessions that OTA staff offered. We found that OTA applied its monitoring processes to all three of our case study country projects, but monthly reports lacked some details on progress. We reviewed OTA’s monitoring activities in three PFM-related projects to examine how OTA applied its monitoring process to these projects. Our review was not intended to be comprehensive or applicable to all OTA projects. We found the projects to be clearly defined with specific objectives, milestones, proposed completion dates, and regularly submitted monthly reports. In addition, in all three cases, the resident advisor reported frequent communication on project progress with headquarters. However, in two of the case study countries, Cambodia and South Africa, work plans had major changes in objectives, but only Cambodia documented its change in a monthly report. Additionally, our review found that the level of detail of the monthly report narratives on progress varied, with a few reports having little detail, which could make tracking progress for these specific projects difficult without other communication between the advisor and OTA management. 
However, as noted above, OTA headquarters maintained frequent communication with advisors in the field. OTA’s quantitative performance measures have been a useful project-level indicator of performance but have not been a useful indicator of OTA-wide performance because of conceptual problems and errors introduced by OTA when aggregating the performance data. OTA advisors and management score technical assistance projects annually based on project-specific indicators under four main categories. The projects are scored on a scale of one (lowest) to five (highest) for “traction,” or the degree of engagement with host-country counterparts, and “impact,” or the results of project activities that bring about change in counterpart law, systems, processes, and procedures. OTA officials told us that the traction and impact measures also reflect the language and values of OTA, and are useful to management, advisors, and host-country counterparts. The primary purpose of the quantitative performance scores was to respond to an Office of Management and Budget reporting requirement for evaluation data. While OTA reported that it met or exceeded the traction and impact targets set in 2008 in every year from 2009 through 2011, OTA officials acknowledged that the aggregate values associated with its annual goals were of limited value due to lack of comparability across programs and over time. In previous work on the Office of Management and Budget, we highlighted the difficulty of representing program performance with a single rating. Using a single rating can force agencies to simplify more nuanced and complex performance results, a circumstance similar to OTA’s aggregation of traction and impact scores. A senior official told us that OTA, in complying with the OMB requirement, designed the measures to be as useful as possible at the project level.
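To make the aggregation step concrete, the following is a minimal sketch, assuming a simple list-of-records layout with invented scores and field names (not OTA's actual spreadsheet or data), of averaging 1-to-5 traction and impact scores with validation checks of the kind that could catch an out-of-range value or a score filed under an unrecognized country:

```python
# Hypothetical project-level scores; the countries match the case
# studies but the numbers are invented for illustration only.
scores = [
    {"country": "Cambodia",     "traction": 4, "impact": 3},
    {"country": "Honduras",     "traction": 5, "impact": 4},
    {"country": "South Africa", "traction": 3, "impact": 3},
]

def validate(rows, known_countries):
    """Reject records that could not be legitimate before aggregating."""
    for row in rows:
        # Traction and impact must lie on the 1 (lowest) to 5 (highest) scale.
        for field in ("traction", "impact"):
            if not 1 <= row[field] <= 5:
                raise ValueError(f"{row['country']}: {field} score out of range")
        # Guard against a score transcribed under the wrong country.
        if row["country"] not in known_countries:
            raise ValueError(f"unrecognized country: {row['country']}")

def aggregate(rows, field):
    """Program-wide average of one measure across all projects."""
    return sum(row[field] for row in rows) / len(rows)

validate(scores, {"Cambodia", "Honduras", "South Africa"})
print(aggregate(scores, "traction"))  # 4.0
print(round(aggregate(scores, "impact"), 2))  # 3.33
```

Checks like these do not address the comparability concerns the text describes, but they illustrate the kind of control that could catch the transcription errors found in the aggregation instrument.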
For example, OTA uses the project-level traction and impact measures in setting expectations and discussing progress with both the host-country counterpart and the advisor. In addition, the official reported that OTA is continually looking for additional ways to use the traction and impact scores. One example of how researchers have used PFM performance data is by developing analytical approaches to identify the determinants of strong PFM systems and elements of successful PFM reform efforts, which can help to control for important differences across countries. Our analysis of OTA performance data found several errors that OTA introduced when aggregating project-level data; these errors collectively raised questions about the reliability of the instrument used to aggregate OTA’s quantitative performance measures and, thus, suggested there may be limitations in its ability to provide insight into performance across OTA projects. OTA uses this instrument to calculate the annual performance averages, which are compared against annual targets and provided to OMB and to Congress in annual reports. Our analysis of the spreadsheet containing these data, and limited spot checking of some underlying data for our three case-study countries, identified a number of errors, including those introduced when transcribing data to the spreadsheet. In one instance, OTA listed a performance score under the wrong country, which resulted in inaccurate information for two countries’ projects. OTA has not yet fully implemented its requirement to conduct independent evaluations—that is, evaluations conducted by someone other than the resident advisor—of its completed technical assistance. OTA guidance for project reporting and documentation includes requirements for end-of-tour reports when a resident advisor leaves and end-of-project reports when a technical assistance project is completed.
The end-of-project report is a postproject evaluation whose purpose is to (1) compare accomplishments with the initial objectives of the project and (2) improve the planning and execution of future projects. A senior OTA official said that the requirement for the end-of-tour report was fully implemented and that OTA intended to fully implement the end-of-project report requirement by the end of 2012. According to the guidance, the evaluation must be conducted independently; the report is the responsibility of the associate director and can be delegated to a senior advisor, but it should not be prepared by an advisor who prepared an end-of-tour report. Both OECD guidance and USAID guidance also highlight the importance of independence in evaluation. In contrast to the guidance, OTA provided three end-of-project reports that were conducted by OTA staff who had been involved in providing some of the technical assistance being evaluated. While no comprehensive and fully independent end-of-project evaluation has been conducted, OTA officials described relevant insights from a postproject trip to one country that was undertaken by a senior OTA official in part to better understand the longer-term impact of OTA assistance. During the trip, that official identified some factors that significantly limited the impact of this project, notably a lack of counterpart commitment at the policy level to implement and sustain reforms in key areas identified in the terms of reference. As more donors, including the United States, seek to provide additional assistance through countries’ systems, the need for strong PFM systems and accountability in recipient countries becomes even more important to help lessen corruption and ensure countries effectively manage resources.
Given USAID’s stated goal of obligating 30 percent of annual assistance through local systems, including partner country PFM systems, by 2015, it must ensure that those systems are functioning properly and transparently. Efforts to strengthen PFM systems can lower the risks that assistance delivered through country systems will be misspent, while also increasing the capacity of the recipient country to effectively manage its own resources. Given concerns about transparency and corruption in many aid-recipient countries, achieving the expected benefits of using recipient countries’ PFM systems may be difficult without concerted efforts by the donors and countries to strengthen these systems. In addition, the risks of using these systems increase without efforts to strengthen them. To help strengthen these systems, USAID provides developing countries with PFM capacity-building assistance. In light of this assistance, and given the difficulties USAID has experienced in the past with implementing monitoring and evaluation practices, the importance of developing efforts to strengthen PFM systems and effectively monitoring and evaluating these efforts has increased significantly. Previous difficulties USAID has experienced with monitoring, including poor quality of reported project data and a lack of reliable baseline data, could affect USAID’s ability to conduct effective evaluations of these projects, as evaluators will not have access to reliable data. Also, while USAID’s new evaluation policy places greater emphasis on impact evaluations and experimental methods, it may be difficult for USAID to evaluate its PFM capacity-building efforts using these approaches. Finally, for USAID to achieve its target for use of local systems by fiscal year 2015, it must be able to identify and measure the full extent of assistance that qualifies. Treasury OTA advisors provide technical assistance targeted at weaknesses in countries’ PFM systems.
Although OTA has taken numerous steps to monitor this assistance, errors in its aggregated performance data and the lack of comprehensive postproject evaluations limit OTA’s ability to effectively evaluate its assistance. OTA has adapted a challenging mandate from OMB to create a useful measure of its efforts’ performance for management, advisors, and host-country counterparts; however, conceptual problems with and errors in the aggregated measures undermine the measures’ reliability. With greater confidence in the quality of the data, opportunities exist to better identify patterns in performance across OTA programs, such as economic or institutional factors that influence program performance. These patterns could help OTA better understand the strengths and weaknesses of its assistance programs and make appropriate changes. OTA could also use the quantitative performance measurement system it has developed to experiment and document the results of new approaches. Finally, although OTA guidance recognizes the importance of postproject evaluations by requiring an end-of-project evaluation, the agency has yet to enforce this requirement. Without a postproject evaluation, OTA may not fully understand the results of its technical assistance or be able to apply lessons learned to new projects. To monitor progress toward USAID’s target to obligate 30 percent of its annual assistance through local systems by 2015, we recommend that the Administrator of USAID direct the appropriate offices to develop a process to reliably identify and track the agency’s use of local systems in all countries receiving assistance. To help ensure that USAID conducts effective evaluations of PFM-related programs under its new policy, we recommend that the Administrator of USAID direct the appropriate offices to ensure that they are establishing adequate monitoring practices for PFM-related programs.
Such practices may include selecting proper indicators, collecting reliable baseline data, and ensuring the reliability of reported results. To improve the effectiveness of OTA’s technical assistance, we recommend that the Secretary of the Treasury direct OTA to take the following two actions to improve monitoring and evaluation: implement additional controls to improve the process for computing OTA-wide annual performance measures, and fully implement OTA’s existing requirement for end-of-project evaluations and, consistent with its existing guidance, have an independent party conduct the evaluations. We provided a draft of this report to USAID, Treasury, and State for their review and comment. Both USAID and Treasury concurred with our recommendations in their written comments, which are reproduced in appendix V and appendix VI respectively. These agencies along with State provided technical comments and updated information, which we have incorporated throughout this report, as appropriate. In concurring with our two recommendations, USAID reported that it is in the process of refining definitions that will identify and help measure the assistance that qualifies to meet the agency’s target of obligating 30 percent of its annual assistance through local systems by 2015. USAID also reported that it has implemented two complementary reforms that will help ensure effective evaluations and adequate monitoring of its PFM assistance. The first reform involves USAID planning for the monitoring and evaluation of assistance during the early stages of project design, including defining indicators and collecting baseline data. The second reform requires USAID to plan final monitoring and evaluation schedules during project design. 
In concurring with our two recommendations, Treasury reported that OTA has corrected several errors in the 2011 annual performance measures, and has taken steps to strengthen data controls, including conducting additional reviews and increasing staff resources dedicated to computing the performance measures. In addition, OTA has begun to implement its requirement for independent end-of-project evaluations of its technical assistance and intends to fully implement the requirement by the end of 2012. We are sending copies of this report to appropriate congressional committees. We are also sending copies of this report to the Administrator of USAID and the Secretaries of the Treasury and State. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. To examine the process the U.S. Agency for International Development (USAID) uses to develop programs to strengthen public financial management (PFM) systems, we reviewed USAID’s official policy and procedures, as contained in the Automated Directives System, as well as USAID’s new guidance documents for developing country strategies and designing projects, including the Country Development Cooperation Strategy and Project Design Guidance. We also interviewed USAID officials, including officials from the democracy and governance (DG) and economic growth (EG) offices. Most USAID country offices are required to have an approved country strategy by the end of fiscal year 2013 and to follow the new project design guidance starting July 2012.
Because USAID was still in the process of transitioning to these new processes during the course of our audit, we did not review USAID’s implementation of these processes. To examine the process the U.S. Department of the Treasury’s Office of Technical Assistance (OTA) uses to develop programs to strengthen PFM systems, we interviewed senior OTA officials regarding OTA’s policies and procedures. We interviewed resident advisors regarding how OTA assessed and developed projects in our three OTA case study countries. We also reviewed OTA’s official policy guidance document and project-specific documents, including assessment reports, signed terms of reference, work plans, and monthly reports. To examine and assess the processes USAID uses to monitor and evaluate its PFM-related programs, we reviewed USAID program development guidance; monitoring and evaluation policies and procedures; USAID reports; program documents, including assistance agreements, monitoring and evaluation plans, and progress reports; USAID external project evaluations; past GAO reports; and USAID Inspector General reports. We used the review of the documents for our three USAID country case studies to illustrate the implementation of USAID’s monitoring processes for its PFM-related programs. We also interviewed agency officials in Washington, D.C., and conducted telephone interviews and had e-mail communications with key country-office staff for each of our case study countries. To examine and assess OTA’s processes for monitoring and evaluating PFM-related programs, we reviewed OTA project reporting and documentation instructions; project work plans and monthly reports; end-of-tour and end-of-project reports; annual quantitative performance data; and Organization for Economic Cooperation and Development guidance on evaluation. We also interviewed OTA headquarters staff.
We used the review of the documents for our three OTA case studies to illustrate the implementation of OTA’s monitoring processes. We supplemented the document review with interviews with current or former advisors for each of the case study countries. In addition, we assessed the reliability of the instrument OTA uses to aggregate quantitative performance measures across projects. We examined spreadsheets provided by OTA for consistency, examined data for outliers and missing values, and spot-checked the transcription of data to the spreadsheet for our case study countries. Due to a number of errors in the OTA data, we could not determine if the aggregated performance data were sufficiently reliable for identifying patterns in performance across projects or over time. In selecting our case study countries, we focused on countries in which OTA or USAID had relevant ongoing or recently completed projects designed to strengthen PFM systems. We selected six countries: Cambodia, Honduras, and South Africa for OTA, and Kosovo, Liberia, and Peru for USAID. In selecting these countries, we considered the following two factors: Geographic diversity: For each agency, we selected countries from three different geographical regions. Country income group diversity: For each agency, we chose a country that the World Bank has listed as (1) lower income, (2) lower-middle income, and (3) upper-middle income in order to report examples from different income levels, which may also be associated with different institutional characteristics. In cases where more than one country would be acceptable under our decision criteria, we considered additional criteria, such as the availability of other broad-based indicators. For OTA, we focused our selection on countries receiving technical assistance from OTA’s Budget and Financial Accountability team, given its focus on traditional PFM aspects. 
For USAID, we selected our countries from a list of countries with significant PFM-related programs that USAID provided. USAID excluded some countries with PFM-related programs from the list because staff were not available to discuss their programs with us. Our review of USAID and OTA case study countries was not intended to be comprehensive or applicable to all their respective programs and projects or generalizable to all countries. To describe recent trends in country PFM systems, we reviewed the data and publications of five international organizations that conduct broad assessments of country PFM systems. These broad assessment tools include the Public Expenditure and Financial Accountability Program, the World Bank’s Country Policy and Institutional Assessment’s quality of budgetary and financial management indicator, International Budget Partnership’s Open Budget Survey, Transparency International’s Corruption Perceptions Index, and the International Monetary Fund’s fiscal transparency Reports on the Observance of Standards and Codes. We converted our six case study country scores for the three PFM diagnostic tools for which scores were available (Open Budget Survey, Corruption Perceptions Index, and quality of budget indicator) into percentile rankings to illustrate each country’s performance as measured by the three PFM diagnostic tools. We also interviewed officials at the World Bank and International Monetary Fund to discuss their PFM-related diagnostic tools. To describe the Department of State’s PFM-related efforts, we conducted interviews with agency officials in Washington, D.C. We reviewed State documents, including agency guidance, waiver packages, and program documents. We also reviewed relevant appropriations laws to identify the requirements for State’s fiscal transparency reviews. We conducted this performance audit from October 2011 through September 2012 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Five international organizations — the Public Expenditure and Financial Accountability Program, the World Bank, International Budget Partnership, Transparency International, and the International Monetary Fund — have developed tools to assess various aspects of countries’ public financial management (PFM) systems, and some have published recent findings or results. We illustrate the percentile ranking for each of our case study countries in three broad PFM diagnostic tools: the Open Budget Survey, the Corruption Perceptions Index, and the World Bank’s quality of budgetary and financial management indicator. In December 2001, a multiagency partnership founded the Public Expenditure and Financial Accountability Program (PEFA) to strengthen the ability of aid recipients and donors to assess and improve country public expenditure, procurement, and financial accountability systems. In 2005, the program developed the PFM Performance Measurement Framework, known as the PEFA Framework, to provide a measure of the strengths and weaknesses of a country’s PFM system. The PEFA Framework incorporates a PFM performance report, which includes an assessment of the evaluated country’s PFM performance along six core dimensions of PFM. The six dimensions are the following: 
Credibility of the budget: The budget is realistic and is implemented as intended. 
Comprehensiveness and transparency: The budget and the fiscal risk oversight are comprehensive, and fiscal and budget information is accessible to the public. 
Policy-based budgeting: The budget is prepared with due regard to government policy. 
Predictability and control in budget execution: The budget is implemented in an orderly and predictable manner, and arrangements for the exercise of control and stewardship in the use of public funds exist. 
Accounting, recording, and reporting: Adequate records and information are produced, maintained, and disseminated to meet decision-making control, management, and reporting purposes. 
External scrutiny and audit: Arrangements for scrutiny of public finances and follow-up by the executive are operating. 
The six core dimensions include 28 high-level indicators, each of which is assigned a letter score from A to D. The initial assessment helps establish performance baselines, while repeat assessments help in monitoring performance progress over time. In 2011, the PEFA secretariat released the 2010 PEFA Monitoring Report. Based on a comparison of 33 repeat PEFA assessments over the 2005-2010 period, the analysis shows that more countries had a higher number of highest or improved scores (23 countries) than lowest or worsened scores (8 countries), indicating a broad trend of PFM improvement across the countries surveyed. According to the analysis, while PFM systems are improving overall, system features vary significantly. Formal PFM features, where progress can be achieved by adopting a new law, regulation, or technical tool, by focusing on no more than a few agencies, or by working at an early stage in the budget cycle, are more likely to improve or maintain a high score than functional PFM features, where progress requires actually implementing a new law or regulation, coordinating the work of many agencies, or working later in the budget cycle. Starting July 2012, the PEFA partners are extending the PEFA program by 5 years and conducting a comprehensive review of the PEFA Framework, the first since it was launched in 2005. 
One of the objectives is to improve confidence in PEFA assessments through an endorsement process that provides an incentive to ensure adherence to PEFA good practices in undertaking an assessment. The World Bank undertakes annual Country Policy and Institutional Assessments to assess the quality of a country’s present policy and institutional framework. One of the 16 indicators the World Bank uses to assess the performance of a country’s current policies and institutions is the quality of budgetary and financial management. This criterion assesses the extent to which the country has a comprehensive and credible budget linked to policy priorities; effective financial management systems to ensure that the budget is implemented as intended in a controlled and predictable way; and timely and accurate accounting and fiscal reporting. Over the 5-year period from 2005 through 2010, 26 out of 73 countries, or slightly more than one-third, showed improvements in the quality of their PFM systems, while 19 countries’ scores worsened. Most countries, 62 percent in 2010, are clustered in the mid-range. In addition to the quality of budgetary and financial management indicator, the World Bank uses other broad PFM diagnostic tools, including the Country Financial Accountability Assessments and the Public Expenditure Reviews. The objective of the Country Financial Accountability Assessments is to support the World Bank’s development objectives by identifying strengths and weaknesses in country PFM systems. The assessments are intended to help identify priorities and inform the design and implementation of capacity-building programs. The assessments also describe and analyze financial management and expenditure controls, including expenditure monitoring, accounting and financial reporting, internal controls, internal and external auditing, and legislative review. 
Information obtained from the assessments, taken together with that obtained from other World Bank diagnostic products and other sources, supports the preparation of an integrated fiduciary assessment. The results of these assessments inform the preparation of the World Bank’s Country Assistance Strategy, particularly the sections dealing with the size of the support program, the sectors to be supported, selection of lending instruments, and approaches to risk management. The assessments are particularly important where World Bank resources are managed by the country’s own PFM system, as in the case of budget support. The Public Expenditure Review’s objectives are to strengthen budget analysis and processes to achieve a better focus on growth and poverty reduction, and to assess public expenditure policies and programs to provide governments with an external review of their policies. Public Expenditure Reviews may also address the incentives and institutions needed to improve the efficacy of public spending in major sectors such as health and education. The International Budget Partnership’s Open Budget Survey assesses the availability in each country of eight key budget documents, as well as the comprehensiveness of the data contained in these documents. The survey also examines the extent of effective oversight provided by legislatures and supreme audit institutions, as well as the opportunities available to the public to participate in national budget decision-making processes. The State Department (State) does not directly fund public financial management (PFM) programs, but its Office of Monetary Affairs (OMA) is responsible for two PFM-related activities. First, since 2008, U.S. appropriations laws have required State to evaluate the fiscal transparency of foreign governments receiving U.S. assistance. 
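The conversion of case study country scores into percentile rankings, described in the methodology above, can be sketched as follows. The country names and scores below are placeholders for illustration, not the actual published values from any of the three diagnostic tools.

```python
def percentile_rank(scores, country):
    """Percent of countries scoring at or below the given country's score.
    `scores` maps country name -> raw score on one diagnostic tool."""
    value = scores[country]
    at_or_below = sum(1 for s in scores.values() if s <= value)
    return 100.0 * at_or_below / len(scores)

# Hypothetical scores on a 0-100 diagnostic scale, for illustration only.
scores = {"Country A": 45, "Country B": 61, "Country C": 38, "Country D": 74}
print(percentile_rank(scores, "Country B"))  # 75.0: 3 of 4 countries at or below
```

Note that this is only one of several common percentile-rank conventions (an "at or below" definition); repeating the same conversion for each of the three tools yields the comparable per-country rankings the report describes.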
When State determines it is important to the national interest of the United States, it may approve waivers for countries deemed nontransparent, allowing U.S. agencies to provide assistance to those countries notwithstanding the legal prohibition against providing such assistance without a waiver. State processed 28 such waivers in 2011. Second, State supports the presidential initiative called Domestic Finance for Development (DF4D), which President Obama announced in 2011. Under this initiative, State is to help countries use their own resources and leverage other donor resources to meet development goals. State is piloting this initiative in five countries: El Salvador, Honduras, Kyrgyzstan, Tunisia, and Zambia. The goal of DF4D is to strengthen the commitment to reform within partner countries; provide technical assistance in partner countries, such as taxation expertise, including through innovative public-private partnerships; and elevate the importance and interrelation of domestic resource mobilization, fiscal transparency, and anticorruption efforts in public finance as key components of sustainable economic development. Because State has not funded programs in the past for this initiative, State has reached out to other organizations operating in these countries, including international and bilateral organizations, to leverage their programs and resources. State has developed processes for carrying out its PFM-related activities required by legislative mandate and presidential initiative. Under the Fiscal Transparency Review Process (FTRP), OMA reviews central governments expected to receive bilateral economic assistance and international security assistance for several dimensions of transparency: a publicly disclosed budget, a budget breakdown by ministry, standards for awarding natural resource contracts, and timely and accurate documents. 
Each year, OMA reviews all countries it deemed nontransparent in the prior year, as well as countries where recent events may have affected fiscal transparency, to evaluate whether they meet the threshold of fiscal transparency. State uses the IMF’s definition of fiscal transparency as a guideline for the FTRP. During this process, OMA officials said they obtain information about the level of transparency of each country by collecting information from U.S. embassy staff working in those countries. In addition, OMA staff review publicly available reports published by international organizations and civil society representatives, such as the World Bank, the International Monetary Fund, the International Budget Partnership, and the World Economic Forum. Because these organizations do not always prepare country reports on an annual basis, officials told us they use these reports as a check on their internally generated information, rather than relying on them as primary sources of information. For countries that State finds to be nontransparent, State can issue a waiver that allows the country to receive otherwise restricted assistance. As part of the process of requesting a waiver, embassy staff in country develop action plans to assist the country in improving the level of transparency in its budget process. The action plan should address specific fiscal transparency issues identified in the transparency review process and should include recommendations of short- and long-term steps that the country can take to improve budget transparency. Embassies work with host governments to encourage implementation of action plans, which may include activities such as continued daily engagement with country officials working on budget reforms, providing and coordinating training on issues related to the budget process, and funding local nongovernmental organizations to perform budget oversight functions. 
For OMA’s DF4D initiative, State officials, working with other agencies and organizations, helped develop programs that were based on needs expressed by country government officials and that built on existing reform efforts and development priorities. As an interagency effort, OMA helps identify and leverage existing programs and resources, including those of multilateral organizations and other donors, such as USAID’s Innovation Fund and the Global Health Initiative. For example, USAID is implementing a $7.6 million program to advance El Salvador’s fiscal reform agenda by building capacity and improving systems for public expenditure and management and tax revenue mobilization, promoting private sector engagement, and creating a $2 million Revenue Challenge Fund to support improved tax collection at the municipal level. With five countries piloting DF4D, State plans to proceed by selecting additional partner countries based on performance against quantitative DF4D-related measures, consultations with posts, and expressions of interest from ministers of finance and other economic leaders. Moreover, State plans to use the action plans developed for the FTRP for countries to be considered for DF4D. State encourages posts to report on PFM issues and opportunities to mobilize domestic resources, raise the issues with relevant stakeholders in the public and private sectors, and work with OMA staff to further the objectives of the DF4D initiative. State is beginning to conduct additional monitoring as part of its fiscal transparency evaluations. Starting in 2012, State is requiring benchmarks in its action plans for nontransparent countries so that it can compare progress annually. According to State officials, some country action plans had benchmarks, but the quality of the benchmarks varied. The action plan attempts to capture all the steps necessary to improve a country’s fiscal transparency and includes more than just State actions. 
Lastly, appropriations law for 2012 requires State to include in any waiver request it submits an evaluation of whether the country has made progress toward improved fiscal transparency. State guidance reports that progress toward implementation of embassy action plans will factor into its decision of whether to renew waivers. To provide illustrative examples of U.S. projects to strengthen public financial management (PFM) systems, we chose six case study countries. For the U.S. Agency for International Development (USAID), we selected projects in Kosovo, Liberia, and Peru, and for the U.S. Department of the Treasury (Treasury), we selected projects in Cambodia, Honduras, and South Africa. We used the following criteria to select the three countries for each agency: 
The agency has relevant ongoing or recently completed projects focused on strengthening PFM systems. 
The countries represent different geographic regions. 
The countries have different income levels, which may be associated with different levels of government capabilities. 
For each U.S. agency, we selected a country that the World Bank has classified as lower income (Cambodia and Liberia), lower-middle income (Honduras and Kosovo), and upper-middle income (Peru and South Africa). For one PFM-related project in each country we summarized the project’s background, and, for selected objectives, we summarized some of USAID’s expected results and some of Treasury’s activities. These examples are meant to be illustrative and not generalizable. Tables 3 through 5 summarize selected USAID PFM-related projects in Kosovo, Liberia, and Peru. 
Kosovo: Growth and Fiscal Stability Initiative 
Time period: July 2010-July 2013 
Award amount: $1,051,208 
Description: Since 1999, USAID’s Kosovo economic policy and institutional strengthening programs have focused on establishing key central economic institutions and an enabling environment for private sector growth. 
USAID is to adjust the focus of technical assistance as the Growth and Fiscal Stability Initiative builds upon the experience and lessons learned from the creation of reliable financial institutions in the central government to address the fiscal stewardship challenges faced by subnational governments. The initiative is to work with municipalities in areas that are directly linked with the Ministry of Finance and Economy, such as budget, treasury, property tax, and public-private partnerships. 
Liberia: Governance and Economic Management Support Program (GEMS) 
Time period: July 2011-June 2016 
Contract amount: $44,902,679 
Description: USAID-GEMS is to address significant governance challenges remaining after USAID completed its previous capacity-building program in 2010. The program is to strengthen human and institutional capacity within selected ministries, agencies, and commissions. USAID-GEMS is to develop and maintain systems that increase transparency and accountability, increase efficiency, reduce expenditures, increase revenue, and limit corruption. 
Peru: USAID ProDecentralization 2 
Time period: August 2008-July 2012 
Award amount: $10,644,432 
Description: The USAID/Peru ProDecentralization project is in its second phase. The first phase targeted national, regional, and municipal institutions responsible for implementing the decentralization process. The second phase seeks to continue to improve the policy framework at the national level and strengthen the institutional capabilities of regional and municipal governments. The national project aims to improve the legal and policy framework for decentralization, including fiscal policies that support a more equitable distribution of public resources. Regional and local-level activities are to target technical assistance and training to the diverse needs of Peru’s regional and municipal governments in effectively administering resources and responding to citizens’ increasing expectations. 
Under this project, existing services offered to subnational governments in planning, budgeting, accountability, and institutional strengthening are supplemented by new training and technical assistance programs. Tables 6 through 8 summarize selected Treasury Office of Technical Assistance (OTA) PFM-related projects in Cambodia, Honduras, and South Africa. 
Cambodia: Technical Advisory Services on Budgeting 
Time period: October 2009-September 2010 
Description: The Cambodian government has been implementing a plan to create a credible budget and improve accountability. In the current phase of the plan, OTA and Cambodian government actions are focused on decentralizing the reform effort to the line ministries and subnational levels. These actions include expanding the use of strategic and program budgeting, implementing a PFM information system, improving macroeconomic forecasting, enhancing the linkage between the capital and operating budgets, and improving overall financial accountability. The overall aim is to improve the ability of the budget to be an instrument for policy delivery and to support effective and efficient service delivery. 
Honduras: Technical Advisory Services on Budgeting 
Time period: January 2011-December 2013 
Description: OTA’s technical assistance focuses on implementing international public sector accounting standards. 
South Africa: Technical Advisory Services on Budgeting 
Time period: February 2007-January 2009 
Description: Treasury OTA has conducted technical assistance projects in South Africa in the areas of budget formulation, intergovernmental finance, infrastructure budgeting, public finance training, and others since 1997. In 2006, the National Treasury of South Africa requested a new OTA budget project to focus on the organization of an expenditure program performance evaluation unit, performance information and cost analysis, and support for the Collaborative Africa Budget Reform Initiative. 
The central mission of the resident advisor was to establish an expenditure program performance review system. Cheryl Goodman (Assistant Director), Michael Maslowski, Shirley Min, RG Steinman, Michael Hoffman, Debbie Chung, Grace Lui, David Dayton, and Etana Finkler made key contributions to this report.
Effective management of public resources can play an important role in a country’s development. In recent years, developing countries committed to strengthen their PFM systems and donors committed to use those systems as much as possible. The United States provides assistance to strengthen PFM systems primarily through USAID and Treasury. USAID conducts capacity-building activities to strengthen PFM systems as part of its development programs and has also set a target to obligate 30 percent of its annual assistance through local systems by 2015. Treasury provides technical assistance through advisors who work in country, typically with the finance ministry. GAO was asked to examine the processes U.S. agencies use to (1) develop programs to strengthen PFM systems and (2) monitor and evaluate those programs. GAO reviewed agency guidance and program documents, interviewed U.S. agency officials, and selected case studies to serve as illustrative examples of PFM-related programs. To develop programs to strengthen developing countries’ Public Financial Management (PFM) systems, the U.S. Agency for International Development (USAID) and the U.S. Department of the Treasury (Treasury) rely on assessments of the host country governments’ systems. In 2011, USAID implemented new processes that place a greater emphasis on PFM in its development efforts as the agency aims to increase its use of country systems to deliver assistance. The agency traditionally included PFM capacity-building efforts only as components of broader programs, as it identified relevant weaknesses during the country assessment or program design process. USAID’s new strategy and program development processes include a mandatory assessment of a country’s institutional capacity, including its financial systems, and a requirement to consider the use of country systems to deliver assistance. Most USAID country offices are required to develop a strategy using the new guidance by the end of fiscal year 2013. 
Treasury’s process for developing programs begins with an initial assessment of the host country’s capabilities. Treasury staff then draft objectives for the program. For example, a Treasury program in Honduras set four objectives, including improving operational efficiency and enhancing accountability by strengthening the organization of the ministry of finance. Once in country, the advisor develops an annual workplan, outlining more specific goals aimed at meeting the overall objectives. USAID and Treasury use several processes to monitor and evaluate their PFM assistance, but weaknesses exist. USAID uses its regular procedures, which may include performance management plans, periodic progress reporting, site visits, and evaluations, to monitor and evaluate its PFM-related programs. Prior reports by USAID’s Inspector General and GAO have found weaknesses in USAID’s implementation of its monitoring procedures in other programs, including programs from the USAID offices that provide PFM assistance. In addition, USAID is currently unable to monitor overall progress toward its target to obligate 30 percent of its program funds through local systems by 2015. USAID, and GAO in prior reports, have identified a number of weaknesses in evaluation practice. To address weaknesses the agency had identified, USAID adopted a new evaluation policy in January 2011 that states that all large projects are required to have an external evaluation, 3 percent of program budgets should be devoted to external evaluation, and evaluations must use methods that generate the highest quality evidence. Treasury’s processes for monitoring and evaluating its programs include monthly reports, annual quantitative performance measures, voluntary customer feedback surveys, and on-site management reviews, but Treasury does not fully evaluate the performance of its completed technical assistance programs. 
In addition, Treasury’s quantitative performance measures have been a useful project-level indicator of performance but have not been a useful indicator of overall performance, due in part to inherent challenges associated with summarizing program performance and errors introduced when aggregating the performance data. Furthermore, a senior Treasury official reported that Treasury had not yet fully implemented a requirement to conduct independent postproject evaluations of its technical assistance programs. USAID should improve its capacity to measure its use of local systems and ensure adequate monitoring of its PFM programs. Treasury should implement additional controls to improve the process for computing program-wide annual performance measures and fully implement its requirement to evaluate the impact of its completed assistance. USAID and Treasury both concurred with GAO’s recommendations.
The E-10A program comprises three primary elements: the aircraft, the radar, and the battle management command and control subsystem. The aircraft is a Boeing 767-400ER, the largest 767 variant Boeing makes. The Air Force has contracted for only one aircraft to date, because a final decision on the operational platform has not been made. This aircraft is a commercial product that will be modified for military use and used as a testbed. At this time there is only 1 unfilled order for the 400 model on the Boeing assembly line and 25 unfilled orders for other, smaller 767 models. If the Boeing production line were to close down before the Air Force is positioned to make a production decision on the E-10A, it would have to find an alternative. Alternatives could include a different aircraft type or model or the purchase of 767-400ER aircraft from commercial airline companies. The radar planned for the E-10A began development in 1997 as a response to the growing concern about cruise missile proliferation. Initially, it was intended to upgrade the radar on the Joint Surveillance Target Attack Radar System (Joint STARS). The upgraded radar was to have advanced sensor technology, providing air-to-air capability for cruise missile defense and significant increases in ground surveillance capability. Shortly after the program began development, the Air Force restructured the program to develop a modular, scalable radar suitable for use on a variety of airborne platforms. OSD approved the development of the multiple platform radar in 2003. It is being designed for inclusion on the Global Hawk and E-10A programs. The Air Force began evaluating the need to improve its airborne battle management command and control capabilities in 2002. 
The planned E-10A battle management command and control subsystem is software intensive and is intended to enable the E-10A to process and display sensor data from the radar, and eventually from off-board sensors, so that the onboard crew can take actions against time-sensitive targets. The Air Force issued a contract in September 2004 to begin preliminary design efforts for this subsystem. We have a body of work focused on best practices in product development and weapon systems acquisition. This work has found that key to success is the formulation of a business case that matches product requirements to available resources—proven technologies, sufficient engineering capabilities, time, and funding. Several basic factors are critical to establishing a sound business case for undertaking a new product development. First, the needs of the party seeking the new product, the user, must be accurately defined, alternative approaches to satisfying these needs properly analyzed, and the quantities needed for the chosen system well understood. The developed product must be producible at a cost that matches the users’ expectations and budgetary resources. Finally, the developer must have the resources to design and deliver the product with the features that the customer wants when it is needed. If the financial, material, and intellectual resources to develop the product properly are not available, development does not go forward. Additionally, an evolutionary and knowledge-based acquisition strategy that captures critical knowledge before key decision points in the program is needed to execute the business plan. This calls for a realistic assessment of risks and costs; doing otherwise undermines the intent of the business case and invites failure. Ultimately, preserving the business case and attaining critical knowledge in time for decisions strengthens the ability of managers to say “no” to pressures to accept high risks or unknowns. 
If best practices are not followed, we have found that a cascade of negative effects becomes magnified in the product development and production phases of an acquisition program. These effects have led to acquisition outcomes that included significant cost increases and schedule delays, poor product quality and reliability, and delays in getting the new capability to the warfighter. These outcomes have been demonstrated in other programs such as the F/A-22 fighter, C-17 airlifter, V-22 tiltrotor aircraft, PAC-3 missile, and others. Questions remain as the Air Force develops the E-10A program’s business case to support the decision to begin development in April 2005. DOD has identified a need for a cruise missile defense capability, and the Air Force has selected the E-10A to meet this need. There are, however, unanswered questions in both the requirement and resource elements of the E-10A business case. OSD is still studying whether the E-10A is the most cost-effective alternative for the cruise missile requirement and the extent of battle management command and control needed on board to satisfy the intended need. Finally, assessments of the technology maturity, estimated costs, and funding availability are still in process. OSD officials from the Program Analysis and Evaluation Directorate are not satisfied that the studies done by the Air Force to select the E-10A sufficiently analyzed alternative systems. As a result, they are reviewing alternative systems and attempting to determine the most cost-effective solution to satisfy the warfighter’s needs. OSD officials agree that the E-10A could provide an increased capability in identifying and tracking ground moving and time-sensitive targets. However, they believe that if there are less costly systems that can provide similar capabilities, it could be more cost-effective to buy those systems. The Air Force began efforts in 1997 to develop a radar sensor that would detect cruise missiles as part of the Joint STARS program. 
The Air Force examined different size and power combinations for the radar and which platforms had the capacity to carry the radar and still perform multiple missions. These analyses assumed that only manned airborne platforms could meet these requirements. The Air Force completed a formal analysis of alternatives in February 2002 of different possible host platforms for the radar. The study indicated that other aircraft could meet many of the requirements but were based on older commercial technology that was less efficient to operate. The Air Force analysis concluded that the Boeing 767-400ER was the optimal choice given the future multi-mission purpose of the system, and the size, weight, and performance requirements of the radar. OSD officials are also uncertain about the degree of battle management command and control capability needed onboard the E-10A versus transmitting the information gathered by the E-10A to other command and control centers. According to the Air Force, the need for an onboard capability is driven by the large amounts of data that would be collected and analyzed, the limited bandwidth to transmit the data, and the need to have line-of-sight communications for time-sensitive targeting, particularly against cruise missiles. OSD officials said they are looking at whether the battle management subsystem has to be part of the E-10A platform to meet the timelines identified by the Air Force. They expect to present their results by March 2005. Air Force officials told us that some of the battle management functions are currently performed by ground units, but these ground units cannot adequately respond to real-time events involving moving targets like cruise missiles. The E-10A’s primary function will be battle management command and control of cruise missile detection and time-sensitive targeting activities. As a result, its battle management capabilities will be tailored to support those functions. 
These capabilities were validated in October 2004 by the Joint Requirements Oversight Council in preparation for the program’s upcoming Milestone B decision. To provide these capabilities, an onboard crew will be required. The current E-10A crew size is estimated at 27 staff—2 flight crew, 21 mission operators, and 4 technicians. According to the Air Force, the crew size could change depending on the mission and the degree of automation on the system. However, the Air Force has not performed any incremental analysis to show crew size for individual specific missions, such as performing cruise missile defense only. To date, the Air Force has not identified sufficient and available resources to meet the warfighter’s requirements and to start the development program. The Air Force program office has completed its assessments of E-10A critical technologies, cost estimates, and funding needs, but these assessments are being reviewed by OSD. While some resources will meet the requirements, others are either unproven or in a state of flux. Radar development started under a separate program, the Radar Technology Insertion Program, and most radar technologies were reported as mature. Because the Air Force did not provide GAO its technical assessment of the battle management command and control system critical technologies, we consider the maturity levels unproven, even though program officials told us these technologies meet minimum maturity standards. In addition to technologies, the financial resources for the program are in a state of flux. The E-10A cost estimate for development and production is still a work in progress, and funding was recently reduced by $600 million for fiscal years 2006 and 2007, which according to DOD officials will substantially affect the program. Most radar technologies are at a high level of maturity, but evidence was not provided to support the stated maturity levels of the battle management command and control subsystem. 
The Air Force assessed radar technologies prior to the October 2003 start of the Radar Technology Insertion Program. The critical technologies identified in the radar improvement program included the radar architecture, modes, receiver/exciter, and signal processor, among others. Of the nine technologies identified, six were assessed as mature according to our best practice standard; the remaining three were one level below the best practice requirement for mature technologies, a level that DOD policy states is sufficient to begin development. These three technologies are the pulse compression unit, the structure, and the modes. Since the 2003 radar technology assessment, the radar improvement program completed its final design review in June 2004. Numerous tests have been conducted on small-scale radar prototypes to mitigate program risks. These tests electronically drove a signal through the radar, demonstrating the basic functionality of the design. However, the radar subsystem being designed for the E-10A has demonstrated neither form nor fit, nor has it been integrated on the aircraft platform. Although the integration process is an inherently high-risk endeavor, Air Force officials stated they have a process in place to manage these risks. The actual size of the E-10A’s radar will be significantly larger than the tested prototype, and completing the demonstration, currently scheduled for 2010, will require the E-10A testbed aircraft. The process of scaling the radar to the appropriate size and ensuring that all the individual modules work together has yet to be accomplished. Recognizing this, program officials have identified the integration of the radar as a critical technology for the E-10A weapon system. The level of this technology’s maturity has not yet been finalized. 
OSD officials accepted the Air Force’s assessment of the radar technologies but expect more detailed information on the technologies when the E-10A weapon system undergoes its Milestone B review in April 2005. An assessment of the battle management command and control subsystem technologies was not provided for our review. This subsystem is complex and software intensive. E-10A program officials told us these technologies would meet the minimum DOD standard for starting a program. However, the Air Force only recently directed the contractor to begin systems engineering efforts to determine a preliminary design for this subsystem. Development of the critical software needed to demonstrate the technologies has not started. The first increment of software is not scheduled to be delivered until January 2008. On other major weapon system development programs, we have found software development to be a substantial cause of delays in technology development, system deliveries, and increased costs. Therefore, even though program officials have stated the technologies are sufficiently mature, we think stronger evidence will be needed to support their claim. The Air Force has completed its cost estimate for the total E-10A program and released it to OSD for review. The cost estimates for each of the three major program elements contain risk. The biggest area of cost uncertainty is the battle management command and control subsystem. It is a highly complex, software-intensive system. A contract was issued in September 2004 for about $71 million to begin early design and engineering efforts to support a preliminary design review in late 2005. Until this initial design and engineering effort is completed, the program will not be able to establish high confidence in its estimated costs. In addition, the aircraft contract calls for the delivery of only one commercial 767-400ER for testing. 
To convert this aircraft to military use, there will be additional costs for installing communication antennas and a refueling receptacle, hardening the hull, and obtaining FAA airworthiness certification. According to the Air Force, these costs have been factored into its latest program estimate. The initial cost estimate for the radar program, managed separately from the E-10A program, has grown. Prior to entering system development, OSD determined that projected costs were understated and directed the Air Force to increase its funding by $154 million. The Air Force acknowledges that funding for the E-10A program is also a major concern. Funding cuts have delayed its start. The program has undergone two congressional budget reductions; the first cut, in fiscal year 2003 ($343 million), required a significant program replanning effort. The second cut, in fiscal year 2005 ($115 million), resulted in schedule delays for the planned test program, system integration lab, testbed aircraft delivery, and the E-10A’s first flight. The Air Force states that these cuts have caused the planned initial operating capability date to slip 3 years to 2015. A third cut, proposed by OSD in December 2004, reduces the program’s budget request by $300 million in each of fiscal years 2006 and 2007—a total reduction of $600 million. The program office is in the process of evaluating the impact of these reductions, and officials indicated that because they represent a reduction of about 45 percent in each year, they will have a significant impact on the program if they are sustained. OSD officials indicated that efforts related to aircraft development and the delivery of the test aircraft will likely bear the bulk of the reductions. This will have an impact on planned program milestones. They said it was important to keep the radar program funded because it is developing the radar planned for the new Global Hawk unmanned aerial vehicle in addition to the E-10A. 
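The cumulative effect of the reductions described above can be checked with a short arithmetic sketch. The implied annual budget request of roughly $667 million is our own inference from the stated 45 percent figure, not a number reported by the program office:

```python
# Sum of the E-10A budget reductions described in the report
# (all figures in millions of dollars).
cuts = {
    "FY2003 congressional reduction": 343,
    "FY2005 congressional reduction": 115,
    "FY2006 OSD-proposed reduction": 300,
    "FY2007 OSD-proposed reduction": 300,
}

total = sum(cuts.values())
print(f"Total reductions to date: ${total} million")

# If a $300 million cut is about 45 percent of a year's request, the
# implied request is roughly 300 / 0.45, or about $667 million per year.
# This back-of-the-envelope figure is our inference, not a program number.
implied_annual_request = 300 / 0.45
print(f"Implied annual budget request: about ${implied_annual_request:.0f} million")
```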
The E-10A acquisition strategy raises concern because key decisions are planned before critical product knowledge is available. For example, the strategy for developing the first E-10A increment does not allow for adequate integration or prototype demonstration to ensure the design is stable at the system critical design review. System integration allows program officials to measure the stability of a product’s design and its ability to meet established requirements. Both commercial companies and DOD recognize that the attainment of this knowledge is demonstrated by the completion of most engineering drawings and some demonstration of system-level capabilities in a prototype. A stable design that meets requirements should be achieved by the critical design review, before system demonstration and initial manufacturing of production representative products begin. However, the Air Force does not expect to deliver the battle management command and control and radar subsystems to the integration laboratory until 2008 and 2009, respectively, after the critical design review scheduled for 2007. The transition of the battle management command and control and radar subsystems from the integration lab to the 767-400ER test airframe is not scheduled to begin until late 2009, nearly 2 years after the critical design review and only a few months prior to the program’s production commitment decision. As a result, critical knowledge about the basic performance of key subsystems integrated into an actual E-10A prototype will not be available until 2010 (see fig. 1). Additionally, the fully integrated E-10A prototype will not be available for testing prior to the scheduled decision to begin production. This strategy requires significant concurrency among the technology development, product development, and production phases and places decision makers at a disadvantage by not knowing whether the E-10A can meet system performance and reliability requirements before transitioning into production. 
In fact, the results of operational testing are not scheduled to be available until four of the six planned E-10As are already in production in 2011, greatly increasing the risks of costly design changes and schedule delays later in the program (see fig. 2). Our past reviews have found this to be a high-risk acquisition approach. The Air Force is planning to use an incremental approach to achieve the E-10A’s full capability, with each subsequent increment adding capability. Although an incremental approach can reduce risks, the failure to capture critical knowledge while developing the first increment will likely reduce the benefits of such an approach. As currently planned, there will be four distinct E-10A increments. Program officials are planning to conduct major program decision reviews prior to beginning development and demonstration of each increment. This approach, if implemented as planned, will provide decision makers with an opportunity to review the program’s progress and risk before making further investment decisions, thus reducing risk in the program. The first increment is expected to provide users with many of the system’s basic required capabilities. Those capabilities include cruise missile defense and onboard command and control capability for processing, displaying, and communicating the data needed to address time-sensitive targets. Subsequent increments will enhance the system’s capabilities, moving them closer to objective levels by increasing the amount of data processing and analysis done by computers and decreasing the amount done by human analysts with computer assistance, thus shortening the time it takes to make decisions. However, if the first increment falters, the Air Force will likely spend increasing amounts of time and money to achieve this initial capability, thereby delaying subsequent increments. 
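The schedule concurrency described above can be sketched by lining up the report's milestone years and flagging decisions that are scheduled before the knowledge that should inform them. Only the years come from the report; the milestone labels and the flagging logic are our own illustration:

```python
# Scheduled E-10A milestone years taken from the report; the pairing of
# each decision with a supporting knowledge event is our illustration.
milestones = {
    "critical design review": 2007,
    "radar subsystem delivered to integration lab": 2009,
    "production commitment decision": 2010,
    "operational test results available": 2011,
}

# Knowledge-based practice: each decision should follow the knowledge
# event that supports it. Here both pairs are out of order.
checks = [
    ("critical design review", "radar subsystem delivered to integration lab"),
    ("production commitment decision", "operational test results available"),
]

for decision, evidence in checks:
    if milestones[decision] <= milestones[evidence]:
        print(f"Concurrency risk: {decision} ({milestones[decision]}) "
              f"precedes {evidence} ({milestones[evidence]})")
```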
The current conditions surrounding the development of the E-10A business case portend poor outcomes if requirement, resource, and acquisition strategy deficiencies are not resolved before system development and demonstration begins. The decision to start a major weapon system acquisition program for the E-10A requires an executable business case that demonstrates the E-10A is the best way to satisfy the gap in the warfighter’s capability and that the concept can be developed and produced within existing resources. An evolutionary and knowledge-based acquisition strategy is needed to ensure this business case can be executed within planned goals. The Air Force and OSD are still determining whether a sound business case exists. Questions still surrounding the business case include: Is the E-10A the most cost-effective alternative? How extensive a battle management command and control capability is needed? Are technologies at a high level of maturity? Is there sufficient funding to develop and deliver the capability in time? The acquisition strategy also fails to capture critical design, manufacturing, and reliability data in time to inform investment decisions for moving the program through development into production. The gaps in knowledge increase the likelihood that the Air Force will not be able to deliver on the cost, schedule, and performance goals in its business case. Because gaps exist in the information needed to make a sound business case to start a major acquisition program, we recommend that the Secretary of Defense ensure that the open business case questions are answered before a decision is made to start the E-10A program. 
Additionally, to ensure a greater likelihood of success if the E-10A program is approved to begin, we recommend the Secretary direct the Air Force to revise the acquisition strategy to ensure sufficient time is included in the schedule to (1) integrate and demonstrate the design before moving past the critical design review and (2) test a production representative E-10A prototype before starting production. DOD provided us with written comments on a draft of this report. The comments appear in appendix II. DOD concurred with our recommendation that the Secretary ensure that the open business case questions are answered before a decision is made to start the E-10A program. DOD provided some information on the current status of these questions and implied that some of the business case questions have been answered. We believe that until the OSD/Program Analysis and Evaluation study is completed and final results are provided to OSD acquisition decision makers, the business case questions remain open. DOD partially concurred with our recommendation that the Secretary direct the Secretary of the Air Force to revise the E-10A acquisition strategy to ensure sufficient time is available to (1) integrate and demonstrate the design before moving past the critical design review and (2) test a production representative E-10A before starting production. Regarding (1), DOD stated that OSD policy does not require the integration and demonstration of a design before critical design review. We disagree. Section E1.1.14 of Department of Defense Directive 5000.1, The Defense Acquisition System, states that “PMs…shall reduce integration risk and demonstrate product design prior to the design readiness review.” DOD’s design readiness review is required to end the system integration phase of system development and demonstration. Additionally, DOD’s entrance criterion for the demonstration phase requires a demonstration of the integrated product in a prototype. 
Nonetheless, DOD stated that it is restructuring the program with the goal of demonstrating the radar and battle management technologies in a prototype before starting system development and demonstration. This approach incorporates the knowledge-based approach inherent in commercial best practices and endorsed by DOD policy. In its comments, DOD acknowledges that this approach will increase confidence in the program’s cost estimate and allow time to evaluate the aircraft platform. Regarding (2), DOD stated that the Milestone C decision for low-rate initial production will be based on initial test results from a production representative E-10A aircraft system. While the program schedule in effect at the time of our review did not indicate this, we believe this approach is more consistent with a knowledge-based acquisition strategy. By testing a production representative aircraft prior to committing to production, DOD will be able to reduce program risks and make informed decisions based on actual system capabilities and performance information. DOD also provided technical comments on our report. We made changes where appropriate, but many of these comments were based on a new acquisition strategy that plans to delay the E-10A program’s Milestone B decision until 2010. We did not make DOD’s recommended changes to the report that reflected this new schedule because it has not been approved and we have not had the opportunity to review it. We are sending copies of this report to the Secretary of Defense and the Secretaries of the Air Force, the Army, and the Navy. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you have any questions concerning this report. Other key contributors to this report were Martin Campbell, Michael Hazard, Travis Masters, Rae Ann Sapp, David Schilling, and John Krump. 
During our review, we discussed the E-10A program with officials from the following organizations in the Office of the Secretary of Defense: the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Director, Defense Systems/Developmental Test and Evaluation; the Director, Operational Test and Evaluation; the Director, Defense Research and Engineering; and the Director, Program Analysis and Evaluation. We also discussed the E-10A with the technical director of the Joint Theater Air and Missile Defense Organization. In addition, we discussed the program with officials from several organizations in the Air Force. These officials included representatives from the Information Dominance Directorate within the Office of the Assistant Secretary for Acquisition; the Directorate of Operational Requirements; the Command, Control, Communications, Intelligence, and Reconnaissance Center at Langley Air Force Base; the Electronic Systems Center at Hanscom Air Force Base; and the Aeronautical Systems Center at Wright-Patterson Air Force Base. To determine the progress the Air Force had made in developing the business case for the E-10A, we obtained available information on the system’s requirements and resources. However, the information we received on resources such as technology maturity, cost, funding, quantities, and schedule was limited. We discussed this information with knowledgeable program office and oversight officials. We also contacted officials studying force structure issues that could affect the requirements for the E-10A program. To assess the validity of the proposed business case, we compared the E-10A information with best commercial practices and DOD policy guidance for new development programs. Because the E-10A program has not yet been approved to enter system development and demonstration, specific information on the system’s technology readiness assessment and total program cost and funding was not available. 
As a result, we could not conduct a detailed assessment of these elements of the business case. However, because of other related information, such as the status of the software-intensive battle management command and control subsystem, the significant reduction in funding for fiscal years 2006 and 2007, and the ongoing studies to answer OSD concerns, we were able to conclude that at the time of our review key business case elements were still not mature enough to begin product development. For example, complex and software-intensive subsystems in other programs have caused major problems that delayed achieving technology maturity, and the Air Force has only recently directed the contractor to begin early systems engineering efforts to determine a preliminary design for the E-10A battle management subsystem. Additionally, the $600 million reduction in funding planned for the first 2 years will almost certainly require the program to extend its planned schedule, resulting in additional costs and funding requirements not yet estimated. These are business case elements that need to be firmly established before the upcoming Milestone B decision point. To determine the soundness of the E-10A’s acquisition strategy, we obtained available information on the program’s original and revised acquisition plans from the program office and discussed it with functional oversight and program officials. In addition, we compared the E-10A’s planned strategy to best commercial practices and DOD’s knowledge-based acquisition policy. However, since our analysis, the program’s budget request was reduced by a total of $600 million in fiscal years 2006 and 2007. We conducted our review from January 2004 to January 2005 in accordance with generally accepted government auditing standards.
The Air Force is on the verge of making a major commitment to the multibillion-dollar E-10A Multi-sensor Command and Control Aircraft program. Due to the substantial investment needed and the technological challenges in developing the aircraft, the Subcommittee on Tactical Air and Land Forces asked GAO to examine the soundness of the E-10A business case as well as the risks associated with the current acquisition strategy. As the E-10A Multi-sensor Command and Control Aircraft program nears its official starting point, questions remain regarding critical elements of its business case, including the need for the aircraft, the maturity level of its technology, and its funding. Plans call for the E-10A to couple a new radar system with a sophisticated, software-intensive battle management command and control system aboard a Boeing 767. The E-10A is planned to fill a current gap in U.S. capabilities and provide a defense against weapons such as cruise missiles. The Office of the Secretary of Defense is still working on a study to determine whether the E-10A program is the most cost-effective way to fill that gap. E-10A program funding plans changed dramatically in December 2004, when DOD proposed reducing the total program budget by about 45 percent for the next 2 fiscal years. The business case for starting a development program requires demonstrated evidence that (1) the warfighter need exists and that it can best be met with the chosen concept and (2) the concept can be developed and produced within existing resources--including design knowledge, demonstrated technologies, adequate funding, and adequate time to deliver the product. E-10A requirements and resources are still in flux. GAO found risks associated with the current E-10A acquisition strategy that could lead to costly changes later in the program. The program is set to move into production before critical knowledge is acquired. 
For example, the first fully assembled E-10A, outfitted with its radar and battle management command and control systems, would not be delivered in time to complete testing before the decision is made to begin production. Testing and production are scheduled to start at the same time in 2010. Furthermore, four of six E-10As are scheduled to begin production before the results of testing are available. By not demonstrating that the system can perform as expected before entering production, the program increases the risk of changes and delays later in the program. This strategy requires significant concurrency among the technology development, product development, and production phases.
NIEs analyze issues of major importance and long-term interest to the United States and are the IC’s most authoritative projection of future developments in a particular subject area. NIEs are intended to help policymakers and military leaders think through critical issues by presenting the relevant key facts, judgments about the likely course of events in foreign countries, and the implications for the United States. In this regard, former Director of Central Intelligence (DCI) William Casey stated: “the highest duty of a Director of Central Intelligence is to produce solid and perceptive national intelligence estimates relevant to the issues with which the President and the National Security Council need to concern themselves.” NIEs are produced by the National Intelligence Council (NIC), an organization composed of 12 National Intelligence Officers who report directly to the DCI. To prepare an NIE, the NIC brings together analysts from all the intelligence agencies that have expertise on the issue under review. However, in the final analysis, an NIE is the DCI’s assessment, with which the heads of the U.S. intelligence agencies concur, except as noted in the NIE’s text. 
Based on a synthesis of the published views of current and former senior intelligence officials, the reports of three independent commissions, and a Central Intelligence Agency (CIA) publication that addressed the issue of national intelligence estimating, an objective NIE should meet the following standards: quantify the certainty level of its key judgments by using percentages or “bettors’ odds,” where feasible, and avoid overstating the certainty of judgments; identify explicitly its assumptions and judgments; develop and explore “alternative futures”: less likely (but not impossible) scenarios that would dramatically change the estimate if they occurred; allow dissenting views on predictions or interpretations; and note explicitly what the IC does not know when the information gaps could have significant consequences for the issues under consideration. All or part of the three NIEs we reviewed addressed the nature of the current and future threat to the United States from foreign missiles. NIE 95-19 was specifically prepared by the IC to support decisions on missile defense systems for North America. In the United States, this issue is a critical one for the Congress and the administration as they debate the desirability and planned characteristics of a proposed multibillion-dollar national missile defense system. Such a system would aim to protect the United States from limited ballistic missile attacks, whether accidental, unauthorized, or deliberate. Ballistic missiles are self-propelled missiles guided in the ascent of a high-arch trajectory and freely falling in the descent. If launched from any of the 18 countries analyzed in NIE 95-19 (except Cuba), such missiles would have to travel between 5,000 and 13,000 kilometers (3,100 to 8,100 miles) to reach North America, classifying them as intercontinental ballistic missiles (ICBM). The main judgment of NIE 95-19 was worded with clear (100 percent) certainty. 
We believe this level of certainty was overstated, based on the caveats and intelligence gaps noted in NIE 95-19. On the issue of certainty in judgments, in 1992 then-DCI Robert Gates opined: “While we strive for sharp and focused judgments for a clear assessment of likelihood, we must not dismiss alternatives or exaggerate our certainty under the guise of making the ‘tough calls.’ We are analysts, not umpires, and the game does not depend on our providing a single judgment.” The wording of NIE 95-19’s main judgment implies a 100-percent level of certainty that the predicted outcome will hold true during the next 15 years. However, the caveats and intelligence gaps noted in the NIE do not support this level of certainty. For example, at the beginning of NIE 95-19, the estimate notes “as with all projections of long-term developments, there are substantial uncertainties.” A 1993 NIE stated its view that substantial uncertainties cloud the IC’s ability to project developments, especially beyond 10 years. Finally, in NIE 95-19’s Intelligence Gaps section, it noted several shortcomings in the IC’s collection of information on foreign plans and capabilities. NIE 95-19 did not (1) quantify the certainty level of nearly all of its key judgments, (2) identify explicitly its critical assumptions, and (3) develop alternative futures. However, in accordance with standards for producing objective NIEs, NIE 95-19 acknowledged dissenting views from several agencies and also explicitly noted what information the IC does not know that bears upon the foreign missile threat. Given the important role NIEs play in the national security decision-making process, U.S. policymakers require, and expect, objective estimates. “The paramount value [in NIEs] is objectivity,” according to a former NIC Vice Chairman. 
Adds the CIA, “dedication to objectivity—tough-minded evaluation of information, description of sources, and explicit defense of judgments—provides credibility on uncertain and often controversial policy issues.” We believe that five standards, previously discussed, apply to an objective NIE. These standards were synthesized from our review of the published views of nine current or former senior intelligence officials, three independent commissions, and a CIA publication that addressed the issue of national intelligence estimating. We were unable to obtain the DCI’s current, official standards (if any exist) for the essential elements of an objective NIE, because the DCI refused to grant us access to the NIC. (See our Scope and Methodology section for more details on this scope impairment.) NIE 95-19 did not quantify the certainty level associated with its key judgments, by either using bettors’ odds or percentages. It used unquantified words or phrases such as “unlikely,” “likely,” “probably,” “normally,” “sometimes,” “some leakage,” and “feasible, but unlikely.” The CIA has told its analysts to be precise in conveying the levels of confidence they have in their conclusions because policymakers and others rely on these assessments as they define and defend U.S. interests. Different people can hear very different messages from the same words, especially about probabilities, and therefore good estimates should use quantitative measures of confidence, according to a former NIC Vice Chairman. For example, a “small but significant” chance could mean one chance in a hundred to one person; for another it may mean one chance in five. Similarly, a former NIC Chairman wrote that NIEs with only words such as “possibly” are not of much help to someone trying to make an important decision. 
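The ambiguity the former officials describe can be made concrete with a small sketch. The two readings of the phrase below are hypothetical reader interpretations, not an IC or CIA standard; the bettors'-odds conversion itself is simple arithmetic:

```python
# Hypothetical readings of one verbal probability phrase by two different
# readers (illustrative only; not an IC or CIA standard).
readings_of_small_but_significant = {
    "reader A": (1, 100),   # "one chance in a hundred"
    "reader B": (1, 5),     # "one chance in five"
}

def odds_to_percent(chances: int, total: int) -> float:
    """Convert bettors' odds, e.g. 1 in 5, to a percentage."""
    return 100.0 * chances / total

for reader, (chances, total) in readings_of_small_but_significant.items():
    pct = odds_to_percent(chances, total)
    print(f"{reader} hears 'small but significant' as {pct:g} percent")

# Stating a numeric range, as NIE 93-17 did with "10 to 30 percent,"
# removes this twenty-fold spread in interpretation.
```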
Instead, where feasible, NIEs should use a percentage, a percentage range, or bettors’ odds to better serve policymakers—a controversial, but necessary, approach, according to this former official. Some intelligence judgments, such as estimating foreign economic developments well into the future, may not easily lend themselves to specifying a meaningful level of confidence, using numbers. NIE 93-17 quantified the certainty of one of its key judgments by estimating a “small but significant chance (10 to 30 percent)” that an event would occur. The certainty levels of its other key judgments were not quantified. NIE 93-19 did not quantify the certainty levels of any of its key judgments. NIE 95-19 did not explicitly identify its critical assumptions either by separately listing them in one place or by introducing them throughout the text with wording such as “we have assumed . . .” Critical assumptions, also known as “linchpin assumptions,” are defined by CIA as analysts’ debatable premises that hold the argument together and warrant the validity of judgments. Therefore, as previously mentioned, assumptions should be explicitly distinguished from other information, including judgments. Estimative judgments are to be defended by fully laying out the evidence and carefully explaining the analytic logic used, according to a former Deputy Director for Intelligence, CIA. Writing about NIEs, a former Vice Chairman of the NIC agreed. As a general rule, the more complex and controversial an issue, the more analytic effort is required to ensure that critical assumptions are precisely stated and well defended, according to the CIA. Good analysis will clearly identify its key assumptions so that policymakers are aware of the “foundations” of the estimate and can therefore judge for themselves the appropriateness of the assumptions and the desirability of initiating actions to hedge against a failure of one or more assumptions. 
From our reading of NIE 95-19, we identified what appear to be its implicit critical assumptions. Most of these assumptions first appear in the NIE’s Key Judgments section, leading the reader to believe that the IC considers these assumptions to be fact-based judgments. However, we did not find a body of evidence in NIE 95-19 that would allow us to consider these statements as judgments rather than assumptions. NIE 95-19 had only one explicit assumption, which was not a critical one, concerning Iraq. Some of NIE 95-19’s implicit critical assumptions are listed below. Three other assumptions that we identified included classified information.

- The Missile Technology Control Regime (MTCR) will continue to significantly limit international transfers of missiles, components, and related technology, but some leakage of components and critical technologies will likely continue.
- No country with ICBMs will sell them.
- Three countries—all of which were assessed as being “high” in both technical ability and economic resources—will not be interested in developing an ICBM that could reach the United States (and elsewhere).
- A flight test program lasting about 5 years is essential to the development of an ICBM.
- An attack against the United States from off-shore ships using cruise missiles, while feasible, is unlikely to occur . . .

In addition, NIE 95-19 did not specify its assumption about the payload weight or weights the IC used in forecasting the range for North Korea’s Taepo Dong 2 ballistic missile. Publicly, the NIC’s Chairman has stated that the Taepo Dong 2 missile could have a range sufficient to reach Alaska, some U.S. territories in the Pacific, and the far western portion of the 2,000 km-long Hawaiian Island chain. NIE 95-19 did, however, specify payload weights for the Taepo Dong 1 missile. NIE 93-19 explicitly analyzed the effects of changes in payload weight on the estimated range of ballistic missiles. 
The payload weight directly affects the range of a missile—that is, a lighter payload allows any given missile to travel farther. For example, the IC judges that a certain country could increase the range of its existing intermediate-range ballistic missile by 90 percent if it decreased its payload weight by 70 percent. Like NIE 95-19, the 1993 NIEs did not, as a rule, explicitly identify their critical assumptions. However, in one case, the text of NIE 93-17 prefaced its judgment with a clear assumption about the current nuclear practices in one country. NIE 95-19 did not develop alternative futures: less likely (but not impossible) scenarios that would dramatically change the estimate if they occurred. NIEs should “describe the range of possible outcomes, including relatively unlikely ones that could have major impact on American interests, and indicate which outcomes they think are most likely and why . . . The job, after all, is not so much to predict the future as to help policymakers think about the future,” according to a former NIC Chairman. The CIA, then-DCI Robert Gates, and other senior NIC officials agree that NIEs should analyze alternative futures. A senior intelligence official told us that an alternative future takes a fundamental analytic assumption and varies it to explore different potential outcomes; for example, “What if countries do not honor the MTCR?” Both 1993 NIEs explored alternative futures. NIE 93-19 mentioned them in the NIE’s text and explored them in detail in a separate annex. NIE 93-17’s Key Judgments included alternative futures, which were further developed through detailed scenarios. These alternative futures are classified. NIE 95-19 disclosed that it did not account for alternative economic and political futures. NIE 95-19 did address some less likely technical options, including the characteristics and implications of a potential ICBM program of one country. NIE 95-19 had 12 dissents. 
NIE 93-19 and NIE 93-17 had 23 and 2 dissents, respectively. According to a February 1996 statement by the current Chairman of the NIC, “The process for producing NIEs is directed particularly at ensuring presentation of all viewpoints. We do not impose consensus; in fact we encourage the many agencies that participate in NIEs to state their views and we display major differences of view in the main text. Lesser reservations are expressed in footnotes.” While all three NIEs included dissenting views, the dissents were qualitatively different among the NIEs. For example, NIE 93-19’s Key Judgments contained two fundamental disagreements by one department on the overall potential for proliferation of nuclear weapons and on the nuclear weapons program of a specific country. Other dissents in the body of this estimate were also of a fundamental nature. In one instance, one department took an “alternative view” to NIE 93-19’s forecasts about ICBM and space launch vehicle development and transfers. This alternative view from 1993 is very similar to the consensus view of NIE 95-19’s main judgment. Neither NIE 95-19 nor NIE 93-17 had dissents in its Key Judgments. The dissents in the body of these NIEs were mostly on technical issues and contained classified information. NIE 95-19 and the 1993 NIEs explicitly noted information gaps at places in the estimates’ text and in a separate Intelligence Gaps section. Estimates should reveal what intelligence analysts do not know that could have significant consequences for the issue under consideration, according to several sources. This disclosure not only helps alert policymakers to the limits of the estimate, but also informs intelligence collectors of needs for further information, according to a former NIC Chairman. 
In their Intelligence Gaps sections, the three NIEs each noted shortfalls in the IC’s collection of information on the issues they examined. NIE 95-19 worded its judgments on foreign missile threats very differently than did the 1993 NIEs, even though the judgments in all three NIEs were not inconsistent with each other; that is, while the judgments were not synonymous, upon careful reading they did not contradict each other. In addition, the evidence in NIE 95-19 was qualitatively and quantitatively different from that in the 1993 NIEs. Details of other differences and the wording of judgments do not appear in this report because they contain classified information. Finally, the NIEs agreed on several points. Because the DCI denied us access to officials responsible for the NIEs, we were unable to obtain their reasons for the different wording chosen in the three NIEs. In general, the 1993 NIEs pointed out unfavorable and unlikely outcomes associated with foreign ICBMs more often than did NIE 95-19. A table that compares the exact wording of judgments on foreign missile threats in the three NIEs does not appear in this report because it contains classified information. The evidence in NIE 95-19 is considerably less than that presented in the earlier NIEs, in both quantitative and qualitative terms. Laying out the evidence is important because it allows readers to judge for themselves how much credence to give the judgments, according to a former Vice Chairman of the NIC. In quantitative terms, the earlier NIEs had at least one supporting volume with additional evidence and judgments. Each of the 1993 NIEs was over three times as long as NIE 95-19. The 1993 NIEs backed each of their key judgments with more support than did NIE 95-19. 
For example, NIE 93-19, which, unlike NIE 95-19, was not focused on foreign missile threats, had almost twice the supporting evidence on missile threats as NIE 95-19 did when comparing the same countries. In addition, and in contrast to NIE 95-19, both of the 1993 NIEs referred readers to other IC studies for additional evidence or information. In qualitative terms, we believe the earlier NIEs provided more convincing support for their key judgments. For example, NIE 95-19 stated that “no countries with ICBMs will sell them.” For support, the NIE included one paragraph that cited a multinational counterproliferation regime (the MTCR) and the theory that countries with ICBMs would probably be concerned that any missiles they sell might be turned against them. The NIE provided very little evidence to support its position that membership in the MTCR (or pledges to abide by the MTCR, in China’s case) would necessarily prevent a country from selling missiles. The NIE asserted that the MTCR had helped terminate missile programs in specific countries, but it provided no evidence to support its view. The NIE did not cite additional evidence, such as intelligence on whether MTCR members have or have not sold missiles or missile technology in the past, or whether countries have refrained from selling such technology because of the MTCR. In addition, the NIE provided no evidence or detailed analysis to support its position that countries will not sell ICBMs because they would probably fear that the missiles could be turned against them. In contrast to NIE 95-19, the earlier NIEs supported their judgments more thoroughly. Detailed examples contain classified information and do not appear in this report. We were unable to identify the reasons why NIE 95-19 presented less evidence to support its judgments than the 1993 NIEs, because NIC officials refused to meet with us to discuss the preparation of NIE 95-19. 
The reasons could include limitations on NIE 95-19’s length, its SECRET/Releasable to “Country X” security classification (compared to the TOP SECRET/Codeword classification of the 1993 NIEs), and/or a smaller evidentiary base. In addition to the similarities between the NIEs on some judgments, the NIEs agreed on several other points, including the impact of foreign technology assistance on ICBM development, and the capabilities and intentions of two countries with respect to ICBM development. The conclusions of unclassified government, or government-sponsored, studies on foreign missile threats to the United States were generally consistent with the conclusions of NIE 95-19. However, whereas NIE 95-19’s main judgment was that there will be no new missile threats to the contiguous 48 states during the next 15 years, two studies estimated some possibility—“low” and “quite low”—of such missile threats. The private studies we reviewed differed significantly from NIE 95-19’s assessment of threats; these studies raised more immediate concerns about foreign missile threats to the United States. For example, the Heritage Foundation’s Missile Defense Study Team concluded that ballistic missiles pose a clear, present, and growing threat to the United States. We reviewed several recent unclassified studies on foreign missile threats to the United States and its interests. We identified these studies through a literature search of several databases that include defense and intelligence information. We limited our review to complete studies on this topic, and we did not include newspaper or journal articles. While we compared the conclusions of these studies to NIE 95-19, we did not review the quality of their evidence or attempt to reconcile any differences they had with NIE 95-19. 
In a November 1993 letter to the Chairman of the House Committee on Armed Services, the CIA provided the declassified findings of its report entitled Prospects for the Worldwide Development of Ballistic Missile Threats to the Continental United States. The study’s scope excluded countries with a current capability to strike the continental United States (CONUS)—China and strategic forces in several states of the former Soviet Union. The study concluded that the “probability is low that any other country will acquire this capability in the next 15 years.” Also, the study found that “no evidence exists that any of the countries examined in this study are developing missiles—especially ICBMs—for the purpose of attacking CONUS.” The letter identified no recommendations. In June 1995, the Congressional Research Service issued a report for the Congress entitled Ballistic and Cruise Missile Forces of Foreign Countries. The report was written by Robert Shuey, a specialist in U.S. foreign policy and national defense. The report stated that “Other than the declared nuclear powers (the United States, China, France, Russia, and the United Kingdom) few countries have long-range missiles.” It also said that North Korea is in the process of developing longer-range ballistic missiles, including the Taepo Dong 2. The report concluded that “the production or international transfer of more and better ballistic and cruise missiles will potentially have serious negative implications for the security of U.S. citizens and facilities . . .” The report contained no recommendations. In April 1996, the Office of the Secretary of Defense released a study entitled Proliferation: Threat and Response. The key finding in the report was that the threat was changing from global to regional. The report did not address the current ballistic missile threat to the United States. The report did note, however, that “ . . . 
unlike during the Cold War, those who possess nuclear, biological, and chemical weapons may actually come to use them.” The report concluded that “The end of the Cold War has reduced the threat of a global nuclear war, but today a new threat is rising from the global spread of nuclear, biological, and chemical weapons.” The report contained no recommendations and gave no indication of an increasing missile threat to the United States itself. In February 1993, the Strategic Defense Initiative Organization of the Department of Defense released a commissioned report entitled The Emerging Ballistic Missile Threat to the United States. The report was prepared by the Proliferation Study Team, chaired by Lieutenant General William E. Odom, USA (ret.), Director of National Security Studies at the Hudson Institute. The report found that at this point there is no indication that Brazil, India, Italy, Israel, Germany, Japan, and Sweden—countries that possess the potential to develop ICBMs during the 1990s—have any intention of initiating an ICBM program. The report estimated that, if current trends continue, the probability of new ICBM threats during the 1990s or in the very early years of the next decade is quite low. In reaching its conclusion that “the prospects for an increase in ballistic missile threats to the United States during this decade are limited,” the study team identified three uncertainties that affected its ability to forecast confidently 10 to 20 years into the future. First, intelligence indicators are often ambiguous. Second, a number of events could alter the capabilities or intentions of some states to field long-range ballistic missiles. Third, dramatic and rapid changes in U.S. political relations with states possessing or capable of fielding long-range missiles could occur. The report made no recommendations. In July 1991, the Cato Institute published Foreign Policy Briefing No. 
10 entitled Countdown to Disaster: The Threat of Ballistic Missile Proliferation. This study was prepared by Channing R. Lukefahr, an associate defense policy analyst at the Cato Institute, as part of the Institute’s regular series evaluating government policies and offering proposals for reform. The key findings of the study were that “As the horizontal proliferation of ballistic missile technology continues, the threat of an accidental launch rises,” and that “while the threat that unstable or antagonistic regimes will achieve the ability to launch intercontinental ballistic missiles . . . moves rapidly toward reality, attempts to reverse that destabilizing trend have been merely exercises in delay.” The study concluded that “the days when weapons of mass destruction and the systems to deliver them are possessed by only the two super-powers . . . are rapidly drawing to a close” and that “although there is no imminent threat to the United States from any of those nations, continuation of that state of affairs cannot be guaranteed . . . an ally can become an enemy in a matter of months.” The report cited stronger secessionist forces in the Soviet Union as undermining the central control of nuclear weapons, making possible the accidental launch of a few dozen or even a few hundred missiles, as well as a limited launch by rogue elements. The report’s sources were congressional testimony and articles in journals, magazines, and newspapers. The report recommended the development and deployment of antiballistic missile systems. In March 1996, the Heritage Foundation released a document entitled Defending America: Ending America’s Vulnerability to Ballistic Missiles. This was an update to a June 1995 report entitled Defending America: A Near- and Long-Term Plan to Deploy Missile Defenses. The Missile Defense Study Team was chaired by Ambassador Henry Cooper, former Director of the Strategic Defense Initiative Organization. 
The main finding of the reports was that the United States had no defense against ICBMs. The initial report said that ICBMs marketed as space launchers could provide rogue states with the ability to attack the United States. The update cited, but did not identify, authoritative administration officials as having testified to the Congress in May 1995 that rogue states could threaten U.S. cities with long-range missile attacks in 3 to 5 years. The reports concluded that ballistic missiles pose a clear, present, and growing threat to America and her allies overseas, and they recommended a decision to deploy, when technically feasible, the Navy’s Upper Tier interceptor system and the Brilliant Eyes space-based sensor system. The NIC did not comment on our draft report. On July 10, 1996, we wrote to the NIC’s Chairman and requested his views on our draft report. On July 22, 1996, the DCI’s Director of Congressional Affairs replied to us and stated that they would not comment on the substance or accuracy of our draft report because these issues “fall under the purview of intelligence oversight arrangements established by the Congress.” As requested, the DCI’s staff provided us with a security classification review, which we have incorporated into our final report. Our scope included a detailed review of NIE 95-19 and a comparison of this NIE to NIE 93-17, NIE 93-19, and recent unclassified studies. We did not attempt to independently evaluate foreign missile threats to the United States. To assess the objectivity of the NIEs, we used various IC and other sources to develop standards for producing objective NIEs. Then we carefully reviewed NIE 95-19 and the two earlier NIEs to determine whether they met those standards. To compare NIE 95-19 to the 1993 NIEs, we conducted detailed comparisons of the judgments, evidence, and structure of the NIEs. The 1993 NIEs had a different focus than NIE 95-19, so we could not make direct comparisons in some areas. 
For example, unlike NIE 95-19, the earlier NIEs did not address the Third World cruise missile threat. To compare NIE 95-19 to other unclassified studies, we conducted a variety of literature searches to identify such studies. Where possible, we identified the sources of data used by these studies; however, we did not evaluate the quality of their evidence or attempt to reconcile any differences they had with NIE 95-19. Our scope was significantly impaired by a lack of cooperation by officials from the CIA, NIC, and the Departments of Defense and State. The Departments of Defense and State would not allow us access to their records. Defense and State spokespersons referred us to the DCI on all matters concerning NIEs. On March 6, 1996, we wrote to the DCI’s Director of Congressional Affairs and requested access to CIA and NIC officials and documents. On June 17, 1996, he replied to us and declined to cooperate with our review. His letter argued that our review of certain NIEs would be contrary to oversight arrangements for intelligence that the Congress has established. Specifically, he stated that “such subjects are under the direct purview of Congressional entities that have been charged with overseeing the Intelligence Community.” Therefore, we were unable to discuss preparation of the NIEs with cognizant officials or review supporting documentation at the departments and agencies previously mentioned. Due to this lack of access, we also could not review other NIEs that may have covered topics similar to those in NIE 95-19. Except as previously mentioned, our review was conducted from April to June 1996 in accordance with generally accepted government auditing standards. At your request, we plan no further distribution of this report until 30 days after its issue date. 
At that time, we will provide copies to other congressional committees; the Chairman, President’s Foreign Intelligence Advisory Board; the Secretaries of State, Defense, and Energy; the Chairman, NIC; and the Director of Central Intelligence. Copies will also be made available to others on request. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report. Major contributors to this report were Gary K. Weeter, Assistant Director; Douglas M. Horner, Evaluator-in-Charge; Stephen L. Caldwell, Senior Evaluator; and James F. Reid, Senior Evaluator.

Richard Davis
Director, National Security Analysis
Pursuant to a congressional request, GAO analyzed the soundness of certain national intelligence estimates (NIE) on the threat to the United States from foreign missile systems, focusing on comparing the content and conclusions of NIE 95-19, which analyzed emerging threats to North America during the next 15 years, with the content and conclusions of two previous NIEs prepared in 1993. GAO found that: (1) the main judgment of NIE 95-19, that no country, other than the major declared nuclear powers, will develop or otherwise acquire a ballistic missile in the next 15 years that could threaten the contiguous 48 states or Canada, was worded with clear, 100-percent certainty; (2) GAO believes this level was overstated, based on the caveats and the intelligence gaps noted in NIE 95-19; (3) NIE 95-19 had additional analytic shortcomings, since it did not: (a) quantify the certainty level of nearly all of its key judgments; (b) identify explicitly its critical assumptions; and (c) develop alternative futures; (4) however, in accordance with standards for producing objective NIEs, NIE 95-19 acknowledged dissenting views from several agencies and also explicitly noted what information the U.S. intelligence community does not know that bears upon the foreign missile threat; (5) the 1993 NIEs met more of the standards than NIE 95-19 did; and (6) NIE 95-19 worded its judgments on foreign missile threats very differently than did the 1993 NIEs, even though the judgments in all three NIEs were not inconsistent with each other, that is, while the judgments were not synonymous, upon careful reading they did not contradict each other.
Insurance is a mechanism for spreading risk over time, across large geographical areas, and among industries and individuals. While private insurers assume some financial risk when they write policies, they employ various strategies to manage risk so that they earn profits, limit potential financial exposure, and build capital needed to pay claims. For example, insurers charge premiums for coverage and establish underwriting standards, such as refusing to insure customers who pose unacceptable levels of risk or limiting coverage in particular geographic areas. Insurance companies may also purchase reinsurance to cover specific portions of their financial risk. Reinsurers use strategies similar to primary insurers to limit their risks. Under certain circumstances, the private sector may determine that a risk is uninsurable. For example, homeowner policies typically do not cover flood damage because private insurers are unwilling to accept the risk of potentially catastrophic losses associated with flooding. In other instances, the private sector may be willing to insure a risk, but at rates that are not affordable to many property owners. Without insurance, affected property owners must rely on their own resources or seek out disaster assistance from local, state, and federal sources. In situations where the private sector will not insure a particular type of risk, the public sector may create markets to ensure the availability of insurance. The federal government operates two such programs—the NFIP and the FCIC. NFIP provides insurance for flood damage to homeowners and commercial property owners in more than 20,000 communities. Homeowners with mortgages from federally regulated lenders on property in communities identified as being in high flood risk areas are required to purchase flood insurance on their dwellings. Optional, lower cost flood insurance is also available under the NFIP for properties in areas of lower flood risk. 
NFIP offers coverage for both the property and its contents, which may be purchased separately. FCIC insures agricultural commodities on a crop-by-crop and county-by-county basis, based on farmer demand and the level of risk associated with the crop in a given region. Major crops, such as grains, are covered in almost every county where they are grown, while specialty crops, such as fruit, are covered only in some areas. Participating farmers can purchase different types of crop insurance at different levels. Assessments by leading scientific bodies suggest that climate change could significantly alter the frequency or severity of weather-related events, such as drought and hurricanes. These bodies report that the Earth warmed during the twentieth century—1.3 degrees Fahrenheit (0.74 degrees Celsius) from 1906 to 2005, according to a recent IPCC report—and is projected to continue to warm for the foreseeable future. While temperatures have varied throughout history, triggered by natural factors such as volcanic eruptions or changes in the earth’s orbit, the key scientific assessments we reviewed have generally concluded that the observed increase in temperature in the past 100 years cannot be explained by natural variability alone. In recent years, major scientific bodies such as the IPCC, NAS, and the United Kingdom’s Royal Society have concluded that human activities are significantly increasing the concentrations of greenhouse gases and, in turn, global temperatures. Assuming continued growth in the atmospheric concentration of greenhouse gases, the latest assessment of computer climate models projects that average global temperatures will warm by an additional 3.2 to 7.2 degrees Fahrenheit (1.8 to 4.0 degrees Celsius) during the next century. 
Based on model projections and expert judgment, the IPCC reported that future increases in the earth’s temperature are likely to increase the frequency and severity of many damaging extreme weather-related events (summarized in Table 1). The IPCC recently published summaries of two of the three components of its Fourth Assessment Report. The first, in which the IPCC summarized the state of the physical science, reports higher confidence in projected patterns of warming and other regional-scale features, including changes in wind patterns, precipitation, and some aspects of extreme events such as drought, heavy precipitation events, and hurricanes. The second, in which the IPCC addresses climate impacts and vulnerabilities, reported that the potential societal impacts from changes in temperature and extreme events vary widely across sector and region. For example, although the IPCC projects that moderate climate change may increase yields for some rain-fed crops, crops that are near their warm temperature limit or that depend on heavily used water resources face many challenges. Additionally, local crop production in any affected area may be negatively impacted by projected increases in the frequency of droughts or floods. Furthermore, the IPCC stated that the economic and social costs of extreme weather events will increase as these events become more intense and/or more frequent. Rapidly growing coastal areas are particularly vulnerable, and the IPCC notes that readiness for increased exposure in these areas is low. These reports have not been publicly released in their entirety, but are expected sometime after May 2007. In addition to the IPCC’s work, CCSP is assessing potential changes in the frequency or intensity of weather-related events specific to North America in a report scheduled for release in 2008. 
According to a National Oceanic and Atmospheric Administration official and agency documents, the report will focus on weather extremes that have a significant societal impact, such as extreme cold or heat spells, tropical and extra-tropical storms, and droughts. Importantly, officials have said the report will provide an assessment of the observed changes in weather and climate extremes, as well as future projections. Based on an examination of loss data from several different sources, we found that insurers incurred about $321.2 billion in weather-related losses from 1980 through 2005. In particular, as illustrated in Figure 1, our analysis found that weather-related losses accounted for 88 percent of all property losses paid by insurers during this period. All other property losses, including those associated with earthquakes and terrorist events, accounted for the remainder. Weather-related losses varied significantly from year to year, ranging from just over $2 billion in 1987 to more than $75 billion in 2005. Private insurers paid $243.5 billion—over 75 percent of the total weather-related losses we reviewed. The two major federal insurance programs—NFIP and FCIC—paid the remaining $77.7 billion of the $321.2 billion in weather-related loss payments we reviewed. NFIP paid about $34.1 billion, or about 11 percent of the total weather-related loss payments we reviewed during this period. As illustrated in Figure 2, claims averaged about $1.3 billion per year, but ranged from $75.7 million in 1988 to $16.7 billion in 2005. Since 1980, FCIC claims totaled $43.6 billion, or about 14 percent of all weather-related claims during this period. As illustrated in Figure 3, FCIC losses averaged about $1.7 billion per year, ranging from $531.8 million in 1987 to $4.2 billion in 2002. The largest insured losses in the data we reviewed were associated with catastrophic weather events. 
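The payment shares cited above follow directly from the dollar figures; as a quick arithmetic check (all amounts in billions of dollars, taken from this report):

```python
# Weather-related insured loss payments, 1980-2005, in billions of dollars
# (all figures taken from this report).
total_weather = 321.2
private = 243.5
nfip = 34.1
fcic = 43.6

# The two federal programs account for the remainder of the total.
assert abs((private + nfip + fcic) - total_weather) < 0.1

shares = {name: round(100 * amount / total_weather)
          for name, amount in [("private", private), ("NFIP", nfip), ("FCIC", fcic)]}
print(shares)  # {'private': 76, 'NFIP': 11, 'FCIC': 14}
```

The rounded shares match the report’s characterizations: over 75 percent private, about 11 percent NFIP, and about 14 percent FCIC.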
Notably, crop insurers and other property insurers both face catastrophic weather-related risks, although the nature of the events for each is very different. In the case of crop insurance, drought accounted for more than 40 percent of weather-related loss payments from 1980 to 2005, and the years with the largest losses were associated with drought. Taken together, though, hurricanes were the most costly event in the data we reviewed. Although the United States experienced an average of only two hurricanes per year from 1980 through 2005, weather-related claims attributable to hurricanes totaled more than 45 percent of all weather-related losses—more than $146 billion. Moreover, as illustrated in Table 2, these losses appear to have increased during the past three decades. Several recent studies have commented on the apparent increases in hurricane losses during this time period, and weather-related disaster losses generally, with markedly different interpretations. Some argue that loss trends are largely explained by changes in societal and economic factors, such as population density, cost of building materials, and the structure of insurance policies. Others argue that increases in losses have been driven by changes in climate. To address the issue, Munich Re—one of the world’s largest reinsurance companies—and the University of Colorado’s Center for Science and Technology Policy Research jointly convened a workshop in Germany in May 2006 to assess factors leading to increasing weather-related losses. The workshop brought together a diverse group of international experts in the fields of climatology and disaster research. Workshop participants agreed that long-term records of disaster losses indicate that societal change and economic development are the principal factors explaining weather-related losses. 
However, participants also agreed that changing patterns of extreme events are drivers for recent increases in losses, and that additional increases in losses are likely, given IPCC’s projections. The close relationship between the value of the resource exposed to weather-related losses and the amount of damage incurred may have ominous implications for a nation experiencing rapid growth in some of its most disaster-prone areas. AIR Worldwide, a leading catastrophe modeling firm, recently reported that insured losses should be expected to double roughly every 10 years because of increases in construction costs, increases in the number of structures, and changes in their characteristics. AIR’s research estimates that, because of exposure growth, probable maximum catastrophe loss—an estimate of the largest possible loss that may occur, given the worst combination of circumstances—grew in constant 2005 dollars from $60 billion in 1995 to $110 billion in 2005, and it will likely grow to over $200 billion during the next 10 years. Major private and federal insurers are responding differently to the prospect of increasing weather-related losses associated with climate change. Many large private insurers are incorporating both near and longer-term elements of climatic change into their risk management practices. On the other hand, for a variety of reasons, the federal insurance programs have done little to develop the kind of information needed to understand the programs’ long-term exposure to climate change. Catastrophic weather events pose a unique financial threat to private insurers’ financial success because a single event can cause insolvency or a precipitous drop in earnings, liquidation of assets to meet cash needs, or a downgrade in the market ratings used to evaluate the soundness of companies in the industry. 
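AIR's figures imply a steady underlying growth rate, which can be recovered with a little arithmetic. The sketch below assumes smooth exponential growth between the quoted estimates (our simplification for illustration, not AIR's stated methodology):

```python
import math

# Probable maximum catastrophe loss estimates (constant 2005 dollars,
# in billions), as quoted from AIR Worldwide in this testimony.
pml_1995, pml_2005 = 60.0, 110.0

# Implied average annual growth rate over the 10-year span, assuming
# smooth exponential growth.
rate = (pml_2005 / pml_1995) ** (1 / 10) - 1
doubling_time = math.log(2) / math.log(1 + rate)
pml_2015 = pml_2005 * (1 + rate) ** 10

print(f"implied growth: {rate:.1%} per year")        # about 6.2% per year
print(f"doubling time: {doubling_time:.0f} years")   # roughly a decade
print(f"projected 2015 PML: ${pml_2015:.0f} billion")  # just over $200 billion
```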
To prevent these disruptions, the American Academy of Actuaries (AAA)—the professional society that establishes, maintains, and enforces standards of qualification, practice, and conduct for actuaries in the United States—recommends, among other steps, that insurers measure their exposure to catastrophic weather-related risk. In particular, AAA emphasizes the shortcomings of estimating future catastrophic risk by extrapolating solely from historical losses, and endorses a more rigorous approach that incorporates underlying trends and factors in weather phenomena and current demographic, financial, and scientific data to estimate losses associated with various weather-related events. In our interviews with 11 of the largest private insurers operating in the U.S. property casualty insurance market, we sought to determine what key private insurers are doing to estimate and prepare for risks associated with potential climatic changes arising from natural or human factors. Representatives from each of the 11 major insurers we interviewed told us they incorporate near-term increases in the frequency and intensity of hurricanes into their risk estimates. Six specifically attributed the higher frequency and intensity of hurricanes to a 20- to 40-year climatic cycle of fluctuating temperatures in the north Atlantic Ocean, while the remaining five insurers did not elaborate on the elements of climatic change driving the differences in hurricane characteristics. In addition to managing their aggregate exposure on a near-term basis, some of the world’s largest insurers have also taken a longer-term strategic approach to changes in catastrophic risk. Six of the 11 private insurers we interviewed reported taking one or more additional actions when asked if their company addresses climatic change in their weather-related risk management processes. 
These activities include monitoring scientific research (4 insurers), simulating the impact of a large loss event on their portfolios (3 insurers), and educating others in the industry about the risks of climatic change (3 insurers), among others. Moreover, major insurance and reinsurance companies, such as Allianz, Swiss Re, Munich Re, and Lloyds of London, have published reports that advocate increased industry awareness of the potential risks of climate change, and outline strategies to address the issue proactively. NFIP and FCIC have not developed information on the programs’ longer-term exposure to the potential risk of increased extreme weather events associated with climate change as part of their risk management practices. The goals of the key federal insurance programs are fundamentally different from those of private insurers. Whereas private insurers stress the financial success of their business operations, the statutes governing the NFIP and FCIC promote affordable coverage and broad participation by individuals at risk over the programs’ financial self-sufficiency by offering discounted or subsidized premiums. Also unlike the private sector, the NFIP and the FCIC have access to additional federal funds during high-loss years. Thus, neither program is required to assess and limit its catastrophic risk strictly within its ability to pay claims on an annual basis. Instead, to the extent possible, each program manages its risk within the context of its broader purposes in accordance with authorizing statutes and implementing regulations. Nonetheless, an improved understanding of the programs’ financial exposure is becoming increasingly important. Notably, the federal insurance programs’ liabilities have grown significantly, which leaves the federal government increasingly vulnerable to the financial impacts of catastrophic events. 
Data obtained from both the NFIP and FCIC programs indicate the federal government has grown markedly more exposed to weather-related losses. Figure 4 illustrates the growth of both programs’ exposure from 1980 to 2005. For NFIP, the program’s total coverage increased fourfold in constant dollars during this period, from about $207 billion to $875 billion in 2005, due to increasing property values and a doubling of the number of policies from 1.9 million to more than 4.6 million. The FCIC has effectively increased its exposure base 26-fold during this period. In particular, the program has significantly expanded the scope of crops covered and increased participation. The main implication of the exposure growth for both the programs is that the magnitude of potential claims, in absolute terms, is much greater today than in the past. Neither program has assessed the implications of a potential increase in the frequency or severity of weather-related events on program operations, although both programs have occasionally attempted to estimate their aggregate losses from potential catastrophic events. For example, FCIC officials stated that they had modeled past events, such as the 1993 Midwest Floods, using current participation levels to inform negotiations with private crop insurers over reinsurance terms. However, NFIP and FCIC officials explained that these efforts were informal exercises, and were not performed on a regular basis. Furthermore, according to NFIP and FCIC officials, both programs’ estimates of weather-related risk rely heavily on historical weather patterns. As one NFIP official explained, the flood insurance program is designed to assess and insure against current— not future—risks. Over time, agency officials stated, this process has allowed their programs to operate as intended. 
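The NFIP growth figures can be checked directly from the numbers above; this sketch uses only values quoted in this testimony:

```python
# NFIP exposure growth, 1980 to 2005 (constant dollars, in billions),
# and policy counts, as quoted in this testimony.
coverage_1980, coverage_2005 = 207.0, 875.0
policies_1980, policies_2005 = 1.9e6, 4.6e6

coverage_multiple = coverage_2005 / coverage_1980   # roughly fourfold
policy_multiple = policies_2005 / policies_1980     # more than doubled

# Average coverage per policy also grew, consistent with rising
# property values driving part of the increase.
per_policy_1980 = coverage_1980 * 1e9 / policies_1980
per_policy_2005 = coverage_2005 * 1e9 / policies_2005

print(f"coverage grew {coverage_multiple:.1f}x, policies {policy_multiple:.1f}x")
print(f"avg coverage per policy: ${per_policy_1980:,.0f} -> ${per_policy_2005:,.0f}")
```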
However, unlike private sector insurers, neither program has conducted an analysis of the potential impacts of an increase in the frequency or severity of weather-related events on continued program operations in the long term. While comprehensive information on federal insurers’ long-term exposure to catastrophic risk associated with climate change may not inform the NFIP’s or FCIC’s day-to-day operations, it could nonetheless provide valuable information for the Congress and other policymakers who need to understand and prepare for fiscal challenges that extend well beyond the two programs’ near-term operational horizons. We have highlighted the need for this kind of strategic information in recent reports that have expressed concern about the looming fiscal imbalances facing the nation. In particular, we observed that, “Our policy process will be challenged to act with more foresight to take early action on problems that may not constitute an urgent crisis but pose important long-term threats to the nation’s fiscal, economic, security, and societal future.” The prospect of increasing program liabilities, coupled with expected increases in the frequency and severity of weather events associated with climate change, would appear to fit into this category. Agency officials identified several challenges that could complicate their efforts to assess these impacts at the program level. Both NFIP and FCIC officials stated there was insufficient scientific information on projected impacts at the regional and local level to accurately assess their impact on the flood and crop insurance programs. However, members of the insurance industry have analyzed and identified the potential risks climatic change poses to their business, despite similar challenges. Moreover, as previously discussed, both the IPCC and CCSP are expected to release significant assessments of the likely effect of increasing temperatures on weather events in coming months. 
The experience of many private insurers, who must proactively respond to longer-term changes in weather-related risk to remain solvent, suggests the kind of information that needs to be developed to make sound strategic decisions. Specifically, to help ensure their future viability, a growing number of private insurers are actively incorporating the potential for climate change into their strategic-level analyses. In particular, some private insurers have run a variety of simulation exercises to determine the potential business impact of an increase in the frequency and severity of weather events. For example, one insurer simulated the impact of multiple large weather events occurring simultaneously. We believe a similar analysis could provide Congress with valuable information about the potential scale of losses facing the NFIP and FCIC in coming decades, particularly in light of the programs’ expansion over the past 25 years. We believe that the FCIC and NFIP are uniquely positioned to provide strategic information on the potential impacts of climate change on their programs—information that would be of value to key decision makers charged with a long-term focus on the nation’s fiscal health. Most notably, in exercising its oversight responsibilities, the Congress could use such information to examine whether the current structure and incentives of the federal insurance programs adequately address the challenges posed by potential increases in the frequency and severity of catastrophic weather events. While the precise content of these analyses can be debated, the activities of many private insurers already suggest a number of strong possibilities that may be applicable to assessing the potential implications of climate change on the federal insurance programs. 
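A simulation of the kind described can be sketched in a few lines. The model below is a toy of our own construction, not any insurer's actual method: annual event counts are drawn from a Poisson distribution and per-event losses from a lognormal, with a stressed scenario in which both frequency and severity are raised; all parameter values are invented for illustration.

```python
import random
import math

def simulate_annual_losses(freq, median_loss_bn, sigma, years=10_000, seed=7):
    """Toy Monte Carlo: Poisson event counts, lognormal per-event losses.

    freq            expected catastrophic events per year
    median_loss_bn  median loss per event, in billions of dollars
    sigma           lognormal shape parameter (loss variability)
    """
    rng = random.Random(seed)
    mu = math.log(median_loss_bn)
    totals = []
    for _ in range(years):
        # Poisson draw by inversion (adequate for small freq)
        n, p, threshold = 0, 1.0, math.exp(-freq)
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            n += 1
        totals.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n)))
    return totals

baseline = simulate_annual_losses(freq=2.0, median_loss_bn=5.0, sigma=1.0)
stressed = simulate_annual_losses(freq=2.5, median_loss_bn=6.0, sigma=1.0)

def pml(losses, pct=0.99):  # approximate 1-in-100-year annual loss
    return sorted(losses)[int(pct * len(losses))]

print(f"baseline: mean {sum(baseline)/len(baseline):.1f}bn, 99th pct {pml(baseline):.1f}bn")
print(f"stressed: mean {sum(stressed)/len(stressed):.1f}bn, 99th pct {pml(stressed):.1f}bn")
```

Even this crude sketch makes the strategic point: a modest shift in frequency and severity moves the tail of the annual loss distribution disproportionately, which is what a program-level assessment would need to quantify.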
Accordingly, our report recommended that the Secretary of Agriculture and the Secretary of Homeland Security direct the Administrator of the Risk Management Agency and the Under Secretary of Homeland Security for Emergency Preparedness to assess the potential long-term implications of climate change for the FCIC and the NFIP, respectively, and report their findings to the Congress. This analysis should use forthcoming assessments from the Climate Change Science Program and the Intergovernmental Panel on Climate Change to establish sound estimates of expected future conditions. Both agencies expressed agreement with this recommendation. In addition, at an April 19, 2007, hearing on our report convened by the Senate Homeland Security and Governmental Affairs Committee, Chairman Joseph Lieberman and Ranking Member Susan Collins directed the agencies to provide the Committee a deadline by which they plan to transmit this assessment to the Congress in fulfillment of this recommendation. Chairman Lieberman also asked the agencies to prepare and disseminate this assessment independent of any annual reports to the Congress. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Committee may have. For further information about this testimony, please contact me, John Stephenson, at 202-512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Steve Elstein, Assistant Director; Chase Huntley; Micah McMillan; Alison O’Neill; Kate Robertson; and Lisa Van Arsdale. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Weather-related events in the United States have caused tens of billions of dollars in damages annually over the past decade. A major portion of these losses is borne by private insurers and by two federal insurance programs--the Federal Emergency Management Agency's National Flood Insurance Program (NFIP), which insures properties against flooding, and the Department of Agriculture's Federal Crop Insurance Corporation (FCIC), which insures crops against drought or other weather disasters. In this testimony, GAO (1) describes how climate change may affect future weather-related losses, (2) provides information on past insured weather-related losses, and (3) determines what major private insurers and federal insurers are doing to prepare for potential increases in such losses. This testimony is based on a report entitled Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades are Potentially Significant (GAO-07-285) released on April 19, 2007. Key scientific assessments report that the effects of climate change on weather-related events and, subsequently, insured and uninsured losses, could be significant. The global average surface temperature has increased over the past century and climate models predict even more substantial, perhaps accelerating, increases in temperature in the future. Assessments by key governmental bodies generally found that rising temperatures are expected to increase the frequency and severity of damaging weather-related events, such as flooding or drought, although the timing and magnitude are as yet undetermined. Additional research on the effect of increasing temperatures on weather events is expected in the near future. Taken together, private and federal insurers paid more than $320 billion in claims on weather-related losses from 1980 to 2005. 
Claims varied significantly from year to year--largely due to the effects of catastrophic weather events such as hurricanes and droughts--but have generally increased during this period. The growth in population in hazard-prone areas and resulting real estate development have generally increased liabilities for insurers, and have helped to explain the increase in losses. Due to these and other factors, federal insurers' exposure has grown substantially. Since 1980, NFIP's exposure nearly quadrupled to almost $1 trillion in 2005, and program expansion increased FCIC's exposure 26-fold to $44 billion. Major private and federal insurers are both exposed to the effects of climate change over coming decades, but are responding differently. Many large private insurers are incorporating climate change into their annual risk management practices, and some are addressing it strategically by analyzing its potential long-term industry-wide impacts. In contrast, federal insurers have not developed and disseminated comparable information on long-term financial impacts. GAO acknowledges that the federal insurance programs are not profit-oriented, like private insurers. Nonetheless, a strategic assessment of the potential implications of climate change for the major federal insurance programs would help the Congress manage an emerging high-risk area with significant implications for the nation's growing long-term fiscal imbalance.
|
Crude oil prices are the fundamental determinant of gasoline prices. As figure 1 shows, crude oil and gasoline prices have generally followed a similar path over the past three decades and have risen considerably over the past few years. Refining capacity also plays a role in determining how gasoline prices vary across different locations and over time. Refinery capacity in the United States has not expanded at the same pace as demand for gasoline and other petroleum products in recent years. The American Petroleum Institute recently reported that U.S. average refinery capacity utilization has increased to 92 percent. As a result, domestic refineries have little room to expand production in the event of a temporary supply shortfall. Furthermore, the fact that imported gasoline comes from farther away than domestically produced gasoline means that when supply disruptions occur in the United States it might take longer to get replacement gasoline than if we had excess refining capacity in the United States. This could cause gasoline prices to rise and stay high until the imported supplies can reach the market. Gasoline inventories maintained by refiners or marketers of gasoline can also have an impact on prices. Like a number of other industries, the petroleum products industry has adopted so-called “just-in-time” delivery processes to reduce costs, leading to a downward trend in the level of gasoline inventories in the United States. For example, in the early 1980s private companies held stocks of gasoline in excess of 35 days of average U.S. consumption, while in 2004 these stocks were equivalent to less than 25 days of consumption. While lower costs of holding inventories may reduce gasoline prices, lower levels of inventories may also cause prices to be more volatile because when a supply disruption occurs, there are fewer stocks of readily available gasoline to draw from, putting upward pressure on prices. Regulatory factors also play a role. 
For example, in order to meet national air quality standards under the Clean Air Act, as amended, many states have adopted the use of special gasoline blends—so-called “boutique fuels.” As we reported in a recent study, there is a general consensus that higher costs associated with supplying special gasoline blends contribute to higher gasoline prices, either because of more frequent or more severe supply disruptions, or because higher costs are likely passed on, at least in part, to consumers. Finally, the structure of the gasoline market can play a role in determining prices. For example, mergers raise concerns about potential anticompetitive effects because mergers could result in greater market power for the merged companies, potentially allowing them to increase prices above competitive levels. On the other hand, mergers could also yield cost savings and efficiency gains, which may be passed on to consumers through lower prices. Ultimately, the impact depends on whether market power or efficiency dominates. During the 1990s, the U.S. petroleum industry experienced a wave of mergers, acquisitions, and joint ventures, several of them between large oil companies that had previously competed with each other for the sale of petroleum products. More than 2,600 merger transactions have occurred since 1991 involving all three segments of the U.S. petroleum industry. Almost 85 percent of the mergers occurred in the upstream segment (exploration and production), while the downstream segment (refining and marketing of petroleum) accounted for about 13 percent, and the midstream segment (transportation) accounted for about 2 percent. The vast majority of the mergers—about 80 percent—involved one company’s purchase of a segment or asset of another company, while about 20 percent involved the acquisition of a company’s total assets by another so that the two became one company. 
Most of the mergers occurred since the second half of the 1990s, including those involving large partially or fully vertically integrated companies. For example, in 1998 British Petroleum (BP) and Amoco merged to form BPAmoco, which later merged with ARCO, and in 1999 Exxon, the largest U.S. oil company, merged with Mobil, the second largest. Since 2000, we found that at least 8 large mergers have occurred. Some of these mergers have involved major integrated oil companies, such as the Chevron-Texaco merger, announced in 2000, to form ChevronTexaco, which went on to acquire Unocal in 2005. In addition, Phillips and Tosco announced a merger in 2001 and the resulting company, Phillips, then merged with Conoco to become ConocoPhillips. Independent oil companies have also been involved in mergers. For example, Devon Energy and Ocean Energy, two independent oil producers, announced a merger in 2003 to become the largest independent oil and gas producer in the United States. Petroleum industry officials and experts we contacted cited several reasons for the industry’s wave of mergers since the 1990s, including increasing growth, diversifying assets, and reducing costs. Economic literature indicates that enhancing market power is also sometimes a motive for mergers, which could reduce competition and lead to higher prices. Ultimately, these reasons mostly relate to companies’ desire to maximize profits or stock values. Mergers in the 1990s contributed to increases in market concentration in the refining and marketing segments of the U.S. petroleum industry, while the exploration and production segment experienced little change in concentration. Econometric modeling we performed of eight mergers that occurred in the 1990s showed that the majority resulted in small wholesale gasoline price increases. The effects of some of the mergers were inconclusive, especially for boutique fuels sold in the East Coast and Gulf Coast regions and in California. 
While we have not performed modeling on mergers that occurred since 2000, and thus cannot comment on any potential additional effect on wholesale gasoline prices, these mergers would further increase market concentration nationwide since there are now fewer oil companies. Proposed mergers in all industries are generally reviewed by federal antitrust authorities—including the Federal Trade Commission (FTC) and the Department of Justice (DOJ)—to assess the potential impact on market competition and consumer prices. According to FTC officials, FTC generally reviews proposed mergers involving the petroleum industry because of the agency’s expertise in that industry. To help determine the potential effect of a merger on market competition, FTC evaluates, among other factors, how the merger would change the level of market concentration. Conceptually, when market concentration is higher, the market is less competitive and it is more likely that firms can exert control over prices. DOJ and FTC have jointly issued guidelines to measure market concentration. The scale is divided into three separate categories: unconcentrated, moderately concentrated, and highly concentrated. The index of market concentration in refining increased all over the country during the 1990s, and changed from moderately to highly concentrated on the East Coast. In wholesale gasoline markets, market concentration increased throughout the United States between 1994 and 2002. Specifically, 46 states and the District of Columbia had moderately or highly concentrated markets by 2002, compared to 27 in 1994. While market concentration is important, other aspects of the market that may be affected by mergers also play an important role in determining the level of competition in a market. These aspects include barriers to entry, which are market conditions that provide established sellers an advantage over potential new entrants in an industry, and vertical integration. 
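The index the guidelines refer to is the Herfindahl-Hirschman Index (HHI), computed as the sum of squared market shares. A minimal sketch follows; the category thresholds are those of the 1992 DOJ/FTC Horizontal Merger Guidelines, and the market shares are invented for illustration:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

def category(index):
    # Thresholds from the 1992 DOJ/FTC Horizontal Merger Guidelines.
    if index < 1000:
        return "unconcentrated"
    if index <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

# Hypothetical regional wholesale market, before and after the two
# largest firms merge (shares in percent, summing to 100).
before = [20, 20, 20, 15, 15, 10]
after = [40, 20, 15, 15, 10]   # the two 20-percent firms combine

print(hhi(before), category(hhi(before)))  # 1750 moderately concentrated
print(hhi(after), category(hhi(after)))    # 2550 highly concentrated
```

The example shows how a single merger can move a market from the moderately to the highly concentrated category, which is the kind of shift the testimony reports for refining on the East Coast.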
Mergers may have also contributed to changes in these aspects. However, we could not quantify the extent of these changes because of a lack of relevant data. To estimate the effect of mergers on wholesale gasoline prices, we performed econometric modeling on eight mergers that occurred during the 1990s: Ultramar Diamond Shamrock (UDS)-Total, Tosco-Unocal, Marathon-Ashland, Shell-Texaco I (Equilon), Shell-Texaco II (Motiva), BP-Amoco, Exxon-Mobil, and Marathon Ashland Petroleum (MAP)-UDS. For the seven mergers that we modeled for conventional gasoline, five led to increased prices, especially the MAP-UDS and Exxon-Mobil mergers, where the increases generally exceeded 2 cents per gallon, on average. For the four mergers that we modeled for reformulated gasoline, two—Exxon-Mobil and Marathon-Ashland—led to increased prices of about 1 cent per gallon, on average. In contrast, the Shell-Texaco II (Motiva) merger led to price decreases of less than one-half cent per gallon, on average, for branded gasoline only. For the two mergers—Tosco-Unocal and Shell-Texaco I (Equilon)—that we modeled for gasoline used in California, known as California Air Resources Board (CARB) gasoline, only the Tosco-Unocal merger led to price increases. The increases were for branded gasoline only and were about 7 cents per gallon, on average. Our analysis shows that wholesale gasoline prices were also affected by other factors included in the econometric models, including gasoline inventories relative to demand, supply disruptions in some parts of the Midwest and the West Coast, and refinery capacity utilization rates. Our past work has shown that crude oil prices are the fundamental determinant of gasoline prices. Refinery capacity, gasoline inventory levels, and regulatory factors also play important roles. In addition, merger activity can influence gasoline prices. 
During the 1990s, mergers decreased the number of oil companies and refiners, and our findings suggest that this change caused wholesale prices to rise. The impact of more recent mergers is unknown. While we have not performed modeling on mergers that occurred since 2000, and thus cannot comment on any potential additional effect on wholesale gasoline prices, these mergers would further increase market concentration nationwide since there are now fewer oil companies. Our analysis of mergers during the 1990s differs from the approach taken by the FTC in reviewing potential mergers because our analysis was retrospective in nature—looking at actual prices and estimating the impacts of individual mergers on those prices—while FTC’s review of mergers takes place necessarily before the mergers. Going forward, we believe that, in light of our findings, both forward-looking and retrospective analyses of the effects of mergers on gasoline prices are necessary to ensure that consumers are protected from anticompetitive forces. In addition, we welcome this hearing as an opportunity for continuing public scrutiny and discourse on this important issue. We encourage future independent analysis by the FTC or other parties, and see value in oversight of the regulatory agencies in carrying out their responsibilities. Regardless of the causes, high gasoline prices specifically, and high energy prices in general, are a challenge for the nation. Rising demand for energy in the United States and across the world will put upward pressure on prices with potentially adverse economic impacts. Clearly none of the options for meeting the nation’s energy needs are without tradeoffs. Current U.S. energy supplies remain highly dependent on fossil energy sources that are costly, imported, potentially harmful to the environment, or some combination of these three, while many renewable energy options are currently more costly than traditional options. 
Striking a balance between efforts to boost supplies from alternative energy sources and policies and technologies focused on improved efficiency of petroleum-burning vehicles or on overall energy conservation presents challenges as well as opportunities. How we choose to meet the challenges and seize the opportunities will help determine our quality of life and economic prosperity in the future. We are currently studying gasoline prices in particular, and the petroleum industry more generally, including an analysis of the viability of the Strategic Petroleum Reserve, an evaluation of world oil reserves, and an assessment of U.S. contingency plans should oil imports from a major oil producing country, such as Venezuela, be disrupted. With this body of work, we will continue to provide Congress and the American people the information needed to make informed decisions on energy that will have far-reaching effects on our economy and our way of life. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 (or at [email protected]). Godwin Agbara, Samantha Gross, John Karikari, and Frank Rusco made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Soaring retail gasoline prices, increased oil company profits, and mergers of large oil companies have garnered extensive media attention and generated considerable public concern. Gasoline prices impact the economy because of our heavy reliance on motor vehicles. According to the Department of Energy's Energy Information Administration (EIA), each additional ten cents per gallon of gasoline adds about $14 billion to America's annual gasoline bill. Given the importance of gasoline for the nation's economy, it is essential to understand the market for gasoline and how prices are determined. In this context, this testimony addresses the following questions: (1) What factors affect gasoline prices? (2) What has been the pattern of oil company mergers in the United States in recent years? (3) What effects have mergers had on market concentration and wholesale gasoline prices? To address these questions, GAO relied on previous reports, including (1) a 2005 GAO primer on gasoline prices, (2) a 2005 GAO report on the proliferation of special gasoline blends, and (3) a 2004 GAO report on mergers in the U.S. petroleum industry. GAO also collected updated data from a number of sources that it deemed reliable. This work was performed in accordance with generally accepted government auditing standards. Crude oil prices are the major determinant of gasoline prices. A number of other factors also affect gasoline prices, including (1) refinery capacity in the United States, which has not expanded at the same pace as demand for gasoline and other petroleum products in recent years; (2) gasoline inventories maintained by refiners or marketers of gasoline, which, as in a number of other industries, have seen a general downward trend in recent years; and (3) regulatory factors, such as national air quality standards, that have induced some states to switch to special gasoline blends that have been linked to higher gasoline prices. 
Finally, the structure of the gasoline market can play a role in determining prices. For example, mergers raise concerns about potential anticompetitive effects because mergers could result in greater market power for the merged companies, potentially allowing them to increase prices above competitive levels. During the 1990s, the U.S. petroleum industry experienced a wave of mergers, acquisitions, and joint ventures, several of them between large oil companies that had previously competed with each other for the sale of petroleum products. During this period, more than 2,600 merger transactions occurred--almost 85 percent of the mergers occurred in the upstream segment (exploration and production), while the downstream segment (refining and marketing of petroleum) accounted for about 13 percent, and the midstream segment (transportation) accounted for about 2 percent. Since 2000, we found that at least 8 additional mergers have occurred, involving different segments of the industry. Petroleum industry officials and experts we contacted cited several reasons for the industry's wave of mergers since the 1990s, including increasing growth, diversifying assets, and reducing costs. Mergers in the 1990s contributed to increases in market concentration in the refining and marketing segments of the U.S. petroleum industry, while the exploration and production segment experienced little change in concentration. GAO evaluated eight mergers that occurred in the 1990s after they had been reviewed by the FTC--the FTC generally reviews proposed mergers involving the petroleum industry and only approves such mergers if they are deemed not to have anticompetitive effects. GAO's econometric modeling of these mergers showed that the majority resulted in small wholesale gasoline price increases. 
While mergers since 2000 also increased market concentration, we have not performed modeling on more recent mergers and thus cannot comment on any potential additional effect on wholesale gasoline prices.
Homeowners insurance provides consumers with financial protection against unexpected losses. Most homeowners policies provide a package of coverage that protects against damage to different types of property and liability for injuries and property damage policyholders cause to others. The following are the main types of coverage: Dwelling. Pays for damage to a house and to attached structures, including plumbing, electrical wiring, heating, and permanently installed air-conditioning systems. Other structures. Pays for damage to fences, tool sheds, freestanding garages, guest cottages, and other structures not attached to the dwelling. Personal property. Pays for the value of personal possessions, including furniture, electronics, appliances, and clothing damaged or lost even when they are not on the subject property—for instance, when they are in an off-site storage locker or with a child at college. Loss of use. Pays some additional living expenses while a home is being repaired. Personal liability. Pays for financial loss from lawsuits for injuries or damages to someone else. Medical payments. Pays medical bills for people hurt on the property or by a pet. Homeowners can purchase several types of insurance policies. These policies differ based on the perils they cover (see table 1). For example, named perils policies insure against losses caused by perils that are specifically listed in the policies. Open peril policies are broader policies that insure against losses caused by all perils, except those that are specifically excluded. However, according to industry participants with whom we spoke, currently no homeowners policies are available that cover every possible peril a homeowner could face. In addition to different policy types, homeowners may choose between three different levels of coverage for their homeowners policies. These options are: Actual cash value. This type of coverage pays to replace the home or possessions after deducting for depreciation. 
Value is determined by taking into consideration the age of the home and wear and tear. This level of coverage may not be enough to fully repair or replace the damage. Replacement cost. This type of coverage pays the cost of rebuilding or repairing the home or replacing possessions, up to the coverage limit, without a deduction for depreciation. It allows for the repair or rebuilding of the home by using materials of similar kind and quality. Guaranteed (or extended) replacement cost. This option is the most comprehensive and expensive. It pays a certain percentage, typically 20 to 25 percent, over the coverage limit to rebuild the home in the event that materials and labor costs increase as a result of a widespread disaster. The HO-2, HO-3, and HO-5 policies typically provide replacement cost coverage on the structures. Contents coverage under homeowner policies is typically provided on an actual cash value basis, unless the replacement cost option is purchased. The HO-8 typically provides coverage for older homes on an actual cash value basis. Full replacement cost policies may not be available for some older homes. Several factors affect the premiums consumers pay for their homeowners policies, including the type and characteristics of the home. For example, homes that are primarily brick or masonry typically have lower premiums than wood frame homes, and older homes and homes in poor condition tend to have higher premiums than newer homes and homes in good condition. A homeowner’s characteristics, such as a history of filing claims, and choices, such as the dollar amount of coverage selected, may also impact the premium cost. Other factors that impact the premium cost include the degree of exposure to catastrophes (such as hurricanes or earthquakes), the type of protection devices in the home (such as sprinkler or security systems), and the type of structures on the property (such as swimming pools or trampolines). 
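The three coverage levels described above can be contrasted with a small numeric sketch. All dollar figures here are hypothetical, chosen only to illustrate how the payout formulas differ:

```python
# Hypothetical illustration (not actual policy terms) of how the three
# coverage levels described above would pay for the same loss: a roof
# that costs $20,000 to replace, halfway through a 30-year useful life,
# under a policy with a $250,000 coverage limit.
replacement_cost = 20_000
depreciation_pct = 0.50   # 15 of 30 years of useful life consumed

# Actual cash value: replacement cost minus depreciation.
acv_payout = replacement_cost * (1 - depreciation_pct)

# Replacement cost coverage: full repair cost, up to the coverage limit,
# with no deduction for depreciation.
coverage_limit = 250_000
rc_payout = min(replacement_cost, coverage_limit)

# Guaranteed (extended) replacement cost: pays up to a stated percentage
# (typically 20 to 25 percent) above the limit if rebuilding costs surge.
surge_pct = 0.25
grc_ceiling = coverage_limit * (1 + surge_pct)

print(acv_payout, rc_payout, grc_ceiling)  # 10000.0 20000 312500.0
```

The gap between the $10,000 actual-cash-value payout and the $20,000 repair bill is the out-of-pocket shortfall the text warns "may not be enough to fully repair or replace the damage."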
In addition to paying premiums for their insurance policies, homeowners generally have to pay a deductible when they file a claim—that is, an amount of money a policyholder must pay before an insurance policy will pay for a loss. The deductible applies to both home and personal property coverage and is paid on each claim. Higher deductibles generally mean lower policy premiums. In some locations, there are also catastrophe deductibles that a homeowner must pay when a major natural disaster occurs. These are expressed as a percentage of the home’s insured value rather than as a fixed dollar amount. Insurance in the United States is primarily regulated at the state level. State insurance regulators are responsible for enforcing state insurance laws and regulations, including through the licensing of agents, the review of insurance products and premium rates, and the examination of insurers’ financial solvency and market conduct. The insurance regulators of the 50 states, the District of Columbia, and the U.S. territories created and govern the National Association of Insurance Commissioners (NAIC), which is the standard-setting and regulatory support organization for the U.S. insurance industry. Through the NAIC, state insurance regulators establish standards and best practices, conduct peer review, and coordinate their regulatory oversight. NAIC staff supports these efforts and represents the collective views of state regulators domestically and internationally. NAIC members, together with the central resources of the NAIC, form the national system of state-based regulation in the United States. Insurers assume some financial risk when writing policies, but also employ various strategies to manage risk so they can earn profits, limit potential financial exposures, and build the capital needed to pay claims. 
For example, insurance companies establish underwriting standards, such as refusing to insure customers who pose unacceptable levels of risk, or limiting coverage in particular geographic areas. Insurance companies may also purchase reinsurance, or insurance for insurance companies, to cover specific portions of their financial risk. For catastrophic losses, insurers may also sell financial instruments such as catastrophe bonds. Reinsurers use similar strategies to manage their risks. Both insurers and reinsurers must also predict the frequency and severity of insured losses with some reliability to best manage financial risk. In some cases, these events can be fairly predictable. For example, the incidence of most automobile claims is predictable, and losses generally do not occur to large numbers of policyholders at the same time. However, some infrequent events, such as hurricanes, are so severe that they pose unique challenges for insurers and reinsurers. The unpredictability and sheer size of these types of events can result in substantial losses that deplete insurers’ and reinsurers’ capital. If a company believes that the risk of loss is unacceptably high given the rate that can be charged, it may decide not to offer coverage. If the private sector will not insure a particular type of risk, the public sector may create markets to ensure the availability of insurance. For example, several states have established Fair Access to Insurance Requirements (FAIR) plans, which pool resources from insurers doing business in the state to make property insurance available to property owners who cannot obtain coverage in the private insurance market, or cannot do so at an affordable rate. In addition, some states have established windstorm insurance pools that pool resources from private insurers to make insurance for wind risks available to property owners who cannot obtain it in the private insurance market. 
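The reinsurance arrangement mentioned above can be sketched as a simple loss-splitting function. The retention and layer figures below are hypothetical, chosen to show the mechanics of a common excess-of-loss structure:

```python
# Hypothetical excess-of-loss reinsurance sketch: the insurer retains
# losses up to a retention point and cedes the next layer, up to the
# reinsured limit, to a reinsurer. All dollar figures are illustrative.
def split_loss(loss: float, retention: float, layer_limit: float):
    """Return (insurer_retained, reinsurer_paid) for one gross loss."""
    ceded = min(max(loss - retention, 0.0), layer_limit)
    return loss - ceded, ceded

# A $120 million catastrophe against a $50 million retention and a
# $60 million reinsured layer:
retained, ceded = split_loss(120e6, retention=50e6, layer_limit=60e6)
print(retained / 1e6, ceded / 1e6)  # 60.0 60.0
```

Note that the insurer keeps both the retention and any amount above the layer, which is why a sufficiently severe catastrophe can still deplete its capital, as the text describes.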
At the federal level, Congress established NFIP in 1968 to provide flood coverage to homeowners where voluntary markets do not readily exist. FEMA is responsible for the oversight and management of NFIP. Designed to help reduce the cost of federal assistance after floods, NFIP may be the sole source of insurance to some residents of flood-prone areas. Participating communities are required to adopt and enforce floodplain management regulations, thereby reducing the risks of flooding and the costs of repairing flood damage. Under the program, the federal government assumes the liability for covered losses and sets premium rates and coverage limitations. Like private insurance companies, federal and state government insurance programs also collect premiums, but their rates do not always reflect the risks that the programs assume, in part because they are designed to keep insurance affordable for most homeowners. Homeowners policies provide protections against a number of perils that can impact individuals and families. Policies, however, do not protect against all perils that homeowners could face. Various sections of a homeowners insurance policy outline perils covered and those the policy excludes. Policy provisions and other factors, such as location, can impact the coverage insurers offer homeowners. Homeowners insurance policies typically cover a range of perils and are critical to providing financial protection against losses such as fire and theft, among others. According to many industry participants, the Insurance Services Office’s (ISO) HO-3 policy is the most commonly purchased homeowners policy and outlines the typical coverage and exclusions found in most homeowners policies. The HO-3 covers a person’s dwelling or home and other structures—such as detached garage, or shed—against all perils except those specifically excluded. This is commonly known as “open perils” coverage. 
For possessions, or personal property, the HO-3 covers only those perils listed in the policy, typically the 16 outlined below in table 2. Insurers refer to this kind of coverage as “named peril” coverage. Private insurers determine which perils are insurable risks on the basis of certain characteristics or criteria of the losses associated with them, including the following: Losses that are definite and measurable. The loss should be definite or determinate in time, place, and cause, and the insurer must be capable of setting a dollar value on the amount of the loss. Losses that are sudden, random, and accidental. The loss must result from chance and not be something that is certain to happen. If a future loss were sure to occur, coverage would have to be priced at the full value of the loss plus an additional amount for the expenses incurred. Losses that are not catastrophic. The losses should not affect a very large percentage of an insurance company’s policyholders at the same time in, for example, a limited geographic area. The losses should be independent of each other in order to spread and minimize risk. Additionally, the peril should not be so catastrophic that the insurer would be unable to charge a sufficient premium to cover the exposure. Losses for which the law of large numbers applies. There must be a sufficiently large number of homogeneous units exposed to random losses, both historically and prospectively, to make the future losses reasonably predictable. This principle works best when there are many losses with similar characteristics spread across a large group. The greater the experience with losses, the better insurers can estimate both the frequency and severity of future losses. When these criteria are generally satisfied, the insurer can add other expenses and profits to the expected losses and determine a price that is appropriate for the risk. 
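The law-of-large-numbers criterion described above can be illustrated with a small simulation. The claim frequency and severity below are hypothetical; the point is only that the average loss per policy becomes more predictable as the pool grows:

```python
import random

# Illustrative simulation of the law of large numbers: the average loss
# per policy converges toward its expected value as the number of
# independent, homogeneous exposures grows. All figures are hypothetical.
random.seed(42)

P_LOSS = 0.01        # annual probability that a policy produces a claim
SEVERITY = 100_000   # dollars paid per claim

def avg_loss_per_policy(n_policies: int) -> float:
    """Simulate one policy year and return the average loss per policy."""
    claims = sum(1 for _ in range(n_policies) if random.random() < P_LOSS)
    return claims * SEVERITY / n_policies

expected = P_LOSS * SEVERITY  # $1,000 expected loss per policy
for n in (100, 10_000, 1_000_000):
    observed = avg_loss_per_policy(n)
    error = abs(observed - expected) / expected
    print(f"{n:>9,} policies: average loss ${observed:,.0f} (off by {error:.1%})")
```

With only 100 policies the simulated average swings widely around the $1,000 expectation; with a million independent policies it lands within about 1 percent, which is what lets an insurer set a premium slightly above expected loss with confidence. The simulation also shows why catastrophes break this logic: a hurricane makes the losses correlated rather than independent, so pooling no longer reduces the variability.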
Insurers may still decide to offer insurance for risks that deviate from these ideal characteristics. However, as one or more deviations occur, the ability of the insurer to estimate future losses decreases, the risk increases, and the insurer’s capital is more exposed to inadequate prices for the coverage that the insurer offers. Despite covering a range of perils, insurers also exclude a number of perils. Insurance companies may determine that a peril is uninsurable and exclude it from an insurance policy by inserting a provision that excludes coverage. These provisions can be located in different sections of a homeowners policy. For example, the HO-3 contains a section titled “Exclusions,” but other policy provisions that exclude coverage are located elsewhere. Table 3 lists typical perils excluded under this section. Other policy provisions that exclude coverage are located under “Perils Insured Against” in the HO-3 policy. Under this section, which applies to the dwelling and other structures, but not to personal property, damage caused by certain perils will not be covered (see table 4). Insurers exclude these perils from homeowners policies for various reasons. According to some industry participants, some perils are excluded because they do not meet the criteria for insurable risks. For example, perils that result in catastrophic losses are generally infrequent, high-impact events that are difficult to predict and measure. The chances of these events occurring are difficult to calculate, and the losses can be so large and simultaneously impact so many homeowners that they could jeopardize a company’s solvency because the insurer may not have sufficient capital to pay out the large number of claims. Examples of these types of risks include flood, earthquake, war, and nuclear hazard. Other risks are excluded because they could raise moral hazard issues if they were covered and because they are not accidental. 
Moral hazard is an increase in the probability of loss caused by the policyholder’s behavior—for example, intentional loss, neglect, deterioration, and lack of maintenance. According to some industry participants, if these risks were covered, homeowners could, for example, neglect their roofs or not address mold problems and then file claims for replacement and remediation. Further, they noted that these types of risks also fall outside the realm of insurance because they are not sudden or accidental and are generally unpredictable. Some industry participants also said that homeowners policies list defective products as exclusions for a couple of reasons. First, they said that defective products are listed as exclusions because insurers would find it difficult and impractical to evaluate the wide range of manufactured products to determine the likelihood and extent of defects and price this risk for policies accordingly. Second, they suggested that manufacturers are responsible for defects in their products, and product warranties and commercial general liability insurance can help affected homeowners. For example, some homeowners affected by defective drywall may receive compensation from settlements partly funded by commercial general liability insurance policies held by companies responsible for the distribution or installation of defective drywall. Additionally, some homeowners impacted by defective drywall have filed insurance claims through their homeowners’ insurance policies, and litigation in multiple states will determine the extent to which damage from this drywall is covered by homeowners insurance. Certain provisions found in homeowners policies can also affect coverage. Some industry participants cited as an example anticoncurrent causation clauses. For example, under these clauses, if damage is simultaneously caused by both an excluded peril (such as flood) and a covered peril (wind), coverage is excluded for that loss. 
They noted that these clauses presented a particular challenge for owners of coastal properties when hurricanes occurred. Cases challenging the enforceability of the anticoncurrent clause arose after the 2005 hurricane season because damage to properties may have resulted from a combination of high winds and flooding. According to reports, this clause may similarly become an issue for homeowners in the aftermath of Superstorm Sandy. In addition to the anticoncurrent causation clause, policies may also contain conditions that policyholders are required to comply with in order to maintain coverage. Under the HO-3 policy, for example, homeowners must provide prompt notice to their insurance company or agent following a loss and are also required to protect property from further damage once a loss has occurred. Failure to comply with these types of requirements may result in a loss of coverage. In addition, the facts and conditions surrounding the loss event—including the cause of the damages and court decisions—may also impact coverage. For example, according to some industry participants, an HO-3 policy typically excludes mold damage. However, mold damage that is hidden within walls or ceilings, or beneath the floors of a structure, that results from the accidental discharge or overflow of water or steam from within a plumbing or heating system may be covered, among other things. Further, the HO-3 policy typically covers theft of personal property from a home. However, theft in a home that is under construction, for example, may not be covered. Court decisions may also impact coverage when disputes between policyholders and insurers end up in litigation, something that can take time to resolve. Ultimately, whether a loss is covered in these cases may depend on how the court interprets the policy. 
Following Hurricane Katrina, for example, some coverage disputes raised the question of whether a policy’s flood exclusion language clearly excluded a water-related event, such as storm surge, that caused the damage at issue. Coverage may also depend on where homeowners live and whether they have purchased endorsements—optional coverage that alters a policy’s terms and conditions that can be added at an additional cost—or have other insurance policies. For example, homeowners policies typically do not cover flood damage, but flood insurance may be available through NFIP, depending on where a home is located. Additionally, private insurers sometimes exclude coverage for wind-related damage to properties in coastal areas, requiring policyholders to either pay an additional premium for wind-related risks or purchase a supplemental policy for wind-related damages. In such cases, this supplemental coverage is typically provided by a state-sponsored wind insurance pool that has been created to address shortages in the availability of private insurance for wind-related risks. In Florida, for example, coverage is provided through its state insurance entity, the Citizens Property Insurance Corporation (Citizens). Earthquakes and sinkholes are two other perils that are also typically excluded from homeowners policies but that may be covered by a separate policy or an endorsement in some states. In California, for example, state law requires all residential property insurance companies to offer earthquake coverage to homeowners. In offering earthquake coverage, insurance companies can manage the risk themselves, contract with an affiliated or non-affiliated insurer, or become members of the California Earthquake Authority (CEA), a publicly-managed but privately funded entity that offers residential earthquake policies. 
Similarly, sinkhole coverage is typically excluded under the earth movement exclusion found in standard homeowners policies, but in Florida, the state requires insurers to offer catastrophic ground cover collapse, which is a narrow form of coverage that protects against the total loss of a home due to sinkholes. Florida does not require any other sinkhole coverage to be included in homeowners policies, but insurers are required to offer additional sinkhole coverage. Homeowners in some states may also be able to purchase coverage through endorsements for other perils typically excluded from homeowners policies, including ordinance and law, sewer or drain backup, and mold. Losses due to policy exclusions can financially impact homeowners by causing significant out-of-pocket expenses. Communities can also be impacted by perils excluded from homeowners policies, particularly if unrepaired homes result in blight and affect whether others in a neighborhood rebuild. Expanded coverage could offer homeowners more protection, potentially reducing the cost of repairs that homeowners would have to cover with their own resources. However, policies that offered expanded coverage would likely be much more expensive than current policies. According to industry participants, the biggest and most significant impact on homeowners from policy exclusions is out-of-pocket costs to cover losses. If a loss occurs that is excluded by insurers, and coverage is not available through an endorsement or federal or state program, homeowners will have to use their own resources to rebuild and replace what they had. Some industry participants said that excluded losses can cause significant damage to homes and possessions. In some cases, homeowners may not have the means to rebuild after a disaster. Figures 1 and 2 illustrate the damage caused by two perils, floods and sinkholes, which are typically excluded from homeowners policies. 
One industry organization said that, in addition to paying for excluded losses on their own, homeowners may have to pay additional out-of-pocket expenses for temporary housing or a car rental that may not be covered by a homeowners policy. When impacted by a disaster, homeowners may need these types of rentals for many months, and they can be costly. Further, industry participants said that rebuilding after a disaster can present additional challenges for homeowners. One industry organization said that catastrophes can cause shortages of building contractors and building supplies that can delay reconstruction. This phenomenon is known as “demand surge.” In these circumstances, the short-term costs of repairing and rebuilding homes can escalate substantially. Another industry participant noted that homeowners can also have their policies cancelled if home repairs are not adequate. Disasters and excluded perils also sometimes highlight differences between consumers’ expectations for insurance and actual policy coverage, resulting in added frustrations for homeowners. According to some industry participants with whom we spoke, some consumers may not learn what their policy actually covers until losses occur. For example, one industry association recalled that some consumers who had recent problems with drywall sourced from China filed insurance claims through their homeowners policies. As noted earlier, litigation in multiple states may determine the extent to which homeowners insurance applies to losses associated with this drywall. Others may not understand their coverage well enough to know what is covered, what is excluded, and what loss events and circumstances might result in paid, partially paid, or denied claims. For example, according to a 2013 survey conducted by one industry organization, many homeowners mistakenly believe that their homeowners policies cover flooding from a hurricane. 
According to some industry participants, homeowners policies can be difficult to understand because they often are long, technical, legal documents. One consumer advocate we spoke with suggested that few tools exist to help consumers understand their coverage, and noted that even though some states have policy readability requirements and disclosure rules about coverage and exclusions, these measures do not help consumers understand their policies. Other participants said that many homeowners do not read their policies or study them closely. Additional analysis would be needed to determine to what extent readability requirements, disclosure rules, or other factors impact homeowners’ ability to understand their policies. In addition to the devastation disasters may cause homeowners, neighborhoods and communities can also be impacted if some homeowners are unable to rebuild because they do not have coverage for a loss. Damaged homes that are not rebuilt can result in blight and affect the willingness of others to rebuild. A neighborhood where property is vacant or deteriorated will also likely impact the property value of surrounding homes. In addition, when some homeowners do not rebuild, communities may experience a diminution of the tax base. Collecting less in property taxes can impact the ability of communities to fund schools, libraries, parks, and roads, among other uses. Both the federal and some state governments offer disaster assistance and other programs to help homeowners with some of the perils that private insurers do not cover, and this support can rely on taxpayer as well as policyholder resources. At the federal level, the government provides a range of assistance to individuals after major disasters. This assistance is generally made available after the President issues a disaster declaration under the authority of the Robert T. 
Stafford Disaster Relief and Emergency Assistance Act (the Stafford Act), and is administered by various federal agencies through various programs. FEMA, for example, provides disaster relief and recovery assistance to individual citizens through its Individuals and Households Program (IHP), which is intended to provide money and services, including assistance in repairing and replacing damaged homes, to people in a disaster area when losses are not generally covered by insurance. The growing number of major disaster declarations has contributed to an increase in federal expenditures for disaster assistance, however. For example, through January 31, 2012, FEMA obligated $80.3 billion in disaster relief, including $23.5 billion in individual assistance, for 539 disasters declared during fiscal years 2004 through 2011. More recently, Superstorm Sandy has also involved significant federal disaster assistance. In January 2013, Congress passed and the President signed the Disaster Relief Appropriations Act of 2013 and the Sandy Recovery Improvement Act of 2013, which provided about $50 billion in federal assistance to support rebuilding efforts. In addition to providing disaster assistance, the federal government also offers flood insurance to homeowners through NFIP, a program that involves significant costs for the government and ultimately for taxpayers. According to FEMA information, as of September 2013, there were 5.6 million flood insurance policies in force in almost 22,000 communities across the United States. In years when losses were high, NFIP has used statutory authority to borrow funds from the Department of the Treasury (Treasury) to pay claims and keep the program solvent. For example, NFIP borrowed $16.8 billion from Treasury to cover claims for the 2005 hurricanes—primarily Hurricane Katrina—and received additional borrowing authority in the amount of $9.7 billion following Superstorm Sandy in 2012. As of October 2013, NFIP owed Treasury $24 billion. 
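The NFIP borrowing figures cited above can be reconciled with simple arithmetic. This is an illustrative sketch, not FEMA's actual accounting:

```python
# Reconciling the NFIP borrowing figures cited above, in billions of
# dollars. Illustrative arithmetic, not FEMA's actual accounting.
borrowed_2005 = 16.8    # borrowed from Treasury after the 2005 hurricanes
authority_2012 = 9.7    # additional borrowing authority after Sandy
owed_oct_2013 = 24.0    # debt outstanding as of October 2013

combined = borrowed_2005 + authority_2012
print(round(combined, 1))                  # 26.5

# The $24 billion owed is below the combined figure, consistent with some
# repayment or unused authority between 2005 and 2013.
print(round(combined - owed_oct_2013, 1))  # 2.5
```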
NFIP is generally expected to cover its claim payments and operating expenses with the premiums it collects. However, NFIP has sold some flood insurance policies at subsidized rates to help keep flood insurance affordable, and these subsidized policies have been a financial burden on the program because of their relatively high losses and premium rates that are not actuarially based. As a result, the annual amount that NFIP collects in both full-risk and subsidized premiums is generally not enough to cover its operating costs, claim payments, and principal and interest payments for the debt owed to Treasury, especially in years of catastrophic flooding, such as 2005. This arrangement results in much of the financial risk of flooding being transferred to the federal government and ultimately the taxpayer. As shown in figure 3 below, NFIP has handled a significant number of claims and paid losses for flood events caused by hurricanes and a superstorm since 2005. Some states, such as Florida, have created residual insurance pools to cover what the private market will not, sometimes at a cost to policyholders and taxpayers across the state. In Florida, Citizens provides homeowners coverage, including for wind damage, for those who cannot find coverage from private insurers. If Citizens’ funds are depleted by paying claims after a catastrophic event, Florida law requires that Citizens charge assessments until any deficits are eliminated. Citizens’ policyholders are the first to be assessed, and if necessary, additional assessments can be levied against certain private insurance companies, who can then pass the cost of these assessments on to their policyholders. This ability to levy assessments provides Citizens with resources to pay claims to policyholders. Storms in 2004 and 2005, for example, resulted in more than $30 billion in insured damage in Florida. Citizens sustained deficits of $515 million and $1.8 billion, respectively, in those years. 
To fund its 2004 deficit, Citizens assessed insurance companies and surplus lines policyholders over $515 million in regular assessments. To fund the 2005 deficit, the Florida legislature appropriated $715 million from the Florida general revenue fund, reducing the size of the regular assessment from $878 million to $163 million. In 2005, the Florida legislature also directed Citizens to amortize the collection of the emergency assessment for the remaining $888 million deficit over a 10-year period, resulting in an emergency assessment levied beginning in June 2007. According to two industry organizations, Florida’s property/casualty policyholders generally bear the cost of having Citizens provide coverage to participating Florida residents, regardless of whether they live inland or on the coast. If private insurers offered expanded homeowners coverage, homeowners could see a number of benefits. First, according to some industry participants homeowners would have more protection because more perils would be covered under their policy. Second, some industry participants noted that more comprehensive coverage could lead to less ambiguity for consumers seeking to understand their policies. Policies with fewer exclusions and conditions for coverage would be simpler and more efficient for insurers to write and consumers to understand than having separate policies for homeowners, flood, and in some states, wind coverage. Third, some industry participants said that enhanced coverage could reduce litigation and disputes between policyholders and insurers over coverage. For example, a policy that offered more comprehensive coverage could reduce disputes over the anticoncurrent causation clause, often over whether wind or water caused residential damage, something some industry participants noted is a particular challenge for homeowners when hurricanes occur. 
Assuming homeowners purchased expanded private coverage without government subsidies, these policies could also reduce reliance on federal and state programs. According to some industry participants, if homeowners policies covered flooding, for example, fewer taxpayer resources may be needed for paying NFIP’s claims and subsidized premiums. In addition to having benefits at the federal level, greater private coverage could also reduce or eliminate the need for state-based insurance mechanisms, such as state wind pool coverage, and the insurer and policyholder assessments they can involve. Moreover, expanded coverage could encourage more informed decisions by homeowners. For instance, expanded coverage with more accurate pricing for risk could provide beneficial information to consumers on the risk associated with their housing location decisions and could encourage more consumers to mitigate risks to their property or not to locate in high-risk areas. Notwithstanding these potential benefits, expanded coverage would likely increase insurance premiums for homeowners. According to some industry participants with whom we spoke, the increase could amount to several times what some homeowners currently pay. For example, one industry organization said that a policy that covered flood, earthquake, sinkholes, and some other perils could cost homeowners anywhere from three to five times what homeowners pay now, while another estimated that the cost of a policy that covered nearly all the currently excluded perils could exceed $15,000 annually. Other industry participants said the cost for this coverage would be so high that many homeowners would be unable to afford it. Additionally, FEMA information on the changes to the cost of flood insurance following the Biggert-Waters Act and the elimination of some subsidized rates may further illustrate how costly expanded coverage could potentially be for some homeowners. 
According to FEMA information, the cost of an NFIP policy for a home located in a very high-risk area without a subsidy could exceed $20,000 in annual premiums for certain policyholders. Several factors make it challenging for private insurers to offer all-perils homeowners insurance, or even more comprehensive policies. These include consumer demand for greater coverage and the higher premiums such coverage would involve, the ability of insurers to adequately price policies that covered more perils, and the regulatory challenges associated with getting approval for risk-based rates. A few industry participants with whom we spoke said that insurers and others are discussing possibilities for expanding private homeowners insurance, but cautioned that policy premium rates, affordability, and other conditions would need to be addressed. Higher premiums for more comprehensive homeowners insurance are not only an affordability challenge for homeowners; they also represent a key challenge for insurers. Some industry participants said that the higher premiums required for more comprehensive coverage raise questions about whether demand would be sufficient, potentially making expanded coverage impractical in the private market. Some said that many homeowners try to keep expenses for insurance as low as possible, citing as evidence low participation in NFIP, despite federal subsidies. They also questioned whether consumers would buy much more expensive expanded coverage even if it were offered by insurers. Additionally, many industry participants with whom we spoke said that adverse selection—or the tendency for those who live in places most prone to risk to be most likely to purchase insurance—could challenge insurers’ ability to expand policy coverage. Insurers manage risk by charging appropriate rates and diversifying their risk pool.
Industry participants with whom we spoke said that if only riskier households—for example, those located near the coasts or rivers—were the primary purchasers of expanded coverage, insurers might end up with a pool of concentrated risk whose losses could jeopardize insurers’ profitability and solvency. A requirement that insurers offer and homeowners purchase more comprehensive coverage may reduce this problem but raises questions about how such a requirement would be implemented. Further, some questioned whether it would be fair to require those living in low-risk areas to purchase expanded coverage they may not need, in effect subsidizing those living in high-risk areas. One industry organization said that legislative or regulatory attempts to mandate all-perils coverage could destabilize the insurance marketplace in certain high-risk areas such as coastal regions and floodplains. It could also cause private insurers to further limit their exposures in disaster-prone areas, and some insurers may withdraw from the market altogether. Industry participants suggested that offering policies that covered all losses could also raise issues of moral hazard, that is, incentives for risky behavior by homeowners. For example, comprehensive policies may encourage people to locate their homes in high-risk areas. Others said that higher premiums associated with more comprehensive policies would send a better signal to homeowners about the risk associated with their housing location, something that could prompt homeowners to properly insure their homes or take steps to mitigate their risk. Industry participants said that another important challenge is the difficulty of pricing catastrophic risks and handling the claims that they cause. Some industry participants said that accurately modeling the broader range of risks that more comprehensive policies would cover was critically important.
Having loss data and accurately modeling risk are necessary for appropriately pricing insurance policies and for ensuring insurer solvency. Some industry participants said that because expanded coverage would be new to the private domestic market, modeling experience would need to be developed over time and could be challenging, particularly for multiple catastrophic losses. Others said that insurers might also lack the expertise to handle claims for perils that were typically excluded, and that it could take time for insurers and adjusters to develop the expertise to handle some disaster situations and subsequent claims. Industry participants said that insurers could also face critical regulatory challenges in offering more comprehensive coverage. One important challenge is the uncertainty of obtaining state regulatory approval for the higher premiums that more comprehensive coverage would likely demand. Insurers need to charge risk-based rates that are determined on an actuarial basis in order to stay solvent and meet their policy obligations to homeowners. However, some industry participants with whom we spoke said that getting regulators to approve rates that insurers determined would be appropriate for certain risks has been difficult and that getting approval for rates that could be several times more expensive than those currently in force would be an important challenge. One regulator, however, said that inability to charge higher risk-based rates might not be an issue because loss experience is a critical factor that drives rates. If insurers faced greater losses by covering more perils, they would likely be able to justify and gain approval for higher premiums. Industry participants also said that different state insurance laws and regulations and different rate-setting and approval processes could make it difficult for insurers to sell more comprehensive policies with risk-based rates across states.
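The actuarial logic behind risk-based rates can be illustrated with a minimal sketch: an expected annual loss across covered perils, grossed up for a risk margin and expenses. All perils, probabilities, loss amounts, and load factors below are invented for illustration; actual actuarial rating is far more detailed than this.

```python
# Illustrative sketch of risk-based premium pricing. Every figure here
# is hypothetical; no insurer's actual rating method is this simple.

def risk_based_premium(perils, expense_ratio=0.25, risk_load=0.15):
    """perils maps peril name -> (annual probability, expected loss if it occurs)."""
    expected_loss = sum(p * loss for p, loss in perils.values())
    # Gross up the pure premium for a risk margin and for expenses.
    return expected_loss * (1 + risk_load) / (1 - expense_ratio)

base = {"fire": (0.003, 150_000), "theft": (0.02, 5_000)}
expanded = dict(base, flood=(0.01, 120_000), earthquake=(0.004, 200_000))

print(round(risk_based_premium(base)))      # typical covered perils only
print(round(risk_based_premium(expanded)))  # with flood and earthquake added
```

With these invented figures, adding flood and earthquake coverage raises the sketch premium roughly 4.6-fold, broadly consistent with the "three to five times" estimates cited by industry participants.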
Offering coverage for a broader set of perils would also require insurers to have the capital necessary to pay claims without risking insolvency. The greater the risk, the more capital insurers need to hold. Insurers may not be willing to maintain the higher capital levels needed for insuring against higher risk events if that capital could be used for other insurance or investment purposes. In addition, disasters such as floods and earthquakes are relatively infrequent but often severe events, so that insurers cannot always know how much they will need in reserves. Industry participants said that the unpredictability of catastrophes could prevent insurers from accurately calculating and setting aside the sums necessary to cover losses. One insurance regulator suggested that even if insurers could charge rates that reflected the full risk of disasters, they still may not be able to offer coverage for additional perils. For additional coverage to be possible, insurers would need the ability to conduct actuarial analyses and accurately model risks involved with greater homeowners coverage. Some industry participants thought this capability may already exist or could be developed for floods and earthquakes, the two perils they said hold some promise for greater private insurer involvement. In order to offer coverage for flood or other perils, insurers would have to be able to charge risk-based rates, a critical part of meeting their policy obligations to consumers and staying solvent, but something that would also raise concerns about higher premium costs, policy affordability, and consumer demand. Two industry organizations said that mitigation efforts, effective building codes, and sound land-use policies could also help reduce risks from natural catastrophes. Another highlighted how building codes set by states can differ, which can lead to inconsistencies that make it difficult to ensure properties can withstand loss events. 
Others said that it is important for the insurance industry to encourage consumers to become better informed about their risks and insurance so that they could take available steps depending on where they live to mitigate losses. The catastrophic nature of flood and other natural catastrophe losses, according to industry participants, may require a continuing role for federal and state government in financing coverage. We recently reported on strategies to encourage greater private-sector involvement in flood insurance that included the possibility of insurers charging homeowners full risk rates with the government providing targeted subsidies to help with affordability. Private insurers could play a greater role in coverage with the federal government possibly serving as an insurer for only the highest risk properties. Yet another option is the combination of greater private-sector involvement and the government acting as a reinsurer by providing a backstop to private insurers for losses over a certain amount. The size of the losses and the magnitude of the risk associated with more comprehensive policies underscore the complex challenges of addressing the costs of catastrophes and other perils that place homeowners’ properties at risk. A mix of factors—financial risk, large potential losses, political and regulatory issues, policy affordability, and consumer demand—has thus far made it challenging for private-sector insurers in the U.S. to offer flood insurance to homeowners, let alone more comprehensive or all-perils policies. The possibility of improved data, better risk modeling, and emerging private-sector interest, however, suggest that some additional coverage may be possible. For this to happen, private insurers must be able to assess and diversify risk and charge rates adequate for the risk they are assuming. 
At the same time, consumers will need to better recognize the risk and cost of their housing decisions and the likely higher rates that come with protecting homes and possessions in certain locations. One of the most fundamental challenges is achieving a policy premium rate that allows insurers to stay solvent and meet their obligations to consumers, yet is affordable enough so that consumers are willing and able to buy insurance. Addressing this important challenge and ensuring a collective response to losses caused by disasters and other perils will require the cooperation and resources of government, homeowners, and insurers, as well as balance in the assumption of risk and cost by each of these parties. We provided a draft of this report for review and comment to the National Association of Insurance Commissioners (NAIC) and the Federal Insurance Office (FIO) at the Department of the Treasury. Both provided technical comments which we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, NAIC, and FIO. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. In this report we examined (1) the perils homeowners policies typically cover and exclude; (2) the impacts of exclusions on homeowners and taxpayers and the potential benefits of more comprehensive coverage for homeowners; and (3) the additional perils insurers might be willing to cover and the challenges associated with such coverage. 
To determine what perils homeowners policies typically cover and exclude, we analyzed the Insurance Services Office’s (ISO) standard homeowners insurance policy (HO-3) and examples of private insurers’ homeowners policies. We reviewed documents by insurance industry organizations and professional associations, including the National Association of Insurance Commissioners (NAIC) and the Insurance Information Institute (III). We also interviewed a nongeneralizable, judgmental sample of property/casualty insurance companies and state insurance regulators, insurance and reinsurance associations, an insurance agent and broker association, consumer groups, academic insurance and risk experts, and the Federal Insurance Office (FIO) at the Department of the Treasury. We selected our sample of insurers based on market share by direct premiums underwritten and participation in different geographic markets. We selected our sample of state regulators based on geographic diversity and experience overseeing insurers with portfolios of different perils, including floods, hurricanes, and earthquakes. To determine the impacts of exclusions on homeowners and taxpayers and the benefits of more comprehensive coverage, we spoke with consumer groups, including the Center for Economic Justice, academic experts, selected state insurance regulators, NAIC and FIO officials, and other insurance association officials. To illustrate some of the financial impacts of policy exclusions on taxpayers, we obtained publicly available data from the Federal Emergency Management Agency (FEMA) on the National Flood Insurance Program’s (NFIP) claims costs. For the NFIP data we used, we interviewed officials on usability and reliability. We determined that these data were sufficiently reliable for our intended purposes. 
In addition, we reviewed our previous work on natural catastrophe insurance, academic and other studies, and results from an annual survey conducted by III on homeowners insurance, flood insurance, and disaster preparedness. To identify the additional perils that insurers might be willing to insure and the challenges associated with such coverage, we spoke with a sample of insurance companies and state insurance regulators, insurance and reinsurance associations, an insurance agent and broker association, NAIC and FIO officials, consumer groups, academic experts, and others. We also reviewed our previous work on natural catastrophe insurance, Congressional Research Service (CRS) reviews, and academic and other studies on these issues. We gathered additional perspectives on all-perils policies from a round table discussion on privatizing flood insurance that we organized and conducted at GAO headquarters in Washington, D.C. Participants in the round table included insurance industry association representatives, select state regulators, and NAIC and FEMA officials. We conducted this performance audit from December 2012 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Established in 1968, the National Flood Insurance Program (NFIP) makes federally backed flood insurance available to residential property owners and to owners of nonresidential property such as businesses, churches, governments, and nonprofits.
Under NFIP, the federal government generally assumes the liability for the insurance coverage and sets rates and coverage limitations, among other responsibilities, while the private insurance industry sells the policies and administers the claims. The Federal Emergency Management Agency (FEMA) is responsible for administering NFIP. Community participation in NFIP is voluntary. However, communities must join NFIP and adopt FEMA-approved building standards and floodplain management strategies in order for their residents to purchase flood insurance through the program. Additionally, communities with Special Flood Hazard Areas (SFHA)—areas at high risk for flooding—must participate in NFIP to be eligible for any form of assistance for acquisition or construction purposes in connection with a flood. Participating communities can receive credits on premium rates on flood insurance if they establish floodplain management programs that go beyond the minimum requirements of NFIP. FEMA can suspend communities that do not comply with the program, and communities can withdraw from the program. As of September 2013, almost 22,000 communities voluntarily participated in NFIP. Consumers can purchase flood insurance to cover both buildings and contents for residential and commercial properties. NFIP’s maximum coverage for residential policyholders is $250,000 for building property and $100,000 for contents. This coverage includes replacement value of the building and its foundation, electrical and plumbing systems, central air and heating, furnaces and water heater, and equipment considered part of the overall structure of the building. Personal property coverage includes clothing, furniture, and portable electronic equipment. For commercial policyholders, the maximum coverage is $500,000 per unit for buildings and $500,000 for contents (for items similar to those covered under residential policies).
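The coverage limits described above act as a cap on what a policyholder can recover for a covered loss. A minimal sketch (the limit values come from the text; the function and example losses are ours, not FEMA's):

```python
# NFIP maximum coverage limits as described above; the helper function
# and the example losses are illustrative, not FEMA's claims process.

NFIP_LIMITS = {
    "residential": {"building": 250_000, "contents": 100_000},
    "commercial":  {"building": 500_000, "contents": 500_000},
}

def capped_payment(property_type, coverage, loss):
    """A covered loss is paid only up to the applicable coverage limit."""
    return min(loss, NFIP_LIMITS[property_type][coverage])

print(capped_payment("residential", "building", 300_000))  # capped at 250000
print(capped_payment("residential", "contents", 40_000))   # within limit: 40000
```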
Coverage for personal property and coverage for nonresidential buildings is written on an actual cash value basis. NFIP offers two types of flood insurance premiums to property owners who live in participating communities: subsidized and full-risk. The National Flood Insurance Act of 1968 authorized NFIP to offer subsidized premiums to owners of certain properties. Congress originally mandated the use of subsidized premiums to encourage communities to join the program and mitigate concerns that charging rates that fully and accurately reflected flood risk would be a burden to some property owners. According to FEMA, Congress made changes to the program over the years to encourage further participation in NFIP through low premiums. FEMA estimated that in 2012 more than 1 million of its residential flood insurance policies—about 20 percent—were sold at subsidized rates; nearly all were located in high-risk flood areas. Since 2000, NFIP has experienced several years with catastrophic losses—losses exceeding $1 billion—and has needed to borrow money from the Treasury to cover claims in some years. The losses resulting from Superstorm Sandy, which caused extensive damage in several states on the eastern coast of the United States in October 2012, also were catastrophic, reaching over $7 billion. As of October 2013, FEMA owed Treasury $24 billion. As a result of the program’s importance, level of indebtedness to Treasury, substantial financial exposure for the federal government and taxpayers, and FEMA’s management challenges, NFIP has been on our high-risk list since 2006. Established in 2002, Florida Citizens Property Insurance Corporation (Citizens) is a not-for-profit and tax-exempt government entity that provides property insurance to homes and businesses that cannot get coverage in the private sector.
It consolidated two residual market mechanisms: the Florida Windstorm Underwriting Association (FWUA), created in 1970 to provide high-risk, windstorm, and hail residual market coverage in select areas of Florida, and the Florida Residential Property and Casualty Joint Underwriting Association (JUA), created in December 1992 following Hurricane Andrew to provide residual market residential-property multiperil insurance coverage, excluding wind if the property was within FWUA-eligible areas. A primary driver for the merger was that the combined entity obtained federally tax-exempt status, allowing it to save federal income taxes that otherwise would have been paid by FWUA and JUA. As an integral part of the state rather than a private insurance company, Citizens is also able to issue tax-exempt post-event bonds and taxable pre-event bonds, which can help finance loss payments in the event of a major disaster. Florida law determines the standards Citizens uses to establish its premium rates. Citizens’ rates are required to be actuarially sound, but at the beginning of 2007, an approved rate increase was rescinded and rates were frozen by the Florida legislature at 2006 levels. The rate freeze remained in effect through December 31, 2009. On January 1, 2010, Citizens began implementation of a statutorily required path to achieve actuarially sound rates over time. Except for sinkhole coverage, the path limits annual rate increases to 10 percent for any single policy issued by the corporation, excluding coverage changes and surcharges. According to Citizens officials, Citizens’ rates are moving towards actuarial soundness, but are not yet there. Citizens allocates approximately 18 percent of every premium dollar it collects to pay hurricane and catastrophe claims, but in the event that losses exceed its surplus, it is required by statute to levy assessments to recover the deficit.
Assessments can be charged in up to three tiers: policyholder surcharge, regular, and emergency. Each additional tier is charged only if the tier before it is insufficient to eliminate Citizens’ deficit. Citizens’ policyholder surcharge is the first tier of assessments and can be levied one time for up to 45 percent of the policyholder’s premium in a single year. If a deficit remains in one of Citizens’ three types of accounts, Citizens can levy regular assessments of up to 2 percent against certain private insurance companies, which can then pass the cost of these assessments on to their policyholders. Finally, if a deficit persists, Citizens can impose an emergency assessment on both Citizens and non-Citizens policyholders. This ability to levy assessments provides Citizens with resources to pay claims to policyholders. For example, following the 2004 storms, Citizens had to levy over $515 million in regular assessments to fund its deficit. Citizens’ resources also come from its reinsurance arrangement with the Florida Hurricane Catastrophe Fund (FHCF). Established in 1993 by the Florida legislature, FHCF is a state-run reinsurer created to provide additional insurance capacity and help stabilize the property insurance market by reimbursing insurers for a portion of their catastrophic hurricane losses. As a tax-exempt entity, FHCF can accumulate premium payments on a tax-free basis. If the revenue generated from premiums is insufficient following a loss event, FHCF, like Citizens, is required by state law to levy assessments on a broad base of property/casualty insurance lines to fund revenue bonds to pay the losses. For example, FHCF issued bonds in the amount of $1.35 billion in 2006 and $625 million in 2008, which are being financed by a 1 percent assessment levied on property/casualty insurers in the state. As of October 2013, Citizens had more than 1.2 million policies in force, most of which were homeowners policies.
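The three-tier assessment waterfall described above can be sketched in a few lines. The sketch is deliberately simplified: it ignores Citizens' three account types, and the deficit and assessment bases in the example are invented for illustration.

```python
# Simplified sketch of Citizens' assessment waterfall: a one-time
# policyholder surcharge (up to 45 percent of premium), then regular
# assessments (up to 2 percent), then an emergency assessment for
# whatever deficit remains. All dollar figures below are hypothetical.

def assessment_waterfall(deficit, citizens_premium_base, regular_base):
    surcharge = min(deficit, 0.45 * citizens_premium_base)
    remaining = deficit - surcharge
    regular = min(remaining, 0.02 * regular_base)
    emergency = remaining - regular  # levied on Citizens and non-Citizens policyholders
    return {"surcharge": surcharge, "regular": regular, "emergency": emergency}

# e.g. a $2.0 billion deficit against a $3.0 billion Citizens premium base
# and a $20.0 billion assessable private-market base (figures invented):
print(assessment_waterfall(2.0e9, 3.0e9, 20.0e9))
```

Each tier is exhausted before the next is triggered, which is why a legislative appropriation, as in 2005, shrinks the regular assessment before any emergency assessment is needed.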
According to a Citizens official, Citizens has recently engaged in depopulation efforts by seeking ways to return policies to the private market, but these efforts have been met with challenges. For example, Citizens has faced challenges with establishing rates higher than those available in the private market. The insurance distribution process is also a challenge. For example, according to a Citizens official, private insurers that decline to sell a homeowner a policy may refer that homeowner to Citizens instead of recommending they shop the market for coverage, which has impeded a freely competitive insurance market. Established in 1996, the California Earthquake Authority (CEA) is an instrumentality of the state that sells earthquake insurance policies for residential property throughout California. CEA is a publicly managed, privately funded entity. After the Northridge Earthquake that struck the San Fernando Valley in January 1994, insurers in California began to limit their exposure to earthquakes by writing fewer or no new homeowners insurance policies. In 1995, California lawmakers passed a bill that allowed insurers to offer a reduced-coverage earthquake insurance policy. In offering earthquake coverage, insurance companies can manage the risk themselves, contract with an affiliated or non-affiliated insurer, or become a CEA-participating insurance company and offer CEA’s residential earthquake policies. CEA is the largest earthquake insurer in California, with approximately 840,000 policies in force as of 2011, which represent approximately 70 percent of the residential earthquake insurance policies in the state. CEA offers a basic residential earthquake policy to homeowners, which includes coverage for the insured dwelling and limited coverage for contents and loss-of-use if the residence is uninhabitable due to an earthquake. 
For an additional premium, CEA policyholders can significantly increase insured limits on contents and for loss-of-use, and homeowners can lower their CEA policy deductible from 15 percent to 10 percent. CEA coverage is available to homeowners only from the insurance company that provides their residential property insurance and only if that company is a CEA-participating insurance company. Participating insurance companies process all CEA policy applications, policy renewals, invoices, and payments and handle all CEA claims. In determining premium rates, CEA is required by law to use the best science available and is permitted by law to use earthquake computer modeling to establish actuarially sound rates. CEA examines rating factors such as the rating territory (determined by ZIP code) and the age and type of construction of a home when setting a policy’s premium. The CEA governing board establishes premium rates, subject to the prior approval of the Insurance Commissioner. In 2011, for example, a request for a 12.5 percent average statewide rate decrease was approved beginning with new and renewal policies that became effective on and after January 1, 2012. The change was a result of a reduction in the estimated average annual loss, as derived from new scientific information, according to CEA information. Given that the rate decrease is expressed as an average statewide rate impact, individual policyholders may have seen their rates increase or decrease, depending on CEA product, location of the risk, and other rating factors. CEA is funded principally from policyholder premiums, contributions from and assessments on participating insurers, returns on invested funds, borrowed funds, and reinsurance. Assessments on participating insurers may not be directly passed through to policyholders. CEA is authorized to issue bonds, and may not cease to exist so long as its bonds are outstanding.
As of 2012, CEA had approximately $10.2 billion in claims-paying capacity, but if an earthquake causes insured damage greater than CEA’s claims-paying capacity, then policyholders affected will be paid a prorated portion of their covered losses or may be paid in installments. In addition to the contact named above, Paul Schmidt (Assistant Director); Emily Chalmers; Alma Laris; Marc Molino; Erika Navarro; Steve Ruszczyk; Jessica Sandler; and Andrew Stavisky made key contributions to this report.
|
Homeowners insurance protects against a range of perils, but policies do not insure against all risks. Owners whose homes are damaged by natural and other disasters not covered by their insurance can be exposed to serious financial losses. Federal and state initiatives provide some assistance for catastrophes, which can involve significant taxpayer expense. With coastal populations growing and the possibility of more frequent and severe weather, more homeowners could experience heavy losses not covered by homeowners insurance, putting increasing financial pressure on government programs and thus on taxpayers. GAO was asked to study the possibility of private insurers providing more comprehensive insurance. This report addresses (1) what perils homeowners policies typically cover and exclude, (2) how exclusions impact homeowners and taxpayers and the potential benefits of more comprehensive coverage, and (3) what additional perils insurers might be willing to cover and what challenges are associated with expanding policies. GAO reviewed homeowners insurance policies and conducted interviews with the National Association of Insurance Commissioners, other industry organizations, consumer advocates, and risk experts, among others. GAO requested comments on a draft of this report from the Federal Insurance Office and the National Association of Insurance Commissioners. Both provided technical comments which we incorporated into the report as appropriate. Homeowners insurance policies typically protect homes, garages and other structures, and personal belongings from damage caused by perils such as fire, hail, lightning, explosion, and theft, among others. The insurance industry considers these perils insurable because they are accidental, predictable, and do not involve catastrophic losses. These policies also typically exclude losses from a number of perils, including disasters caused by floods, earthquakes, and war. 
Industry officials said that such events are difficult to predict and involve extensive losses that are a challenge for private insurers to cover. Insurers also exclude losses from defective products, which industry participants said could be addressed by manufacturer warranties and commercial general liability insurance. Intentional losses; damage from wear, tear, or neglect; and losses caused simultaneously by covered and uncovered perils, such as wind (covered) and flood (uncovered) during a hurricane, are also generally excluded. Policy exclusions can impact homeowners, communities, and state and federal governments. When excluded losses occur, they can create significant costs for homeowners to repair homes and replace possessions. Wide-scale catastrophes can also cause shortages of building materials and contractors that delay reconstruction and substantially increase the costs of repairing homes. When damage to properties caused by excluded losses is not repaired, affected communities may experience blight and face reduced tax revenue. When federal and state governments have stepped in to cover what private insurers exclude, taxpayers may face a significant expense. In addition to federal disaster assistance, the National Flood Insurance Program (NFIP) paid more than $7 billion in claims after Superstorm Sandy. In Florida, insurers and policyholders can be assessed extra charges to help pay for state efforts to cover wind damage where it is not covered by insurers. Industry participants suggested that expanded private coverage could provide additional protection for homeowners and reduce reliance on government programs, but the resulting policy premiums would likely be prohibitively expensive for many homeowners. Multiple factors make expanding private coverage challenging, and several conditions would need to be addressed for insurers to offer more comprehensive insurance.
A main challenge is that expanded coverage would have higher costs, potentially limiting consumer demand. Even if insurers charged higher rates that were based on risk, the severity and unpredictability of catastrophic losses could still jeopardize insurers' solvency. Some industry participants said that insurers and others are discussing possibilities for expanding private homeowners coverage, with a focus on risk-based premiums, mitigation efforts, effective building codes, and sound land use policies. The challenging mix of financial risk, political and regulatory issues, policy cost, and consumer demand has thus far prevented private sector insurers in the U.S. from offering flood insurance to homeowners, let alone more comprehensive or all-perils policies. Because of this mix of factors, some in the insurance industry have suggested that a continuing financial role by federal and possibly state governments may be required, and that ensuring a response to the impact of disasters and other perils will require the cooperation and resources of government, homeowners, and insurers, as well as balance in the assumption of risk and cost by each of these parties.
|
PBMs administer the prescription drug part of health insurance plans on behalf of plan sponsors, such as self-insured employers, insurance companies, and health maintenance organizations (HMO). In 1989, PBMs managed prescription drug benefits for about 60 million people. In 1993, they managed drug benefits for about 100 million, or almost 40 percent of the U.S. population. Should this rate of growth continue, by the end of 1995 PBMs will provide services for health plans covering about 50 percent of the population. While the number of people covered by PBMs has increased significantly, the market for PBMs’ services continues to involve a small number of firms. Although there are over 40 PBMs in the United States, some estimates suggest that the 5 largest manage benefits for over 80 percent of the health plan enrollees covered by PBMs. They include PCS Health Systems, Medco, Value Rx, DPS, and Caremark International Inc.’s Prescription Service Division. All five PBMs were included in our study. A common technique PBMs use to manage pharmacy care is formulary development. A formulary is a list of prescription drugs, grouped by therapeutic class, that are preferred by a health plan sponsor. Drugs are included on a formulary not only for reasons of medical value but also on the basis of price. PBMs provide physicians and others with printed formularies that often use dollar sign designations to identify drugs according to their relative cost within a therapeutic class. For example, “$” can signify a low-cost product, while “$$$$” can signify a higher-cost product. Both the inclusion of a drug on a formulary and its cost designation can affect the utilization of a manufacturer’s products. PBMs and the health plan sponsors they represent encourage physicians to prescribe lower-cost formulary drugs over both nonformulary drugs and higher-cost formulary drugs for health plan enrollees. 
The extent to which the PBMs and their sponsors are successful in obtaining physician compliance with formularies can increase the sales and market share within a therapeutic class of a prescription drug, particularly for products on the formulary with the lowest cost designations. Because of this potential effect on the sales and market share of a drug, manufacturers offer PBMs rebates on drugs that face competition in return for both inclusion on a formulary and a low-cost designation. Because of the relationship between formularies and drug sales, FTC has reviewed the recent mergers on antitrust grounds to determine their potential impact on competition in the markets involved. Although FTC did not challenge mergers between Merck and Medco or SmithKline Beecham and DPS, it did challenge the merger that followed between Lilly and PCS Health Systems. FTC entered into a consent agreement with Lilly that established safeguards against the merger’s potential anticompetitive effects and also stated that it would continue to monitor the integration of drug manufacturers and PBMs. PBMs manage prescription drug coverage on behalf of health plan sponsors. Their objective is to provide high-quality pharmaceutical care at the lowest possible cost. PBMs are a relatively new type of firm that became a major market force only during the late 1980s. Their precursors were firms that provided prescription claims processing or mail-service pharmacy on behalf of insurers. While PBMs continue to provide these services, many provide additional services, such as formulary development and management, the development of pharmacy networks to serve health plan enrollees, negotiating drug rebates with manufacturers, generic substitution, and drug utilization review. Many PBMs are also developing products called “disease management” programs, which will attempt to provide the most cost-effective treatments for specific diseases. 
PBMs represent health plans and their enrollees in dealing with other participants in the prescription drug market. For example, a PBM negotiates with drug manufacturers to obtain rebates for a plan sponsor. PBMs also negotiate with retail pharmacies to obtain discounts on prescription drug prices and dispensing fees for health plan enrollees. In exchange for such services, a PBM may receive a percentage of manufacturer rebates or a fee per prescription. Figure 1 shows the typical network in which a PBM and other participants operate. PBMs we studied operate in networks that are structured similarly to the network shown in figure 1 and use several similar techniques to help control their customers’ drug costs. These techniques are applied in providing services related to formularies, pharmacy networks, claims administration, drug utilization review, and disease management. PBMs use formularies to help control drug costs by (1) encouraging the use of formulary drugs through compliance programs that inform physicians and enrollees about which drugs are on the formularies; (2) limiting the number of drugs a plan will cover; or (3) developing financial incentives to encourage the use of formulary products. Although PBMs develop formularies that they recommend to customers, health plan sponsors may work with them to develop customized formularies. In developing formularies, PBMs rely on pharmacy and therapeutic (P&T) committees, consisting of pharmacists and physicians, to analyze the safety, efficacy, and substitutability of prescription drugs. PBMs then rely on the recommendations of the P&T committee to determine the number of drugs to include on the formulary to give physicians a sufficient number of treatment options. Formularies can be open, incentive-based, or closed. Open formularies are often referred to as “voluntary” because enrollees are not penalized if their physicians prescribe nonformulary drugs. 
Thus, under an open formulary, a health plan sponsor provides coverage for both formulary and nonformulary drugs. Unlike an open formulary, an incentive-based formulary provides enrollees financial benefits if their physicians prescribe formulary drugs. Under this arrangement, the health plan sponsor still reimburses enrollees for nonformulary drugs but requires them to make higher co-payments than for formulary drugs. A closed formulary takes these financial incentives one step further by limiting coverage to formulary drugs only. Therefore, if an enrollee’s physician prescribes a nonformulary drug, the enrollee may have to pay the full cost of that prescription. However, the health plans cover nonformulary products when physicians determine that they are medically necessary for their patients. PBMs we studied reported that the vast majority of formularies they manage are open. For example, Medco officials told us that of the more than 2,000 plans Medco represents, only 4 of the plans (comprising just 3 percent of the enrollees covered by Medco) have adopted either an incentive-based or closed formulary. In another example, DPS officials determined that of about 90 formularies DPS manages (mainly for HMOs), about one-third are incentive-based or closed. However, officials of these PBMs expect that a greater number of health plan sponsors will adopt incentive-based and closed formularies in the future because of their potential to help reduce a plan’s drug costs. Incentive-based and closed formularies increase competition among drug manufacturers with competing drugs to get their drugs on PBMs’ formularies. PBMs also contract with networks of pharmacies to obtain discounts per prescription for the health plan enrollees PBMs represent. For each prescription, a PBM typically reimburses participating pharmacies according to a formula based on a drug’s average wholesale price (AWP) less a percentage, plus a dispensing fee. 
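The per-prescription reimbursement formula described above (a drug's average wholesale price less a percentage, plus a dispensing fee) can be sketched as follows. The discount percentage and fee amounts are illustrative assumptions; actual terms are negotiated between each PBM and its network pharmacies.

```python
def pharmacy_reimbursement(awp, discount_pct, dispensing_fee):
    """Typical PBM reimbursement per prescription: the drug's average
    wholesale price (AWP) less a negotiated percentage, plus a flat
    dispensing fee. All terms here are illustrative assumptions."""
    return awp * (1 - discount_pct / 100) + dispensing_fee

# Illustrative terms: AWP of $50.00, reimbursed at AWP minus 13 percent,
# plus a $2.50 dispensing fee
print(pharmacy_reimbursement(50.00, 13, 2.50))  # 46.0
```

The pharmacy thus accepts less than list price on the drug itself, recovering part of the difference through the dispensing fee and the customer volume the PBM's enrollees represent.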
PBMs also encourage pharmacies to support other cost-reduction techniques, such as substituting a generic for a name brand when appropriate. Pharmacies accept set levels of reimbursement and other PBM cost-reduction techniques in order to attract or retain the potential customer base represented by a PBM’s millions of enrollees. In addition, PBMs we studied can reduce their customers’ administrative costs by using on-line computerization to verify claims and process payments. This is highly efficient compared with methods that rely on mailed-in claims. PBMs provide their customers’ enrollees with magnetically encoded cards that a pharmacist uses to confirm their health plan membership and to access the PBM screen on the pharmacy’s computer terminal. This screen lists the drugs on a plan’s formulary, any requirements for enrollee co-payments, and allows the pharmacist to request payment on-line from the PBM after dispensing a prescription. PBMs we studied also conduct retrospective and prospective drug utilization review (DUR) both to enhance the quality of pharmaceutical care and to potentially generate savings. Under retrospective review, PBMs study the drug utilization statistics of a customer’s enrollees to identify any instances in which physicians prescribed potentially inappropriate medications. If PBMs identify inappropriate patterns of prescribing or consumption, they will attempt to contact and educate physicians about more appropriate and potentially cost-effective treatments. Under prospective review, PBMs use a computer link with network pharmacists to review each prescription before it is dispensed. Prospective DUR helps PBMs to identify whether there is a generic or formulary alternative to the prescribed drug and whether the drug will duplicate an existing prescription or will adversely interact with other drugs the patient is using. 
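The prospective DUR checks just described might be sketched as follows. The lookup tables, drug names, and function are hypothetical, standing in for the clinical databases and formulary data an actual PBM system would consult at the point of sale.

```python
# Minimal sketch of prospective drug utilization review (DUR) checks run
# before a prescription is dispensed. All data below are hypothetical.

FORMULARY_ALTERNATIVES = {"Brandex": "generic brandex"}   # hypothetical
INTERACTIONS = {frozenset({"Brandex", "Otherol"})}        # hypothetical pair

def prospective_dur(new_rx, patient_rx_history):
    """Return warning flags for display on the pharmacist's screen."""
    flags = []
    if new_rx in FORMULARY_ALTERNATIVES:
        flags.append(f"alternative available: {FORMULARY_ALTERNATIVES[new_rx]}")
    if new_rx in patient_rx_history:
        flags.append("duplicates an existing prescription")
    for existing in patient_rx_history:
        if frozenset({new_rx, existing}) in INTERACTIONS:
            flags.append(f"potential interaction with {existing}")
    return flags

print(prospective_dur("Brandex", ["Otherol"]))
```

Each returned flag corresponds to one of the checks described above: a generic or formulary alternative, a duplicate prescription, or a potential drug interaction.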
If a nonrecommended, redundant, or potentially harmful drug is identified, the pharmacist is notified on the computer screen. PBMs we studied are working to add physicians to this on-line network to help reduce prescribing errors by communicating DUR results, as well as patients’ medical histories, as care decisions are being made. PBMs we studied also plan to help contain spending for chronic conditions, such as asthma and diabetes, by developing “disease management” programs to manage the care of enrollees with these illnesses. To develop these programs, PBMs are evaluating various treatment options, or therapies, discussed in existing medical research to identify those that are associated with better therapy management as well as low overall spending. PBMs then intend to educate both health plan enrollees and their physicians about these more cost-effective treatments and to monitor the degree of their compliance with related protocols over time. For example, officials of one PBM explained that when an enrollee enters its program for diabetes, the PBM notifies the enrollee’s physician and provides both the enrollee and the physician information on its disease management protocol. Regarding one such treatment, the PBM seeks to help reduce the risk of complications and costly additional care by encouraging enrollees to monitor their glucose levels and to adjust their insulin intake more frequently than is commonly recommended. The growth of PBMs and other industry developments have forced drug manufacturers to find ways to prevent profits from declining. At the same time that more drugs on the market face competition, purchasers have become more price-focused and organized. In particular, PBMs and other buyers have been able to use formularies to obtain significant rebates from manufacturers. Rather than lose market share, manufacturers have provided discounts on drugs that face competition to obtain inclusion and low-cost designation on PBMs’ formularies. 
Furthermore, many manufacturers believe that, in the future, pharmaceutical care will involve disease management. Currently, prescription drugs are managed separately from other components of health care. This approach may result in higher overall spending for a health plan sponsor than the management of all aspects of care for plan enrollees with similar illnesses. In response to a changing environment, large pharmaceutical manufacturers have vertically integrated into the market for PBM services. Merck was the first manufacturer to acquire a PBM partner when it purchased Medco in November 1993. In 1994, SmithKline Beecham acquired DPS and Lilly acquired PCS. Rather than acquire a PBM, Pfizer, Inc. contracted to form strategic alliances with two PBMs, Caremark International and Value Rx—plus Value Rx’s parent company, Value Health, Inc. Table 1 provides information about each merger or alliance. (See app. II for additional information on the companies involved in these ventures.) The manufacturers believe that merging or allying with a PBM will provide competitive advantages that will enable them to maintain profits. Among other things, each venture provides the manufacturer access to the PBM’s formularies, which can help a manufacturer increase market share while developing programs to compete in a market for disease management products. For example, formulary access can help to increase the market share of a manufacturer’s drug, particularly if it was not on the PBM partner’s formulary before a merger or alliance. Market share can be further enhanced if the manufacturer gives the PBM sufficient price discounts to gain a low-cost designation for its drug on the PBM’s formularies. According to representatives of several PBMs, their contacts with physicians to encourage them to prescribe drugs that are on formularies and have low-cost designations usually result in the physicians’ compliance. 
Because of the increase in market share resulting from formulary inclusion and low-cost designation, manufacturers may also reduce the sales and marketing costs for a product. The manufacturers also believe that PBMs will provide them the cornerstones of disease management programs, namely the abilities to uncover the most cost-effective treatments for various diseases, such as asthma and diabetes, and to ensure that patients comply with them. Specifically, the manufacturers and their PBM partners seek to contain health plan sponsors’ overall health care costs by establishing programs to encourage more cost-efficient care for patients with particular illnesses. The extent to which prescription drugs, particularly those sold by the manufacturer partners, will be used in these disease management programs will depend on their cost-effectiveness as part of overall treatment. However, because the ventures are new, it is too soon to determine whether each manufacturer has achieved its objective of enhancing profits by increasing market share and marketing disease management programs. Among the manufacturers we studied, only Merck has acknowledged an increase in its share of the drug sales managed by its PBM partner. In addition, the manufacturers and their PBM partners are in varying stages of developing disease management products and the success of these products is not yet known. Medco has six disease management programs either fully operational or in the pilot stage, including programs for diabetes and asthma. The other PBMs have launched either diabetes or asthma programs. However, all the PBMs are developing additional programs to treat these illnesses and others, including depression, ulcers, and cardiovascular disease. Critics of the recent mergers and alliances believe that the ventures will reduce competition in markets for pharmaceutical and PBM services. This concern is based on several contentions. 
First, competition in the pharmaceutical market would be reduced as aligned PBMs and their manufacturer partners collaborate to ensure inclusion and low-cost designation for the partners’ drugs over competitors’ on the PBMs’ formularies. This preference for a partner’s products would preclude other manufacturers from effectively competing with its products on the formularies managed by the PBM partner. Such preference would be exacerbated as the PBMs move to more restrictive formularies. Second, competition in the market for PBM services would be substantially lessened as the aligned PBMs would be able to obtain their partners’ products at extremely advantageous prices over nonaligned PBMs. This would give additional market power to the aligned PBMs, which already cover most health plan enrollees, and make it more difficult for new PBMs to enter the market or for smaller, existing PBMs to stay competitive. Several industry analysts contend, however, that it is too soon to determine the overall effects, either negative or positive, of the ventures on competition in the markets for either pharmaceutical products or PBM services. For example, these analysts contend that it is not possible to determine in the short term how competitive new or existing PBMs may be in this market. They believe that the PBM market may become more competitive as health plan sponsors begin to analyze the effectiveness of PBMs that represent them. They noted that if the PBMs that are the largest now do not continue to perform for their customers in controlling drug costs, the customers can switch to other PBMs. Industry analysts are more concerned, however, about the influence drug manufacturers may have on their PBM partners’ formulary decisions. 
They believe that any collaboration between aligned companies, or actions taken by a PBM partner, to ensure competitive advantages for the manufacturer partner’s drugs over competitors’ could reduce competition significantly in the manufacturer partner’s market, such as the market for an individual therapeutic class of drugs. Competitive advantages can be gained by eliminating opportunities for other manufacturers to compete for inclusion and low-cost designation for their drugs on the PBM partner’s formularies. FTC reviewed the recent mergers to determine their potential impact on the markets for drug manufacturers and PBMs. It issued a complaint against the Lilly/PCS merger and determined that safeguards were necessary to ensure that Lilly and PCS maintain a competitive process for determining which drugs to include on PCS’ formulary and the drugs’ cost designations. Accordingly, FTC entered into a consent agreement with Lilly, requiring that (1) PCS maintain an “open” formulary, defined as one that includes any drug that PCS’ P&T committee deems appropriate; (2) PCS appoint an independent committee to oversee this formulary, consisting of a majority of persons outside of either Lilly or PCS; (3) Lilly and PCS establish safeguards that prevent each from sharing nonpublic information concerning other drug manufacturers’ and other PBMs’ bids, proposals, contracts, prices, rebates, discounts, or other terms of their mergers; and (4) PCS accept all discounts, rebates, or other concessions offered by other manufacturers and reflect these when determining the ranking of products on the open formulary. Manufacturers we studied and their PBM partners told us that they had established safeguards similar to those accepted by Lilly. Like PCS, the other PBMs indicated that they offer an open formulary, which the majority of payers adopt. With one exception, the PBMs also noted that they had already established independent P&T committees. 
Furthermore, officials for each PBM said that they had established “fire walls” that prevent the PBMs from providing their manufacturer partners with confidential price information, such as bids from other manufacturers. Industry observers agree that these fire walls are the most essential part of the Lilly/PCS agreement for ensuring a competitive bidding process. Officials from each PBM also told us that they continue to consider bids from manufacturers whose drugs compete with drugs sold by their respective partners. Since the Lilly agreement, Medco has developed written policies that establish and govern fire walls as well as other safeguards that are intended to address FTC’s concerns. Critics of the Lilly/PCS merger have contended that the safeguards established by FTC in the consent agreement are inadequate to address their concerns about the venture’s potential anticompetitive effects. For example, before final approval of the consent agreement, NACDS contended that the agreement did not address the issue of aligned PBMs having the option to develop closed formularies that could favor their manufacturer partners’ drugs and exclude those sold by competitors. Furthermore, NACDS believed that the fire walls were inadequate to prevent the exchange of sensitive competitive information between aligned companies, including market shares for specific drugs. In addition, NACDS expressed concern that the agreement did not address the merger’s potential effect on drug prices paid by retail drug stores and consumers. In addition to approving the Lilly consent agreement, FTC said that it would continue to monitor several aspects of vertical integration of drug manufacturers and PBMs. Such monitoring includes whether and to what extent products of drug manufacturers, especially those not vertically integrated with PBMs, are prohibited (foreclosed) from formularies managed by aligned PBMs. 
The monitoring also includes whether and to what extent the vertical integration of drug manufacturers and PBMs results in anticompetitive interaction among integrated companies as well as any increase in drug prices or reduction in choice of drugs for consumers. Determining whether PBMs involved in these ventures maintain fire walls and refrain from collaborating to give preference to their manufacturer partners’ drugs requires access to proprietary information. Such information includes the process used by a PBM to consider which drugs are to be added to or deleted from a formulary, the reasons for changes, and whether competitive bids were sought and considered. To obtain such information requires an extensive right of access, such as that given to FTC. Absent proprietary information from PBMs related to formulary development, changes in formularies can be reviewed to determine whether there are signs of potential problems. For example, if a pattern developed in which a manufacturer partner’s drugs received the lowest-cost designations on its PBM partner’s formularies, it would raise questions from competing manufacturers and others about the process used by the PBM to make such formulary decisions. We reviewed formularies managed by Medco and DPS several months before and after their mergers to determine any changes in the preference given to their respective manufacturer partner’s products. Two months before concluding its agreement to merge with Merck, Medco increased its preference for Merck drugs by adding a number of Merck’s large-dollar-volume products to its formulary and dropping several drugs that competed with Merck’s drugs. In contrast, the number of SmithKline Beecham’s products on DPS’ formulary and their cost designations changed little. In January 1993, few Merck products were on Medco’s recommended formulary. 
Of the eight Merck products that represent almost all Merck sales to Medco enrollees, only Proscar was on Medco’s formulary. However, according to Medco officials, Merck and Medco established an agreement to add the remaining seven products to Medco’s formulary during May 1993, 2 months before reaching their decision to merge and 6 months before closing their merger. Specifically, these products were Prinivil and Vasotec, two cardiovascular drugs known as ACE inhibitors; Mevacor and Zocor, two cholesterol-lowering agents; Prinzide and Vaseretic, two antihypertensive combination drugs; and Pepcid, an antiulcer drug known as a histamine H2 receptor antagonist. Including these products increased the number of drugs in their respective therapeutic classes on the formulary, except for Prinivil and Prinzide, which replaced their chemical equivalents, Zeneca’s Zestril and Zestoretic. Table 2 shows changes to Medco’s formulary from 1994 to 1995 that could benefit the sale of Merck products. For example, between 1994 and 1995 one cardiovascular drug, Monopril, was dropped from the formulary. This change left Prinivil and Vasotec with fewer competitors on the formulary and Prinivil with one, rather than two, competitors with the lowest cost designations. Not only have cardiovascular drugs been Merck’s top-selling class of drugs in worldwide sales, but Vasotec has been Merck’s number one sales product. Table 2 also shows that, by 1995, Zocor and Mevacor faced fewer competitors after three non-Merck products were dropped from the cholesterol-lowering class. As with the cardiovascular class of drugs, Merck has dominated worldwide sales in the cholesterol-lowering class. In contrast to these gains, however, Merck products in the antihypertensive combinations and H2 antagonist classes were, by 1995, less competitive on the basis of cost designation. 
Table 2 shows that since 1994 the number of other manufacturers’ antihypertensive combination drugs that compete with Prinzide and Vaseretic increased from eight to nine. Also, most of these products retained the same or a lower cost ranking than both Merck products. Likewise, because a competing product (cimetidine, the generic version of Tagamet) achieved a new, lowest cost designation, Merck’s Pepcid now shares the second to lowest dollar-sign designation with Lilly’s Axid, rather than holding the lowest cost ranking among H2 antagonists. Even after the three products were dropped from the cholesterol-lowering class, Zocor and Mevacor still faced not only each other but also two competitors, Bristol-Myers Squibb’s Pravachol and Sandoz’s Lescol. In response to these concerns, Medco officials told us that Merck’s products were included on Medco’s formulary through careful and fair P&T committee and other company deliberations that considered both the medical value and costs of competing drugs. They added that Medco did not exclude any drugs from its formulary because they compete with large-dollar-volume Merck products. Before the SmithKline Beecham/DPS merger in May 1994, DPS’ formulary contained SmithKline Beecham’s four largest-dollar-volume outpatient drugs. Distributed among four therapeutic classes, these were Augmentin, an antibacterial penicillin drug; Tagamet, an H2 antagonist; Relafen, a nonsteroidal anti-inflammatory drug (NSAID); and Paxil, an antidepressant referred to as a selective serotonin reuptake inhibitor (SSRI). Tagamet was in a higher cost category than one competitor, while Paxil shared the same cost designation with the two others listed in its class. Augmentin and Relafen not only faced generic competition but also, along with others, had the highest cost designation among brand-name products in their respective classes. Table 3 shows that following the merger, the number and cost designation of SmithKline Beecham’s large-dollar-volume products on DPS’ formulary remained largely unchanged. 
For example, Famvir, an antiviral therapy introduced during the third quarter of 1994, was added to the formulary for 1995, but Tagamet’s generic equivalent is now available. In addition, although table 3 shows that Paxil lost one competitor and gained a lower cost ranking than the remaining product, the table also shows that Relafen gained both an additional competitor and a higher cost designation. Furthermore, table 3 shows that Augmentin continued to have the same number of competitors and the highest cost designation in its class. Our review of changes in Medco and DPS formularies is but one way to help assess how the independence of PBMs may have changed since their mergers with manufacturers. PBMs in our study contend that they remain independent of their manufacturer partners in serving their customers, particularly in containing their customers’ overall drug costs. Although Medco’s preference for Merck products increased substantially 2 months before their merger agreement, the results of our review of formulary changes do not necessarily mean that changes in Medco’s, or any other aligned PBM’s, formularies were the result of anticompetitive behavior on the part of the PBMs or manufacturers. However, changes in formularies can serve as an indicator that additional questions may be warranted about the processes aligned PBMs use in making formulary decisions. Given FTC’s antitrust role, its access to proprietary information, and its experience in reviewing recent mergers, our findings support FTC’s decision to continue monitoring ventures involving drug manufacturers and PBMs to assure participants in the PBM and prescription drug markets that these markets remain competitive. A draft of this report was reviewed by officials of Merck, Medco, SmithKline Beecham, DPS, Lilly, FTC, and two leading analysts of the pharmaceutical industry. In general, they agreed with the information presented in the report. 
Where appropriate, the report reflects their technical comments. We will make copies of this report available upon request. The report was prepared by John C. Hansen, Assistant Director, and analysts Joel Hamilton and Patricia Barry. Please call Mr. Hansen at (202) 512-7105 if you or your staff have any questions about this report. To address the study’s objectives, we first determined the role of PBMs in the health care industry. We reviewed pertinent literature and interviewed officials of companies involved in the ventures. These companies included Merck & Co., Inc., SmithKline Beecham Corporation, Eli Lilly and Company, and their respective PBM subsidiaries: Medco Containment Services, Inc., Diversified Pharmaceutical Services, Inc., and PCS Health Systems, Inc. We also interviewed officials of Pfizer, Inc. and its allied partners, Caremark International, Inc. and Value Rx. In addition, we met with several Wall Street analysts familiar with the PBM market to obtain a history of its evolution. Second, to determine the objectives of the ventures, we again interviewed officials of the companies in our study. We also reviewed internal documents, press releases, and annual reports provided by these officials that helped expand on their comments. Third, to understand specific concerns about the mergers and alliances, we contacted nonaligned PBMs, health plan sponsors, and pharmaceutical economists. We also interviewed officials of pharmaceutical trade associations, such as the National Association of Chain Drug Stores and the American Pharmaceutical Association. We asked these sources about changes to the pharmaceutical industry following the mergers and alliances as well as their views on the conditions established by FTC in its consent agreement with Lilly. 
In addition, we reviewed public comments FTC received regarding Lilly’s acquisition of PCS and asked officials of the companies in our study whether they had policies or procedures that would meet the conditions set forth in the consent agreement. Fourth, to assess the extent to which PBMs may have given preference to their manufacturer partners’ drugs over competitors’ drugs, we compared formularies for DPS and Medco before and after the mergers. We compared formularies that existed several months before each merger to 1995 formularies to determine changes to (1) the drugs listed and (2) the cost designation of the manufacturer partner’s drugs versus other manufacturers’ drugs. We reviewed formulary changes for DPS and Medco because they were the PBMs involved in mergers for the longest period of time and, therefore, had had the most time to make any formulary changes. Our work was performed between June 1994 and September 1995 in accordance with generally accepted government auditing standards. The various manufacturer and PBM ventures are similar in that each one provides a manufacturer access to a PBM’s formularies and aggregate data concerning its enrollees. This enables the manufacturer to improve its marketing strategies, enhance market share, and develop disease management programs. The mergers and alliances are described below. On November 18, 1993, Merck & Co., Inc. purchased Medco Containment Services, Inc. for $6.6 billion. Headquartered in Whitehouse Station, New Jersey, Merck manufactures human and animal health care products. During 1993, it had net revenues of $10.5 billion, making it the largest company in terms of U.S. pharmaceutical sales. Principal products include Prinivil and Vasotec, two cardiovascular products; Mevacor and Zocor, two cholesterol-lowering agents; and Pepcid, an antiulcerant. 
At the time of its acquisition, Medco, based in Montvale, New Jersey, was the second largest PBM, covering more than 33 million lives and managing about 95 million prescriptions or $4 billion in drug expenditures annually. During 1995, Medco expects to manage benefits for about 40 million people and remain the second largest PBM. Immediately after the merger, Medco operated as a subsidiary of Merck under Medco’s existing senior management. In January 1994, Merck and Medco formed the Merck-Medco U.S. Managed Care Division, which initially included a unit that marketed Merck products to managed care organizations as well as Medco, which marketed PBM services to health plan sponsors. The Merck managed care product unit was transferred back to Merck’s Human Health Division in October 1994. The Merck-Medco Managed Care Division now consists of Medco only and no longer has any responsibility for managed care product sales. In early 1995, Merck formally adopted a policy under which Medco operates independently of Merck. Merck markets its pharmaceutical products through its U.S. Human Health Division. Following the Merck/Medco merger, SmithKline Beecham Corporation, the U.S. operating subsidiary of United Kingdom-based SmithKline Beecham plc, announced on May 3, 1994, that it would acquire Diversified Pharmaceutical Services, Inc. (DPS) from United HealthCare Corporation for $2.3 billion in cash. Based in Philadelphia, SmithKline Beecham manufactures therapeutics for human and veterinary use and was the seventh largest manufacturer in terms of U.S. pharmaceutical sales for 1993. Its products include Tagamet, an antiulcerant; Relafen, a nonsteroidal anti-inflammatory drug; Famvir, an oral antiviral; and Paxil, an antidepressant. Bloomington, Minnesota-based DPS was founded in 1976 as a wholly owned subsidiary of United HealthCare Corporation, an operator of HMOs, preferred provider organizations, and other health care organizations. 
During 1993, DPS was the third largest PBM, managing pharmaceutical benefits for about 14 million people or $2 billion in drug expenditures. Following its acquisition, DPS continued to operate as an independent company under its existing senior management. In addition to acquiring DPS, SmithKline Beecham will maintain, for a minimum of 6 years, a two-part relationship with United HealthCare that SmithKline Beecham believes provides advantages over other manufacturer/PBM partnerships. First, SmithKline Beecham will have exclusive rights to the medical records of United HealthCare’s 1.6 million members. When integrated with drug utilization data, such data could substantially augment studies concerning cost-effective drug treatments and the development of disease management programs. Second, United HealthCare plans to continue to use DPS as its PBM for its own managed care operations, encourage affiliated plans to rely on DPS, and not compete with DPS in the pharmacy benefit management business. In November 1994, Eli Lilly and Company purchased PCS Health Systems, Inc. from McKesson Corporation for $4 billion in cash. Located in Indianapolis, Indiana, Lilly manufactures pharmaceuticals, medical devices, diagnostic products, and animal health products. In 1993, Lilly had net revenues of $6.45 billion and the fifth highest level of U.S. pharmaceutical sales. Its pharmaceutical products include Prozac, an antidepressant; Axid, an antiulcer agent; and Iletin and Humulin, antidiabetic agents. Based in Scottsdale, Arizona, and founded in 1968, PCS Health Systems was formerly a wholly owned subsidiary of McKesson Corporation, the world’s largest distributor of pharmaceuticals and related health care products. Originating as a claims processor, PCS has consistently ranked as the largest PBM. 
At the time of its acquisition, it administered pharmaceutical benefits on behalf of roughly 1,300 customers who accounted for over 50 million lives and as much as $9 billion in drug expenditures. Under the terms of the agreement, PCS will continue to operate as an independent company under its existing senior management. Also, McKesson will continue to have access to certain PCS capabilities and services, such as its information systems. In addition, Lilly has agreed to develop a series of strategic alliances with the remaining McKesson pharmaceutical distribution businesses. On May 3, 1994, Pfizer, Inc. announced a strategic relationship with Value Health, Inc., the parent company of Value Rx. New York-based Pfizer is a multinational producer and distributor of health care, animal health, food science, and consumer products. During 1993, it had net sales of $7.5 billion and ranked eighth among manufacturers in terms of U.S. pharmaceutical sales. Its health care products include Feldene, an anti-inflammatory agent; Procardia, a cardiovascular agent; and Zoloft, an antidepressant. Value Health is a provider of specialty managed care benefit programs and health care information services. It comprises six companies, including Value Health Sciences and Value Rx Pharmacy Program. Value Health Sciences, located in Santa Monica, California, is a provider of clinical software and physician review services. Value Rx, a PBM located in Scottsdale, Arizona, and Bloomfield Hills, Michigan, was the sixth largest at the time of the announcement, covering about 11 million lives. Although the financial terms of the various contracts were not announced, the relationship has three parts. First, in return for rebates, several Pfizer drugs will be included on Value Rx’s drug formularies. 
Second, Value Health Sciences has agreed to develop programs, such as clinical protocols, physician and patient education materials, and outcomes analyses, to increase physician and patient use of Pfizer products. Third, Value Health and Pfizer each contributed $50 million to fund a new company to establish disease management programs. Value Health has emphasized that, unlike an acquisition, this contractual relationship does not affect its operating independence. In a related event, during May 1995, Value Health announced its acquisition of Diagnostek, Inc. for $480 million. Headquartered in Albuquerque, New Mexico, and founded in 1983, Diagnostek is a provider of diagnostic-imaging centers, PBM services, and pharmacy services to institutions such as hospitals and nursing homes. Just before the merger, its PBM business unit covered approximately 16 million lives. Because of the acquisition, Value Rx will now cover approximately 32 million lives, making it the largest independent PBM and the third largest overall. Pfizer also partnered with Caremark International, Inc. during 1994. Headquartered in Northbrook, Illinois, and incorporated in 1992, Caremark International operates in two business segments: patient care and managed care. The managed care segment includes Caremark’s Prescription Service Division, a PBM and mail-service pharmacy. In 1994, it ranked fourth among PBMs, managing benefits on behalf of 1,100 customers who together covered about 13 million lives. Pfizer’s relationship with Caremark is a part of Caremark’s Drug Alliance Program. Established in April 1994, this program involves contractual relationships with four major pharmaceutical manufacturers: Pfizer; Rhone-Poulenc Rorer, Inc. of Collegeville, Pennsylvania; Bristol-Myers Squibb Company; and Eli Lilly. 
Although the amount Caremark received from each partner was not disclosed, each relationship gives the manufacturer access to both Caremark’s formulary and the drug utilization statistics of its covered lives. By partnering with four manufacturers, Caremark will receive rebates on products in over 85 percent of the therapeutic classes on its formulary. It also expects to gain advantages in the development of disease management programs by merging the research capabilities of each manufacturer.
Pursuant to a congressional request, GAO reviewed mergers between pharmacy benefit managers (PBM) and pharmaceutical manufacturers, focusing on: (1) PBM role in the health care industry; (2) the mergers' objectives and effect on competition; and (3) the extent to which PBM have given preference to their manufacturer partners' drugs. GAO found that: (1) drug manufacturers have allied with PBM to help maintain their profits in an increasingly competitive marketplace; (2) PBM help health plan sponsors administer prescription drug benefits and help them contain their overall drug costs; (3) manufacturers rely on their PBM partners to develop new programs for treating specific diseases and increase the market share for their drugs; (4) critics of PBM alliances are concerned that the companies involved could act to restrict competition among manufacturers for inclusion on PBM formularies; (5) variations exist in the extent to which PBM have given preferences to their manufacturer partners' drugs; and (6) the Federal Trade Commission monitors PBM alliances to help ensure that PBM maintain competitive processes that allow other manufacturers to compete for low-cost designation for their drugs on PBM formularies.
The current ERISA debate stems primarily from the act’s preemption of state laws that “relate to” employer- or union-sponsored health plans, which provide coverage for about 140 million Americans. In general, ERISA prevents states from regulating employer health plans but allows them to regulate the terms and conditions of health insurance sold in the state. Thus, for example, states cannot require employers to provide health care coverage, but they can require that all health insurance policies sold in the state include specific benefits (for example, mental health benefits). This results in a very different regulatory framework depending on whether the employer purchases its health care coverage from an insurer, which the state regulates, or self-funds its health plan, avoiding many state regulations. Although ERISA prevents states from directly regulating employer health plans, it does impose certain federal requirements on all health plans. These include reporting requirements providing, for example, that information about each plan be reported annually to the Department of Labor; disclosure requirements ensuring that plan participants and beneficiaries have access to information about the plan; fiduciary obligations prohibiting conflicts of interest and imposing certain fund management and investment practices; and plan claims filing procedures including, for example, a process for appealing claim denials. ERISA does not, however, require employers to provide or maintain a minimum level of health benefits nor to set aside funds to pay for expected health claims. ERISA requirements apply to all private employer-based health plans, whether fully insured through a third party or self-funded. In a self-funded health plan, an employer directly holds much of the financial risk associated with its employees’ health care costs. 
Often, an employer that self-funds simplifies its administrative burden by contracting with an insurance company or other organization to perform administrative services. In addition, an employer that self-funds often purchases stop-loss insurance that moderates its risk by capping the amount of claims it will pay directly for either an individual or the group. Court decisions on the scope of ERISA preemption continue to affect the nature and structure of employer-based health plans. ERISA was initially passed primarily in response to concerns about the solvency and security of employer-based pension plans, but its preemption clause made it possible for employers to provide all employee benefits—including health plans—largely free from state regulation. The impact of ERISA has become increasingly significant as the number of self-funded health plans has grown. The original ERISA preemption language was sufficiently ambiguous that courts have had to elaborate on its scope. Courts have tried to delineate how closely state laws must relate to employer health plans to be preempted. In Metropolitan Life Ins. Co. v. Massachusetts, a unanimous Supreme Court identified a crucial distinction under ERISA between the treatment afforded health plans that are self-funded and those that are fully insured. The Court’s decision permitted states to generally enforce laws that apply to insurers even though this would impact the employee health plans that they insure. In effect, this decision has produced a divided regulatory system: the federal government retains the sole authority under ERISA to regulate employer-based health plans but not health policies sold by insurance companies; states can regulate health insurance companies and their policies but not the plans provided by employers. 
Thus, insured health plans are subject to specific consumer protections, state-mandated benefit laws, premium taxes, any-willing-provider laws, and participation in community-rated or high-risk pools; self-funded health plans are not. The distinction between self-funded and fully insured health plans does not, however, extend to health care coverage offered by federal, state, and local governments. For example, health plans offered through the Federal Employees Health Benefits Program (FEHBP), although not self-funded, are not subject to many state regulations such as insurance premium taxes and mandated benefits. Similarly, some state health plans are legally exempt from compliance with state requirements, although these plans often do comply or, in some instances, have legal requirements that apply only to state employee health plans. Table 1 categorizes and summarizes regulatory differences among employer-based health plans. The courts continue to delineate what state actions are allowed or preempted under ERISA. Recently, the Supreme Court issued its decision in New York State Conference of Blue Cross & Blue Shield Plans v. Travelers Ins. Co. This decision did not delineate fully between state actions that are preempted and those that are not but indicated that courts may approve state actions that do not conflict with ERISA’s underlying objectives or impact too greatly on employee benefit plans. In the wake of the Court’s ruling in Travelers, states are likely to perceive that they have more options and greater flexibility than previously recognized. In particular, the decision permits New York and other states to adopt hospital rate setting systems and may permit states to tax providers. The case suggests that state laws affecting employee health plans will have to be judged individually on the facts and circumstances in each case. The nature and magnitude of the impact on employee benefit plans of each state law at issue will determine the outcome. 
In cases in which the state law does not conflict with ERISA’s objectives, it should survive legal challenge. The Travelers case leaves substantial questions unresolved about ERISA preemption that may need to be resolved through further litigation. Although ERISA and court decisions have produced a sharp distinction in the regulatory status of self-funded and insured health plans, most employer plans actually fall along a continuum ranging from full insurance to complete self-funding. Clearly distinguishing between self-funded and fully insured plans is growing more difficult as the health market changes. Among factors contributing to the confusion are more extensive use of stop-loss coverage and innovative risk-sharing arrangements between employers and managed care organizations. The level of stop-loss coverage that a self-funded employer purchases is one factor that influences where an employer’s plan fits within this range: A plan with a low stop-loss threshold self-funds a smaller share of its risk than a plan with a high stop-loss threshold. Particularly among smaller employers, some health plans have stop-loss coverage beginning at a relatively low level of health claims. In addition, many employers that offer self-funded health plans also provide insured coverage to some employees. For instance, many employers that provide a self-funded plan also offer their employees a choice of one or more health maintenance organizations (HMO) that may not be self-funded. Some employers, however, are beginning to adopt alternative financing arrangements with managed care plans that place some financial risk with the employer as well as the plan and its providers. Some employers may also provide coverage for specific conditions, such as cancer or mental health care, through a separate plan that may be either insured or self-funded. 
In many cases, employees will not know whether their employer-based health plan is self-funded or purchased through an insurer, especially since commercial insurers often provide administrative services for self-funded health plans. Data on the number and characteristics of self-funded ERISA plans are scant largely because efforts to collect this information on the federal level have been limited. The incomplete data that do exist, however, indicate that self-funding has increased recently, both among small and large firms. Employers have increasingly self-funded to better manage their costs, through greater control over their health benefits and plan assets, as well as to maintain uniformity in health plans that cross state borders. The federal government is the only entity that can collect complete data on the number and characteristics of self-funded plans because ERISA preempts state efforts to require employers to provide health plan data. Because (1) the current federal reporting requirements focus on pension plans rather than health plans, (2) health plans with fewer than 100 participants are generally exempt from reporting, and (3) inconsistencies exist among the data reported for health plans, the current data are of little value in assessing the number or characteristics of employers that self-fund their health plans. Furthermore, the Department of Labor is currently considering revisions, but whether this would enhance or reduce the information available on self-funded health plans is unclear. The lack of a clear distinction between self-funded and insured health plans also contributes to the difficulty in estimating the number of individuals enrolled in self-funded plans. Particularly as the distinction between self-funded and fully insured plans has blurred due to the increased use of stop-loss coverage and alternative funding arrangements with HMOs, surveys and employers inconsistently report whether a plan is self-funded or fully insured. 
Despite incomplete data, our analysis of employer benefits surveys shows that in 1993 approximately 44 million individuals, or 17 percent of the U.S. population, were enrolled in self-funded ERISA health plans (see fig. 1). An additional 27 percent of the population, or about 69 million individuals, were enrolled in insured plans that are also subject to ERISA. Thus, a total of nearly 114 million Americans were enrolled in ERISA plans. The remainder of the population either had coverage from a government or church employer (27 million), Medicare (31 million), Medicaid (24 million), individual insurance (20 million), or Department of Veterans Affairs (VA) or Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) health plans (5 million); 40 million individuals were uninsured. The number of individuals enrolled in self-funded plans appears to be growing. On the basis of calculations we made from existing data sources, the total number of individuals enrolled in self-funded plans increased by nearly 6 million from 1989 to 1993. This growth is occurring in firms of all sizes. As shown in figure 2, the percentage of plan participants enrolled in self-funded health plans has increased from about 28 percent in 1986 to about 46 percent in 1993 in medium sized and large private establishments (those with at least 100 employees). Growth in self-funding appears to be occurring in small firms as well. In 1992, 32 percent of plan participants covered by private establishments with fewer than 100 employees were in self-funded plans; in 1990, 28 percent of plan participants in such firms were in self-funded plans. Limited evidence shows that even smaller firms—those with fewer than 50 employees—are beginning to self-fund. For example, a trade association representing self-funded plan interests provided examples of employers with as few as 13 employees that chose to self-fund. 
One third-party administrator we contacted has rapidly expanded its business among small self-funded employers. By 1994, this firm had contracts with more than 2,300 self-funded firms with 50 or fewer employees, including 132 firms with fewer than 10 employees. The growth in self-funding in small and large firms reflects employers’ recognition that self-funding employee health benefits offers several advantages. Employers believe that self-funding allows them to directly gain from their cost-containment efforts by having plan design flexibility, control of premium assets, and reduced administrative costs. In addition, self-funding allows employers to avoid potentially costly state regulation, including premium taxes, reserve funding requirements, benefit mandates, any-willing-provider laws, and participation in community-rated or high-risk pools. Employers also indicate that the ability to maintain national uniformity in plan design and benefits through self-funding enhances employee relations. As self-funding has grown, states have lost regulatory oversight over a growing portion of the health market. Between 1989 and 1993, we estimate that the number of self-funded plan enrollees increased by about 6 million individuals, and the number of privately insured individuals that state insurance commissioners regulate declined even further as more individuals became uninsured or enrolled in Medicaid or Medicare. With these changes, states are concerned that they cannot provide consumer protections to self-funded health plan participants and that their ability to tax and collect data on health plans is eroding. More broadly, states view ERISA preemption as an obstacle to their adopting a wide range of health care reform strategies. 
Given the improbability of federal reforms achieving universal coverage in the near future, many state governors and legislators are seeking an active role in expanding the number of individuals covered and in controlling health care costs. The response to the 1994 national health care reform debate and the views of recently elected governors and state legislators may have increased opposition to comprehensive reform in some states, but the impetus for incremental changes remains strong. States believe that ensuring adequate consumer protections will become increasingly difficult as more firms self-fund, exempting them from state insurance regulation. Our analysis of CPS data indicates that the number of individuals under state insurance commission oversight (that is, those with insured health plans offered through employers or purchased individually) declined by nearly 8 million between 1989 and 1993. Much of this decline was attributable to growth in the uninsured population and Medicare and Medicaid enrollment. However, enrollment in self-funded health plans increased by nearly 6 million during this time, also contributing to the decline in the number of insured health plan participants under state oversight. Although little evidence exists to substantiate self-funding’s adverse effect on plan participants, state regulators are concerned that federal fiduciary standards and their enforcement may not provide sufficient consumer protections for these participants. State regulators are particularly troubled that firms that they believe cannot adequately absorb the costs of self-funding will nonetheless choose this option. Although only anecdotal evidence exists of the difficulties facing small firms if they self-fund, in part because states do not have access to this information, some regulators contend that firms with fewer than 500 employees should not be completely self-funded. 
However, the size of firm that can adequately bear risk is subject to debate, especially since small firms can purchase stop-loss coverage to moderate their exposure to large, unexpected losses. In addition, disagreement exists about whether a firm’s size should be the measure of its ability to self-fund rather than its wage structure or financial condition. For example, some believe that even very small firms with relatively large assets can self-fund safely with adequate stop-loss coverage. Of more concern to state regulators than small firms’ purchase of traditional stop-loss coverage, however, are new stop-loss insurance products that more closely resemble traditional health insurance products with a high deductible. These products allow small firms to self-fund, avoiding state regulation, while only bearing a small portion of the risk. To address this issue, the National Association of Insurance Commissioners (NAIC) is developing a model act that would define minimum stop-loss coverage levels, preventing the sale of products that are merely a subterfuge for traditional health insurance. New York, Oregon, and North Carolina have already tried to address this issue by prohibiting or limiting the sale of stop-loss coverage to small firms. In addition to states’ concerns about the loss of regulatory oversight due to the increase in self-funded plans, states view ERISA as an obstacle to enacting comprehensive reforms and to adopting the more modest administrative simplification and insurance regulation proposals on which many states are focusing. ERISA clearly preempts state laws that mandate employers to offer or contribute to coverage. In addition, according to a report of the National Governors’ Association (NGA), the following are potential state actions prohibited due to judicial interpretations of ERISA: “establishing minimum guaranteed benefits packages for all employers; developing standard data collection systems applicable to all . . . health plans; developing uniform administrative processes, including standardized claim forms; establishing all-payer rate-setting systems; establishing a statewide employer mandate; imposing a level playing field through premium taxes on self-funded plans; and imposing a level playing field through provider taxes where the tax is interpreted as having an impermissible direct or indirect impact on self-funded plans.” The Supreme Court’s recent decision in Travelers, however, may have provided states more flexibility in some areas, particularly rate setting and provider taxes, than reflected in the NGA list. Because several states have passed comprehensive reform legislation that would likely be preempted by ERISA, some states have petitioned their congressional delegations to propose legislation granting broad exemptions from ERISA. Only Hawaii has succeeded in obtaining a statutory exemption from ERISA, enabling the state to mandate employers to provide health care coverage. Other states that have not tackled comprehensive reform have sought more limited ERISA exemptions for specific regulatory or tax initiatives. For example, before the Supreme Court ruling earlier this year, New York sought to amend ERISA to allow the state to continue taxing hospital services. Finally, to balance state desires for additional regulatory authority over self-funded health plans with business concerns, some state representatives have proposed establishing additional federal standards to apply to all health plans. Several states have passed comprehensive health care reform legislation, including employer-mandated coverage or play-or-pay systems, that would likely be preempted without an exemption from ERISA. Although several states continue to seek waivers, these states’ commitment to their enacted reforms is fading. Indeed, implementation has been delayed in most states in part because of concerns about an ERISA challenge but also because of several other key factors. 
These factors include changes in governors and state legislative representatives, constrained state budgets, difficulty in passing necessary financing measures, and the opposition of small businesses. For example, Massachusetts has delayed the implementation of a play-or-pay system several times since its enactment in 1988, and the current governor seeks its repeal. In 1995 Washington repealed the employer-mandated health care coverage passed in 1994 because the newly elected legislature opposed it. Opponents of the employer mandate in Oregon anticipate that a sunset clause in the legislation will obviate the need for an outright repeal. Although several states have retreated from comprehensive health care reforms, many continue to seek narrower reforms, including taxing authority and data collection, that ERISA may also preempt. States maintain that they should have the right to apply taxes uniformly to all participants in the health care market without ERISA’s shielding a group of employers. Because self-funded plans are shielded from premium and other taxes, as well as participation in high-risk pools, a disproportionate share of the cost of state programs to improve access or expand coverage could fall on insured plan participants. States would also like to collect data to adequately assess characteristics of their health care market and the effectiveness of their small group reforms. States would like data partly because they fear that increased self-funding in small firms will lead to only the high-risk and high-cost groups remaining in the small group insurance market. For example, although Massachusetts officials no longer seek an ERISA waiver to implement their play-or-pay provision, they would like some narrower changes to ERISA to allow them to collect information on self-funded health plans to measure the success of their small group reforms. 
Many employers, particularly larger self-funded firms, view ERISA preemption of state regulation of employer health plans very differently from the states: they view it as a fundamental strength of a voluntary employer-based health care system. They note that preemption was designed to provide uniform rules for all employers and to prevent states from imposing 50 different regulatory approaches to health care. In general, they view private market decisions as a more effective tool in managing the nation’s health care system than a government-sponsored system or state regulation. Although employers focus on different aspects of the ERISA debate, they are generally opposed to granting the states greater flexibility. They believe that any change in ERISA may lead to state requirements that would hinder their ability to manage the cost and quality of their employees’ health care. Also, employers are concerned that greater state flexibility will mean higher costs for them, either through additional administrative burden, taxes, or increased litigation resulting from changes in the ERISA appeals process. They have expressed concerns that if changes to ERISA significantly raise their costs, they may have to reevaluate their voluntary provision of health benefits. Employers maintain that ERISA preemption provides the framework for them to manage the cost of their employees’ health care coverage. They cite several recent studies and reports, as well as their own experience, as evidence of their initiatives’ effectiveness. For example, employer surveys by Foster Higgins, an employer benefits consulting firm, indicate that average costs for employer-based health coverage decreased 1.1 percent between 1993 and 1994. They are concerned that changes to ERISA that either grant greater state flexibility or impose federal standards may severely hamper their cost-containment efforts. 
Employers point to current state-mandated benefits, any-willing-provider laws, and risk pooling in the insured market as examples of state actions that would undermine their recent cost-containment and quality enhancement strides. Employers argue that benefit mandates, if applied to self-funded plans, may limit their ability to alter benefits offerings to control costs. For example, employers note that changing mental health benefits from a limited number of inpatient and outpatient days to a more flexible case management system has saved money and improved quality of care. They believe that a mandated mental health benefit as adopted in some states for insured plans would restrict benefit design and not allow this innovation. Employers are also concerned that states’ any-willing-provider laws may severely impede their cost-containment efforts. They believe that their increased reliance on managed care has been integral to lowering their health care costs. Because the basic tenets of managed care are to limit choice of provider to control utilization and ensure adequate patient volume and provider quality, employers argue that if the law requires managed care plans to accept all providers meeting certain criteria, managed care will lose its ability to control health costs. Although the courts are deciding the scope and extent of any-willing-provider laws, these laws have not been applied to arrangements between self-funded plans and managed care providers. Employers oppose any amendments to ERISA that would extend the reach of these laws. In addition, employers generally oppose amendments to ERISA that would allow states to include self-funded plans in community-rated pools. In community-rated pools, health costs are spread more evenly among the participants in the pool without reflecting the employer’s actual claims experience. 
Thus, an employer with previously higher than average health care costs would see those costs reduced, and one with lower than average costs would see them increased. Employers argue that community rating removes nearly all incentives for them to innovate to control costs because the savings do not accrue to the employer but to the whole community. Employers are concerned that amendments to ERISA that increase state flexibility will result in higher administrative costs and higher taxes, either directly or through an employer mandate. Also, they believe that ERISA changes may cause them to lose other advantages of self-funding, such as control over plan assets, and expose them to expensive lawsuits arising from health care claim denials. Employers oppose ERISA amendments that would grant states regulatory authority over self-funded plans. Large and small firms with workers in many states view the prospect of different state reform initiatives and regulatory systems as cumbersome, costly, and unnecessary. The administrative burden may be especially acute in the 41 U.S. metropolitan areas that cross state boundaries. However, measuring the potential cost of compliance with differing state administrative requirements would depend largely upon the variance in regulations that states adopt as well as how employers design and administer their plans. Multistate employers maintain that compliance with multiple systems or requirements will hinder their ability to preserve nationwide uniformity in their health plans, harming employee relations and weakening cost-control initiatives. By maintaining a uniform benefits plan, employers can provide equitable benefits to employees in different geographic locations, transfer employees without disrupting benefit coverage, and collectively bargain on a nationwide basis. For these reasons, to the extent that employers support health care reforms, they prefer uniform national standards to varying state standards. 
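The community-rating dynamic described above can be illustrated with simple arithmetic. The sketch below is hypothetical (the cost figures and the `community_rate` helper are illustrative, not drawn from the report): under community rating, every employer in the pool pays the pool-wide average cost rather than its own claims experience.

```python
def community_rate(employer_costs):
    """Pool-wide average annual cost per covered employee.

    employer_costs: list of (per_employee_cost, employee_count) tuples.
    Under community rating, each employer pays this average rather than
    its own claims experience.
    """
    total_cost = sum(cost * count for cost, count in employer_costs)
    total_employees = sum(count for _, count in employer_costs)
    return total_cost / total_employees

# Hypothetical two-employer pool:
pool = [
    (5000, 100),  # employer with higher-than-average claims costs
    (3000, 300),  # employer with lower-than-average claims costs
]
rate = community_rate(pool)
print(rate)  # 3500.0: the high-cost employer's rate falls, the
             # low-cost employer's rises, so savings from one
             # employer's cost-control efforts spread across the pool
```

This is why, as the text notes, employers argue that community rating blunts their incentive to innovate: a reduction in one employer's own claims costs moves the pool average only slightly, so the savings accrue to the whole pool rather than to that employer.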
Employers are concerned that ERISA may be modified to permit states to directly or indirectly tax employer health plans. Employers contend that many states that are experiencing severe financial constraints, in part due to rapidly increasing Medicaid costs, may seek the authority to tax employers who already provide health coverage. Employer groups maintain that they do not necessarily oppose state programs to improve access but believe that states should fund their initiatives through generally applicable taxes clearly within the scope of their authority, even if politically unpopular. This would more fairly distribute the burden of providing health care coverage to the uninsured rather than create an incentive for employers to not offer coverage. To illustrate, employers note that even typical state taxes, such as premium taxes of 2 to 3 percent, may create significant costs as health care becomes an increasing share of total employee compensation. For example, these taxes would cost between $10 million and $15 million if applied to some Fortune 100 firms that spend more than $500 million on their employees’ health coverage. Moreover, employers note that states may more easily increase these taxes if they are not restrained by firms’ ability to easily exit the insured market. Furthermore, the costs that employers incur from state taxation may increase as more states turn to provider taxes as a financing source, especially after the Supreme Court’s recent ruling upholding states’ ability to impose comprehensive rate-setting schemes that essentially function like provider taxes. Employers have also expressed concerns that state solvency standards requiring the establishment of reserves will force them to restrict a portion of their plan assets that could be used for other purposes. This loss of control could amount to an increase in their overall costs, particularly when interest rates are high. 
Moreover, employers believe that solvency standards for self-funded plans are unnecessary because few plan failures have occurred, even in smaller firms with stop-loss coverage. Finally, employers—whether they self-fund or purchase insurance—fear that they may lose ERISA protections from potentially exorbitant damages stemming from disputes over denied health claims. Employers believe that ERISA’s requirement for an internal appeal adequately ensures that employees’ grievances are fairly represented, although some maintain that it is burdensome. In return for establishing an appeals system, employers receive immunity from what they perceive as tort system excesses. In particular, employers are not subject to punitive or compensatory damages resulting from inappropriately denied claims. Employers view liability for denied claims as a potentially expensive issue that could force them to discontinue their health plans if ERISA is amended. ERISA’s role in health care is poorly understood. In large part, confusion over ERISA stems from a lack of well-developed data and information to assess conflicting contentions about the potential costs and benefits of ERISA as it relates to health care. Indeed, both states and employers argue that they must play a more active role in managing the quality and costs of health care, yet their beliefs are largely based on strongly held philosophical arguments. Key elements of these arguments include the appropriate role for government, the appropriate distribution of health care costs, the primacy of the private market, and the division of responsibilities between federal and state governments. Due to these arguments, ERISA reform promises to be a challenging issue for the Congress. Department of Labor officials provided us with comments on a draft of this report. (See app. VI.) They pointed out that the perspectives of participants and beneficiaries were addressed only to a limited extent and that more information would be useful. 
However, our primary focus, as agreed to with our requesters, was the perspectives of states and employers on ERISA preemption. We acknowledge that the perspectives of participants and beneficiaries are also important, but they are more diffuse and difficult to categorize. Labor officials also stated that the Pension and Welfare Benefits Administration provides more technical assistance to participants, beneficiaries, and the general public, and we changed the report to include that information. Labor officials also provided technical comments, which we incorporated where appropriate. Please call me at (202) 512-7119 if you or your staffs have any questions about this report. This report was prepared under the direction of Mark V. Nadel, Associate Director of National and Public Health Issues. Other major contributors are listed in appendix VII. The Employee Retirement Income Security Act of 1974 (ERISA) and its implications for health plans are often misunderstood. Much of this confusion results from the act’s focus on pension plans rather than health plans and the ambiguity of some of the legislative language, which has been interpreted in a variety of lower federal and Supreme Court decisions. ERISA was passed primarily in response to concerns about the solvency and security of employer-based pension plans. These concerns arose after several well-publicized cases in which retirees did not receive anticipated retirement benefits. ERISA imposed minimum vesting requirements on employer pension plans to guarantee that employees receive a right to such benefits within a reasonable time after beginning their employment. It also established funding requirements, providing that employers reserve funds to ensure that they are available to pay those benefits when the employee retires, and established a system of plan termination insurance to provide for benefit payments even if an employer terminates a defined-benefit plan. 
In addition to pension plans, ERISA regulates “employee welfare benefit plans,” which include employer health plans. Therefore, although only a limited discussion occurred during the Congress’ initial consideration of ERISA regarding its impact on employee health plans, all health plans established and maintained by an employer are covered by ERISA. Because the Congress was principally concerned with pension plan reform when ERISA was enacted, ERISA established stricter requirements for pension plans than for welfare benefit plans. For example, health plans are not subject to the participation, vesting, and funding requirements that pensions are. However, health and other welfare benefit plans must comply with ERISA’s reporting and disclosure procedures, fiduciary standards, and claims appeal requirements. ERISA requires all covered employer-based health plans to file Forms 5500 (Annual Return/Report of Employee Benefit Plan) with the Department of Labor. These reports provide periodic information on plan participants and finances. ERISA also requires plans to give plan participants and beneficiaries a summary plan description (SPD). The SPD is the basic document that gives participants and beneficiaries the plan’s details and describes, in understandable terms, their rights, benefits, and responsibilities under the plan. A copy of the SPD and a statement of ERISA rights must be furnished to participants and beneficiaries within 90 days after participation begins; generally, the SPD must also be filed with the Department within 120 days after the plan becomes subject to the act. ERISA established fiduciary standards to protect employee benefit plan participants and beneficiaries from plan mismanagement. The act defines a fiduciary as anyone who exercises discretionary control or authority over the management of a plan or renders investment advice to a plan. 
Generally, these standards require fiduciaries to act with care, skill, prudence, and diligence in investing plan assets and to manage plan assets solely in the interest of plan participants and beneficiaries. Although health plans are not required to reserve sufficient funds to pay benefits as prescribed by ERISA’s funding standards for pension plans, and self-funded plans are not directly impacted by funding requirements states impose on insurers, plan fiduciaries are required to manage plan assets, including employee contributions, in the best interest of the participants and beneficiaries. ERISA’s administration and enforcement provisions describe the remedies available to participants and beneficiaries for violations of the act’s requirements. Welfare benefit plans covered under the law must have established written procedures for filing a claim, and the beneficiary must be informed of these procedures. When a claim is denied, employee benefit plans are required to provide participants and beneficiaries written notice setting forth the specific reason for the denial and to afford them a reasonable opportunity for a full and fair review by the fiduciary of the decision denying the claim. If the beneficiary disagrees with the final decision, ERISA allows him or her to sue in the federal courts. The current debate concerning ERISA and health benefit plans stems primarily from ERISA’s preemption clause. This provision makes it possible for employers to provide employee benefits largely free from potentially burdensome and conflicting state regulation. Because ERISA left regulating the insurance industry to the states, however, its impact achieved great significance only as the result of the growth of self-funded health plans. The relevant ERISA language has been recognized as among the most complex and confusing in the federal code. It includes three significant clauses: the preemption clause, the saving clause, and the deemer clause. 
The preemption clause provides that ERISA supersedes any and all state laws that “relate to” any employee benefit plan. The saving clause, consistent with long-standing national policy, provides nonetheless that ERISA will not be construed to exempt or relieve any person from any state law regulating insurance. Therefore, a state insurance law may relate to employee benefit plans but nonetheless not be preempted by ERISA. Finally, the deemer clause narrows the possible scope of the saving clause by providing that no employee benefit plan will be deemed an insurer or in the insurance business for the purpose of any state law purporting to regulate insurance. The result is to restrict the extent to which state insurance regulation can affect, or serve as a pretext for regulating, employee benefit plans. As discussed more fully below, the phrase “relate to” has been the source of much legal dispute in part because of the lack of a clear legislative record on congressional intent. Original versions of the legislation passed by the House and the Senate did not include this sweeping preemption language. Instead, both the House and Senate versions had preemption language that would have limited state regulation related only to the specific provisions of the respective bills. The conference report provides little guidance on interpreting the final language. Given the sparse discussion in the conference report, particularly with the evolution of employee benefits plans since ERISA’s passage, courts have played a major role in defining the scope and extent of ERISA preemption. Although many states have tried to narrow the scope of ERISA preemption to gain more flexibility in regulating employer health plans, Hawaii is the only state that has received an exemption from ERISA. This exemption, enacted in 1983, has allowed Hawaii to enforce a mandate requiring all employers to provide employees a standard health package and pay for 75 percent of the premium. 
However, congressional approval of Hawaii’s ERISA exemption was due in part to the fact that Hawaii had enacted comprehensive health care reform concurrently with the original federal passage of ERISA. The conference report explicitly stated that the Hawaii exemption was not to “be considered a precedent with respect to extending such amendment to any other state law,” and the Congress has not approved any state ERISA exemption requests since Hawaii’s. The original ERISA preemption language was sufficiently ambiguous that the courts have had to define its scope. To a large degree, these court cases have attempted to delineate how closely state laws must relate to employer health plans to be preempted. Other major court decisions have addressed the ERISA appeals requirements for denied claims and the ability of employers to reduce benefits that are covered in their health plan. In its seminal case, Shaw v. Delta Airlines, a unanimous Supreme Court relied on the dictionary meaning of “relate” and ERISA’s legislative history to hold that a law relates to an employee benefit plan “if it has a connection with or reference to such a plan.” However, the Court indicated that “[s]ome state actions may affect employee benefit plans in too tenuous, remote, or peripheral a manner to warrant a finding that the law ’relates to’ the plan.” By way of example, the Court cited a 1979 case upholding garnishment of a spouse’s income to pay child support but expressly refused to indicate where to draw the line between state actions that are preempted and those that are not. Two years later, in Metropolitan Life Ins. Co. v. Massachusetts, a unanimous Supreme Court identified a crucial distinction under ERISA between the treatment afforded employee benefit plans that are self-funded and those that are insured. 
At issue in the Metropolitan case was the effect of ERISA on a so-called mandatory benefit law under which Massachusetts required health insurance in the state to include certain minimum coverage for mental illness. Noting the dearth of discussion about the saving clause in the legislative history, the Court observed that the wording of the saving clause did not change during the conference, although its prominence was certainly enhanced when the preemption clause was significantly broadened. The Court went on to construe the saving clause broadly to permit states to enforce such laws against insurers even though this would impact the employee benefit plans that they insure. The Court noted that its holding resulted in a distinction between insured and self-funded plans, with the former being indirectly subject to state regulation, such as mandated benefit laws, and the latter escaping such regulation. This distinction provides an incentive for employers, particularly those operating in more than one state, to self-fund because it frees them from state laws that would otherwise affect them. Two years later, in Pilot Life Ins. Co. v. Dedeaux, a unanimous Supreme Court explained that although the saving clause was to be interpreted broadly, it could not save from preemption state laws that conflict with substantive provisions of ERISA. The effect was that common law tort and contract causes of action seeking damages for improper processing of an employee benefit plan claim are preempted. In Pilot Life, an employee sought to recover damages from the insurer that had, on the behalf of his employer, denied him benefits under an employee welfare benefit plan. The employee based his claims on state law, and the Court held that the civil enforcement provisions of ERISA were the exclusive means for employees to seek such recoveries. Because the state law claims conflicted with ERISA’s civil enforcement provisions, they were not saved from preemption by the saving clause. 
Relying on Pilot Life, courts have held that ERISA protects third parties in addition to insurers from tort liability and other state claims when they are performing services on behalf of an employee benefit plan. For example, in Corcoran v. United Healthcare, Inc., a utilization control organization disagreed with a pregnant employee’s attending physician and indicated that, if she remained in the hospital as her physician advised, her health plan would not cover the costs. Although the employee could have remained in the hospital at her expense, she left the hospital to receive nursing care at home, which the utilization control organization indicated would be covered. While the employee was at home, during a period when the nurse was not present, the fetus went into distress and died. In response, the employee brought a wrongful death action against the utilization control organization. While mindful that its interpretation left a gap in the remedies for protecting plan participants and beneficiaries, the court held that the case was controlled by Pilot Life, and the wrongful death claim was preempted. The court noted that fundamental changes in employee benefit plans since ERISA’s passage may indicate a need to reevaluate protections but acknowledged that such a task falls to the Congress and not the courts. Most recently, the Supreme Court issued its decision in New York State Conference of Blue Cross & Blue Shield Plans v. Travelers Ins. Co. This decision did not delineate fully between state actions that are preempted and those that are not but indicated that courts may approve state actions that do not conflict with ERISA’s underlying objectives or impact too greatly on employee benefit plans. The case reached the Supreme Court as a result of a conflict between federal circuits over the extent to which states can impose hospital surcharges on employee benefit plans. 
The Third Circuit had upheld a New Jersey hospital tax used to compensate hospitals with higher shares of indigent and Medicaid patients; the Second Circuit had rejected New York’s system of imposing hospital surcharges on the basis of whether the care was financed by commercial insurers, HMOs, or Blue Cross & Blue Shield plans as a violation of ERISA. Reversing the Second Circuit in a unanimous decision, the Supreme Court elaborated upon its holding in Shaw that “relate to” means “has a connection with or reference to” and held that New York’s hospital surcharge system was not preempted by ERISA. Because New York’s law was not directed specifically at employee benefit plans, the Court concluded that its hospital rate system had no reference to ERISA. Acknowledging that “connection with” was no more helpful than “relate to” in defining ERISA preemption, the Court reviewed the legislative history and ERISA objectives. The Court found that ERISA’s basic thrust was to permit the nationally uniform administration of employee benefit plans by eliminating conflicting state regulation and reiterated that state laws mandating benefits or otherwise directly regulating the content or administration of plans are preempted. However, the Court distinguished New York’s system from these preempted state laws because the New York law’s purpose was to assist Blue Cross & Blue Shield rather than to regulate the content or administration of employee benefit plans. While conceding that economic impacts alone could in some cases sufficiently trigger ERISA preemption, the Court held that New York’s hospital surcharges had only an indirect economic influence on employee benefit plans and therefore were not preempted. In the wake of the Court’s ruling in Travelers, states are likely to perceive that they have more options and greater flexibility than previously recognized. 
The case suggests that state laws affecting employee benefit plans will have to be judged individually on each instance’s facts and circumstances. The nature and magnitude of the impact on employee benefit plans of each state law at issue will determine the outcome. Where the state law does not conflict with ERISA objectives, it should survive legal challenge. The Department of Labor and the Internal Revenue Service (IRS) have primary responsibility for enforcing ERISA requirements. Labor’s Pension and Welfare Benefits Administration (PWBA) enforces ERISA’s fiduciary requirements, which ensure that private pension and welfare benefit plans operate in the best interests of plan participants and beneficiaries, and reporting and disclosure requirements, which ensure that plans provide financial and other information to the federal government and plan participants and beneficiaries. IRS enforces ERISA’s participation, vesting, and funding requirements for pension plans. Since December 1986, PWBA’s ERISA enforcement strategy has focused on investigating “significant issue” cases with a high potential for fiduciary violations or other imprudent management practices. Although PWBA has also emphasized investigations of multiple employer welfare arrangements (MEWA), it has focused less on single employers that self-fund their health benefit plans. The goal of PWBA’s enforcement strategy is to achieve the greatest possible ERISA compliance by using resources effectively. PWBA believes that investigations of significant issue cases have a broader impact than investigations of individual cases because they focus on financial institutions and service providers that usually serve many plans and many participants. Thus, when a fiduciary violation by a financial institution or service provider is corrected, dollar recoveries and the number of plans and participants involved are typically larger than when a violation by an individual plan is corrected. 
As of May 1994, PWBA had about 400 enforcement staff working on investigations of employee benefit plans (both pension and welfare benefit plans) covered by ERISA. At that time, over 720,000 private pension plans and 4.5 million welfare benefit plans were subject to title I of ERISA and Labor regulation. These plans had about $2.5 trillion in assets and covered about 200 million participants. PWBA allocates at least 50 percent of its enforcement resources to significant issue cases, with no less than 20 percent spent on either financial institution or service provider cases. PWBA devotes the remaining resources to investigating general cases. PWBA officials could not estimate the number of enforcement staff positions dedicated specifically to health benefit plan activities. PWBA’s 1994 guidance to its 10 area offices required them to balance types and sizes of plans selected for general investigations, with a general rule that no more than 5 percent of all such cases involve plans with fewer than 50 participants. PWBA uses several methods to identify financial institutions, service providers, and pension and welfare plans for investigation. The methods include referrals from IRS and other agencies, complaints from participants and beneficiaries, manual review of financial and other information on plans’ annual Form 5500 series reports, spin-offs from other investigations, special area office projects, and computer targeting. The computer targeting programs search automated Form 5500 series report information for characteristics that PWBA believes indicate a high potential for ERISA violations. Generally, the targeting programs are used to identify pension and welfare plans for investigation, although some programs can be used to identify financial institutions and service providers. In addition, the Department told us that PWBA provides technical assistance to beneficiaries and the general public. 
For example, PWBA may determine that an individual contacting the Administration is inappropriately being denied health benefits to which he or she is entitled and has attempted to obtain. In these situations, PWBA may intervene to assist the individual in obtaining coverage or payment of a specific benefit. PWBA also ensures compliance with provisions of the Consolidated Omnibus Budget Reconciliation Act of 1985 that provide for the continuation of group health coverage for employees and family members whose coverage would otherwise be discontinued in cases of terminated employment, death of an employee, or divorce from a covered employee. PWBA officials told us that although they have been concerned with single employers that offer self-funded health plans for several years, they have placed more emphasis on investigating MEWAs. A MEWA is an ERISA welfare benefit plan or other arrangement established or maintained to provide benefit coverage to the employees of two or more employers. MEWA promoters often represent to employers and state regulators that MEWAs are employee benefit plans covered by ERISA and as such are exempt from state insurance regulation under ERISA’s preemption provision. By avoiding state insurance reserve, contribution, and other requirements applicable to insurance companies, MEWAs can often market insurance coverage at substantially lower rates. This makes MEWAs an attractive alternative for small businesses that find it difficult to get affordable employee health care coverage. In practice, some MEWAs have been unable to pay claims due to insufficient funding and inadequate reserves, or, in the worst situations, they were operated by people who drained a MEWA’s assets through excessive fees and embezzlement. In 1992, we reported that these problems were widespread. 
According to state insurance officials, between 1988 and 1990, MEWAs left at least 398,000 participants and their beneficiaries with over $123 million in unpaid claims and many other participants without insurance. Further, over 600 MEWAs failed to comply with state insurance laws, and some violated criminal statutes. Recognizing that it was both appropriate and necessary for states to establish, apply, and enforce state insurance laws concerning MEWAs, the Congress amended ERISA in 1983 to provide an exemption to ERISA’s broad preemption provision for regulating MEWAs under state insurance laws. Since the late 1980s, Labor has acted to alleviate MEWA problems. In May 1990, the Secretary of Labor announced a program to improve MEWA enforcement efforts. The program included distributing to each state, on a quarterly basis, copies of Labor’s advisory opinions; training state and federal officials; sharing information on investigations; and developing technical assistance material and reviewing information reported by plans to the IRS to determine the feasibility of providing the states a list of MEWAs. Labor also increased its investigations of MEWAs, from 30 in December 1989 to 86 in September 1991. Recently, PWBA officials told us that about 40 percent of their current health benefit investigations involved MEWAs. PWBA data on MEWA investigations showed a total of 91 MEWA investigations pending (70 civil cases and 21 criminal cases), and PWBA had recovered from MEWA cases about $3.9 million in prohibited transactions reversed and plan assets restored in fiscal year 1994. This total has declined considerably from MEWA investigative efforts in 1993, when PWBA recovered about $6.3 million. Statistics on PWBA’s investigations of single-employer self-funded health plans are unavailable. When PWBA identifies violations, ERISA authorizes the Department of Labor to assess penalties against the violators. 
Labor may assess a penalty of up to $1,000 per day against a plan administrator who fails or refuses to file a Form 5500 series report or whose report is rejected for lack of material information. When PWBA finds that welfare or pension plans that do not qualify for tax exemption have violated ERISA’s prohibited transaction requirements, Labor may assess parties in interest (such as fiduciaries, employees, persons who provide plan services, employers or employee organizations whose members are covered by a plan, or others) a penalty of up to 5 percent of the prohibited transaction and up to 100 percent if the transaction is not corrected within 90 days. Labor must, with certain exceptions, assess a penalty against a fiduciary or any person who knowingly participated in a fiduciary breach that occurred or continued after December 19, 1989. The fiduciary penalty is equal to 20 percent of the recovery amount agreed to in a settlement with Labor or contained in a court order. In addition, Labor may pursue criminal charges for willful violations of ERISA reporting and disclosure provisions. Upon conviction, a person can be fined up to $5,000, imprisoned for up to 1 year, or both; in the case of such a violation by a corporate entity, the fine may not exceed $100,000. The Department of Labor obtains information on the number and characteristics of self-funded health plans from the annual Form 5500 returns that employee benefit plans file with the IRS. However, PWBA officials told us that the Forms 5500 are a poor source of data on self-funded plans because they do not clearly define self-funded plans and the data contain many errors. Further, Labor officials question the accuracy of other sources of information on the number of self-funded plans. The McCarran-Ferguson Act, passed in 1945, established a statutory framework whereby responsibility for regulating the insurance industry was left largely to the states. 
ERISA’s preemption provision is consistent with this arrangement. The Supreme Court has recognized, however, that this effectively creates a dual system, with states regulating fully insured plans indirectly through their regulation of insurance but not self-funded plans. State insurance departments use different enforcement activities to regulate the insurance industry than the Department of Labor uses to enforce ERISA’s requirements on self-funded employer health benefit plans. The major responsibilities of state insurance departments typically include licensing insurance companies and the agents who sell insurance to ensure that companies are financially sound and reputable and that agents are qualified; setting standards for and monitoring the financial operations of insurers to determine whether they have adequate reserves to pay policyholders’ claims; reviewing and approving rates to ensure that they are both reasonable for consumers and sufficient to maintain solvency of insurance companies; reviewing and approving insurance policies to ensure that they are not vague or misleading and meet state requirements, such as mandatory benefit provisions; and monitoring insurers’ actions to ensure that they are not engaging in unfair business practices or otherwise taking advantage of consumers and assisting consumers by investigating their complaints, answering questions, and conducting educational programs. As previously noted, under ERISA, only the fiduciary and reporting and disclosure standards apply to welfare benefit plans, including health plans. The participation, vesting, and funding requirements apply only to pension plans. Major differences in Labor’s and states’ enforcement activities are in financial regulation of employer health plans, reporting and disclosure requirements, and the handling of consumer complaints. 
The principal responsibility of all state insurance departments is to protect consumers by ensuring that insurance companies comply with minimum solvency standards. ERISA, however, does not require health benefit plans to satisfy any solvency standards. According to the National Association of Insurance Commissioners (NAIC), ERISA does not set standards to regulate the continued solvency of health plans once they begin operation. As a result, NAIC believes that employees covered under self-funded health plans are vulnerable to plan mismanagement and a plan's intentional abuse of its discretion because state solvency standards are preempted for self-funded plans. NAIC also points out that while ERISA provides termination insurance for defined benefit pension plans, no similar insurance exists for health plans. Generally, insured health plans are protected by state guaranty funds, but single-employer, self-funded health plans are not. NAIC states that, as a consequence, participants and beneficiaries of self-funded health plans have few avenues of redress against an insolvent plan other than to join the bankrupt employer's other creditors in pursuing the firm's remaining assets. Further, NAIC contends that participants in an insured single-employer health plan enjoy the benefits of solvency oversight and insolvency protection. NAIC believes this disparity likely is the unintended consequence of ERISA's failure to regulate the content of employer benefit plans and of the fact that ERISA exempts self-funded plans from state purview. NAIC concluded that if an employer offers its employees an insured health benefit plan, the plan's contents are subject to the requirements of state law, including solvency requirements. However, a self-funded plan is not subject to these requirements.
ERISA does not require administrators of single-employer self-funded health plans to submit plan disclosure documents to any administrative agency for review, according to NAIC. Moreover, ERISA allows the plan administrator to distribute a summary of material modifications to participants and beneficiaries as late as 210 days after the end of the plan year in which the changes were adopted. In contrast, according to the NAIC, state insurance laws typically require that single-employer insured health plans submit plan policy forms to the state insurance departments for review and approval. Most states also require insured health plans to promptly notify participants and beneficiaries of changes to the plan. Another significant difference between state insurance departments' enforcement activities and Labor's is in the handling of consumer complaints. Labor's approach to assisting individual participants with health plan complaints involves informing participants and beneficiaries of their rights under ERISA and providing general information about how the law may apply to their situations. Generally, Labor's investigations of employer health plans are broader in scope than individual consumer complaints and involve financial institutions and service providers. In contrast, state insurance departments actively investigate consumers' complaints of high-pressure sales practices, improperly denied claims, unfair discrimination, and improper denial of coverage. Also, most states perform market conduct exams to review the marketing, underwriting, rating, and claims payment practices of health insurers. In addition, according to NAIC, ERISA does not ensure participants and beneficiaries in health benefit plans an unbiased and independent review process.
Although ERISA requires that all single-employer health plans, whether self-funded or insured, provide a mechanism to permit participants and beneficiaries to appeal a plan's denial of a claim, the review may be based upon the written record and conducted by the same plan administrator who denied the claim. In comparison, NAIC states that participants and beneficiaries of single-employer insured health plans have access to state insurance departments, from which they can obtain an independent and informal review of their complaints. For example, NAIC reported that in the first 9 months of 1993 the Wisconsin insurance department handled 2,438 complaints relating to insured single-employer health plans and recovered $485,580. Although a variety of federal government data sources describe components of the U.S. health care market, no database accurately portrays the number of individuals enrolled in ERISA plans or the number of individuals enrolled in self-funded plans. Therefore, we estimated these numbers using several data sources. This required (1) estimating the number of individuals with employer-based health coverage, (2) estimating the number of individuals enrolled in ERISA health plans (subtracting coverage provided by government and church employers from total employer-based coverage), and (3) incorporating data from other sources to estimate how many individuals are enrolled in self-funded ERISA health plans. Thus, on the basis of different assumptions, we estimate that between 106 and 114 million Americans are enrolled in ERISA health plans. Of these, between 41 million and 47 million individuals, representing 16 to 18 percent of the U.S. population, are enrolled in self-funded health plans. Our estimate is that 44 million individuals, 17 percent of the population, are enrolled in self-funded ERISA plans.
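The population shares just quoted can be reproduced with a one-line check. The 258-million figure for the 1993 U.S. population used below is our own approximation for illustration, not a number taken from this report.

```python
# Check of the population shares quoted above (enrollment in millions).
# Assumption: U.S. population of about 258 million in 1993 (our figure).
us_population = 258.0
shares = {enrollees: enrollees / us_population for enrollees in (41.0, 44.0, 47.0)}
# 41 million -> about 16 percent; 44 million -> about 17 percent;
# 47 million -> about 18 percent
```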
The Bureau of the Census’ Current Population Survey (CPS) provides data on the source of health insurance coverage, or lack thereof, for all Americans. The survey asks questions about health insurance coverage at any time during the previous year. For the March 1994 survey, Census scientifically selected about 57,000 households and weighted the results to represent the whole nation. As shown in table III.1, most Americans receive coverage through their employment or from government programs like Medicare and Medicaid. Individuals (in millions) Total (U.S. population) Because government- and church-sponsored employee plans are not ERISA plans, the number of individuals enrolled in ERISA plans is a subset of those who have employer-based coverage. Therefore, to estimate the number of individuals enrolled in ERISA plans, we used the number of individuals receiving health coverage through private-sector employers and government or church employers. Of the 140 million individuals with employer health coverage in 1993, the CPS reported that nearly 73 million individuals worked for a private employer, 16 million worked for a government employer, and 51 million did not indicate a category of employment. Most of these 51 million unclassified individuals (82 percent) were children or spouses of workers with employer coverage. We allocated these individuals to the related worker’s employment category. The remaining 9 million individuals were simply allocated proportionately to either private or government coverage. As shown in table III.2, we estimate that nearly 114 million individuals received health coverage through a private-sector employer. Initial CPS data (before allocation of unclassified individuals) (82 percent) (18 percent) Estimates (after allocation of unclassified individuals) (81 percent) (19 percent) In some cases, individuals may have been employed under one category but received coverage under another category. 
For example, a person may work for a private firm but receive family coverage through a spouse who is employed by the government. If government benefits are more generous, then it is likely that many spouses may elect the government-sponsored coverage rather than their own private-sector coverage. However, the CPS data do not allow us to determine when this situation occurs. Comparing the CPS data with other sources, our estimate of employer-based health coverage offered by governments is lower than expected. On the basis of enrollment in the Federal Employees Health Benefits Program (FEHBP) and employment by state and local governments, the CPS data may underestimate government-sponsored health coverage by as much as 8 million individuals. If our estimate of enrollment in government-sponsored health plans is low, then our number of individuals covered through private-sector health plans is high. In our final estimates for self-funded health plans, therefore, we tested the sensitivity of our analysis to different assumptions of employer-based coverage offered by private-sector and government employers. In addition to health plans offered by government employers, health plans sponsored by churches are also exempt from ERISA. To estimate the number of participants in church-sponsored health plans, we analyzed the CPS data to identify about 250,000 individuals who were clergy or religious workers and who received employer-based health coverage. We estimate that about 500,000 individuals and dependents receive employer coverage through church-sponsored health plans. However, this may be an underestimate because many church workers would be classified in administrative or other occupations and cannot be separately identified as church employees. 
Because of the relatively small number of people receiving employer coverage through church-sponsored health plans, this number is unlikely to significantly affect our final estimates of the number of individuals in ERISA self-funded plans. Thus, of the nearly 140 million individuals that the CPS reported receive employer-based health coverage, we estimate that about 114 million are enrolled in ERISA health plans, as summarized in table III.3 (number of participants in millions). Using a similar approach with the March 1990 CPS, we estimate that of the 144 million individuals with employer-based health coverage in 1989, approximately 117 million were in ERISA health plans. Employer benefits surveys that BLS has produced indicate the percentage of plan participants with employer-based health care coverage who are enrolled in self-funded plans. A difficulty with these data, however, is that they are collected in alternating years for establishments with fewer than 100 employees and for establishments with 100 or more employees. To generate a rate for the number of individuals enrolled in self-funded plans for establishments of all sizes, we blended the results of survey years 1992 and 1993, as shown in table III.4, which reports the percentage of participants in self-funded plans by establishment size (number of employees). The overall percentage of participants enrolled in self-funded plans, 39 percent, is the weighted average of the BLS data representing all full-time employees with health coverage. To examine the trend in enrollment in self-funded plans, we also calculated a blended self-funded rate from BLS' surveys for 1989 and 1990. On the basis of this calculation, the percentage of participants enrolled in self-funded health plans in 1989 and 1990 was 33 percent. (See table III.5.)
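The blending described above is an employment-weighted average. The establishment-size rates and weights in the sketch below are hypothetical stand-ins, not the actual BLS figures from table III.4; they are chosen only so the example reproduces the 39-percent result.

```python
# Employment-weighted average of self-funded enrollment rates across
# establishment sizes. The rates and weights here are hypothetical,
# chosen to illustrate the blending; the BLS values are in table III.4.
groups = [
    # (establishment size, self-funded rate, share of covered employees)
    ("fewer than 100 employees", 0.20, 0.35),
    ("100 or more employees",    0.49, 0.65),
]
blended_rate = sum(rate * weight for _, rate, weight in groups)
# blended_rate is about 0.39, matching the overall figure in the text
```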
To verify the accuracy of our results, we compared the BLS survey findings with other potential sources of data on the prevalence of self-funding. These included reports that health plans are required to submit to the Department of Labor as well as private employer benefit surveys. However, in most cases, these sources were incomplete or error prone. Few sources provide data for employers of all sizes, including small employers, which are less likely to self-fund. All employers with at least 100 employees that provide employee benefits are required to report to the Department of Labor, using a Form 5500. Unfortunately, the Department of Labor acknowledges that the data from this form are of limited value in estimating the number of self-funded health plan participants because the form is primarily designed for pension plans, the data are not reported consistently, and the data may be prone to filer errors and errors introduced in data processing. Despite these limitations, an analysis of the Form 5500 filings by Mathematica Policy Research, Inc., for the Congressional Research Service estimated that, for employers with more than 100 employees in 1991, 42 percent of participants were in fully insured plans, 26 percent were in partly self-funded plans, and 32 percent were in fully self-funded plans. Mathematica excluded firms with 100 or fewer employees because the data were too incomplete. A 1994 survey in 10 states conducted by Rand, Inc., for the Robert Wood Johnson Foundation and the Department of Labor found results very similar to the BLS survey we used. In the 10 states it surveyed, Rand found that 41 percent of participants were enrolled in self-funded health plans. Like the BLS survey, these results included health plans of all sizes, including those offered by employers with fewer than 100 employees. However, the survey is limited to the 10 states it covered.
The National Center for Health Statistics is currently fielding a similar survey nationwide; the results of its employer benefit survey are expected in fall 1995. Finally, several private employer benefits consulting firms have estimated the proportion of health plans offered by employers in different size categories that are self-funded. In 1993, Foster Higgins reported that, among employers with more than 500 employees, 64 percent of indemnity plans were self-funded and 62 percent of preferred provider organization (PPO) plans were self-funded. For employers with fewer than 500 employees, only 17 percent of indemnity plans and 4 percent of PPO plans were self-funded. Similarly, KPMG Peat Marwick's 1994 survey of employers with at least 200 employees found that 62 percent of conventional (indemnity) plans, 63 percent of PPOs, and 58 percent of point-of-service plans were self-funded. Peat Marwick did not survey firms with fewer than 200 employees. In both cases, the surveys reported only the percentage of plans that are self-funded, not the percentage of individuals enrolled in self-funded health plans. Because the BLS survey is the only one of these sources that reports national percentages of participants in self-funded health plans in firms of all sizes, we used it in our final estimates of the number of Americans enrolled in self-funded health plans. Since the Rand survey also included all firm sizes (although it covered only 10 states), we used its slightly higher percentage of participants in self-funded health plans to examine the sensitivity of our results. We estimate that about 44 million participants (17 percent of the U.S. population) were enrolled in self-funded ERISA health plans in 1993. This estimate is calculated by multiplying the percentage of participants in self-funded plans in 1992-93 (39 percent) by the number of participants in ERISA health plans in 1993 (113.5 million).
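That calculation is a single multiplication; the sketch below also derives the population share, using a 1993 U.S. population of about 258 million as our own approximation rather than a figure from this report.

```python
# Final estimate: blended self-funded rate times ERISA enrollment.
erisa_participants = 113.5   # millions, CPS-based estimate for 1993
self_funded_rate = 0.39      # blended BLS rate for 1992-93
us_population = 258.0        # millions, approximate 1993 figure (our assumption)

self_funded = self_funded_rate * erisa_participants   # about 44 million
population_share = self_funded / us_population        # about 17 percent
```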
Using the same approach, we estimate that approximately 39 million participants were enrolled in self-funded ERISA health plans in 1989. Because of uncertainties in the number of enrollees in employer health plans sponsored by governments or churches, we tested the sensitivity of our estimate using a lower number of participants in ERISA health plans. By estimating that 106 million individuals participated in ERISA health plans (rather than 114 million), we calculated approximately 41 million enrollees in self-funded ERISA health plans in 1993 rather than 44 million. This represents about 16 percent of the U.S. population. We also tested our estimates using a higher percentage of participants in self-funded plans. On the basis of the 10-state survey conducted by Rand, we assumed that 41 percent of participants were in self-funded health plans (rather than the 39 percent estimated by BLS). Using this assumption, we estimate 47 million enrollees in self-funded ERISA health plans (nearly 18 percent of the population) rather than 44 million. One of the most direct and quantifiable advantages that firms receive from self-funding their health plans is their exemption from state premium taxes and some other assessments paid by health insurers and HMOs. Most insurers and HMOs pass on the costs of these taxes to their customers through higher premiums. However, their ability to do so, as well as the overall impact on an employer, depends on factors such as the competitiveness of the market, the size of the employer, and the insurer's marketing strategy. As shown in table IV.1, premium taxes on commercial health insurers range from less than 1 percent to over 4 percent, with most states having premium tax rates of about 2 or 3 percent. Many states also provide exemptions or discounted tax rates for Blue Cross & Blue Shield plans, HMOs, and locally based insurers.
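To put the premium-tax exemption in dollar terms, here is a rough, hypothetical illustration: the $1 million premium volume is invented, and the 2- and 3-percent rates reflect the typical state range cited above.

```python
# Hypothetical premium-tax saving from self-funding. The premium volume
# is invented for illustration; 2-3 percent is the typical state range.
annual_premiums = 1_000_000          # hypothetical fully insured premiums
tax_rate_low, tax_rate_high = 2, 3   # percent

saving_low = annual_premiums * tax_rate_low // 100    # $20,000 per year
saving_high = annual_premiums * tax_rate_high // 100  # $30,000 per year
```

A self-funded employer avoids this passed-through cost entirely, which is one reason the exemption is among the most quantifiable advantages of self-funding.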
Table IV.1 also lists separate rates for Blue Cross & Blue Shield plans (for example, 0.75 percent in one state); its footnotes note that certain states instead tax subscriber fees, assess an insurance commission maintenance fee of no more than 0.1 percent of premium (with a $300 minimum), add a fee of up to 0.125 percent of receipts for Department of Insurance operations, or charge HMOs a 7.9-percent franchise tax. Health insurers are also liable for paying other miscellaneous assessments collected by the states. For instance, all states maintain guaranty funds to pay outstanding claims in cases of an insurer's failure. Every state except New York retroactively assesses insurers to finance these guaranty funds. That is, in years that moneys are drawn from the guaranty funds because of an insurance failure, insurers are assessed a fee on the basis of their market share within the state to pay for the guaranty fund expenses. States cap the maximum rate insurers may be assessed in a year, typically at about 2 percent of gross premiums. However, except in a few states where a large insurer failed, actual assessments are much lower than the maximum rate. For example, in 1990, actual assessments against life and health insurers for guaranty funds averaged about 0.15 percent. That year, 20 states (including Puerto Rico) either made no assessments or made refunds for surpluses in life and health insurance guaranty funds; guaranty fund assessments exceeded 1 percent of premiums in only three states. Another significant cost for health plans in some states results from taxes assessed on providers, such as hospitals. A 1994 survey by the American Public Welfare Association found that about half of the states have adopted provider revenue taxes. Many states have tax rates ranging from 1 to 7 percent, although New York imposes a 13-percent tax on hospital services paid by commercial insurers. In many states, these taxes are popular because they can be used to receive federal matching funds for Medicaid.
In other states, such as Massachusetts, these taxes are redistributed to hospitals to reimburse them for uncompensated care costs. Finally, New York has used these taxes for hospital rate-setting programs and to provide a competitive advantage to Empire Blue Cross & Blue Shield, because that insurer has maintained a policy of open enrollment and has thereby insured higher-risk, more costly individuals. Even though these taxes are imposed on providers rather than directly on health plans, they have an indirect effect on health plan costs. Hospital services provided to enrollees in self-funded health plans, as well as those provided to fully insured health plan enrollees, have been taxed. Because of this indirect effect on health plans, several states, including New York and New Jersey, have had their provider taxes challenged under ERISA. However, the Supreme Court ruled in the Travelers case that the New York system of hospital surcharges, which is essentially a provider tax, has too indirect an effect on health plans to be preempted by ERISA. Thus, pending further litigation, states appear to have the ability to impose general provider taxes without violating ERISA. State governments require, or mandate, that companies selling health insurance cover specified health services or the services provided by specified providers. Mandates are typically narrowly defined provisions and may be applied to commercial insurance companies, Blue Cross & Blue Shield plans, and HMOs. Mandates are often classified as treatment mandates, provider mandates, and special-population mandates. Treatment mandates require insurance companies to cover treatment for specific conditions, such as alcoholism and mental health problems, or for specific procedures, such as in vitro fertilization services. Provider mandates require payment for covered services from specific types of providers, such as chiropractors, psychologists, or optometrists.
Special-population mandates require insurance coverage for defined groups, such as newborns, adopted children, or handicapped dependents. Mandated benefits are often debated, the debate usually centering on the value of mandated benefits relative to their cost. Proponents of mandates, including many consumer groups and health care providers, argue that they may (1) provide equal access to necessary services; (2) pay for themselves in the long run, especially preventive care services; (3) make certain benefits available to those who are likely to become uninsured or uninsurable; and (4) prevent some insurers from experiencing substantial adverse selection, that is, attracting individuals who have costly health conditions or are more likely to incur high health costs. Opponents, including many business groups and insurers, argue that mandates may (1) raise total health care costs and thus premiums, (2) cause employers to self-fund or discontinue their health plans, (3) interfere in the voluntary contract between insurers and employers, (4) result from political pressure from special interest groups and providers, and (5) create administrative burdens for insurers or employers operating in many states. The number and type of benefits mandates vary by state. Although analyses have shown that the total number of mandates adopted by the states exceeds 700, this overstates the scope of mandated benefits because many states have identical or similar requirements. States most frequently mandate coverage for preventive treatments like mammograms and pap smears or for treatment of mental illness or alcohol and substance abuse. In addition, states often require coverage for more common alternative providers like chiropractors and optometrists and for special populations like newborns and the handicapped. 
A small number of states require coverage for more specific conditions or treatments, such as congenital defects like cleft palate or hair loss due to specific medical conditions or treatments. Table V.1 shows the number of states with specific mandated benefits as identified by NAIC and Blue Cross & Blue Shield of America; the mandates listed range from scalp hair prostheses (for alopecia areata) to continuation of coverage for dependents. (For provider and special-population mandates, the source is State Legislative Health Care and Insurance Issues: 1994 Survey of Plans (Dec. 1994).) States typically mandate that insurers cover specific benefits in all plans sold, whereas some states merely mandate that insurers offer specific benefits, leaving the insurer free to also offer plans without them. In some cases, the mandates are limited to particular plans, such as HMOs or group insurance plans. In addition to treatment, provider, and special-population mandates, states have increasingly considered any-willing-provider laws that require managed care plans to accept into their networks health care providers that meet the terms and standards of the plan. According to the Group Health Association of America, of the states with any-willing-provider laws, 10 have laws that apply to all providers, 14 have laws that apply to pharmacists, 3 have laws that apply to physicians, and 4 have laws that apply to nonphysician providers. However, in most states the laws are limited because they do not apply to HMOs or apply only to particular types of managed care plans. The limited research on state benefit mandates indicates that they increase health care claims costs. The effect of mandates on costs is not uniform, however, since coverage for select benefits like mental health and substance abuse often accounts for a large percentage of increased claims costs.
Determining the effect of a specific health insurance mandate on premiums can be difficult, in part because it is hard to isolate the mandate's contribution to the overall increase in health claims and, in turn, to health insurance premiums. Also, a mandate's cost effect on an employer can vary sharply depending on the demographics and health needs of the employee population. In addition, it is difficult to assess the true impact of mandates when a large percentage of employer plans, both fully insured and self-funded, offer benefits similar to the more costly state mandates. Studies in several states have found that benefit mandates generally increase claims costs by 5 to 22 percent. These analyses have aggregated insurance company claims data and determined payments for mandated benefits as a percentage of total medical benefits paid. Table V.2 summarizes various state studies that have estimated the increased claims costs from mandated benefits. However, the results of these studies should not necessarily be generalized to all states, even to states with identical mandates, because claims costs in each state are influenced by many factors that are difficult to account for, such as provider charges, practice patterns, and policy deductibles and copayment rates. While the limited research generally shows that state benefit mandates increase costs, it suggests that some mandates have a greater impact on claims than others. Studies have shown that mental health, substance abuse, dental care, and maternity and neonatal care mandates are among the highest cost mandates. For example, a summary of five state studies estimated that mental health benefits added between 2.6 and 6.5 percent to health care claims. Mandates determined not to add significantly to health insurance costs include services for in vitro fertilization, acupuncture, and cleft palate, as well as services provided by chiropractors and home health nurses.
It is these low-cost mandates, however, that employers often cite as examples of the wasteful added expense mandates cause them. Whereas the added claims costs caused by mandates may affect small businesses' decisions on whether to offer health care coverage, larger businesses, many of which offer more comprehensive benefits regardless of mandates, tend to express concern about the added compliance costs resulting from varying mandates and the limited flexibility to design their own benefit packages. However, these costs would be even more difficult to measure than the claims costs incurred by mandates. In addition to the difficulty of generalizing studies of benefit mandates to all states, the methodological problems inherent in existing studies obscure the full cost impact of benefit mandates. These studies clearly show that benefit mandates increase claims costs to some extent, in part because the availability of insurance coverage for specific treatments will undoubtedly result in claims for those treatments. The extent of the overall increase in health claims and the attendant impact on health insurance premiums is unclear, however. Some of these methodological problems follow:

- Many mandate studies were conducted in the 1980s and may not reflect the trend of more individuals being enrolled in managed care plans that attempt to manage utilization of health care services.
- Studies may understate actual utilization due to a mandate because other related health services, although not themselves mandated, may also be provided. For example, some analysts have noted that conditions such as alcoholism may cause other medical problems such as malnutrition. Treatment for these related conditions would not be provided under the mandate and, therefore, costs related to the treatment of alcoholism may be understated.
- Studies may also overstate the increase in use of services. This could occur if services provided under the mandate substitute for services formerly provided through traditional coverage. In addition, many health plans may have offered benefits similar to those mandated before a mandate's enactment. Thus, attributing the use of services to the mandate would overstate its effect.
- Studies may not capture overall lower health care utilization if coverage is required for services delivered by a lower cost provider.
- Claims-based studies rely on accurate treatment coding by the provider and an insurer's ability to isolate claims through its claims-paying system. This may be difficult, especially in the case of multiple diagnoses.
- Studies traditionally focus on the costs of mandates, not on the benefits. Although a cost-benefit analysis would be difficult, if not impossible, to undertake, a truly rigorous analysis of state mandates would track health care utilization of a specific population over time to determine whether medical interventions based on state mandates ultimately improved overall health status and avoided more costly medical interventions later on.

In addition to these methodological problems, mandate studies have not conclusively shown that benefit mandates are a large enough burden to cause employers to self-fund or to forgo coverage. Many employers currently offer coverage similar to state mandates, and, for those that self-fund, mandates commonly are not a factor or are only one of several factors affecting that decision. In addition to those named above, the following individuals made important contributions to this report: Roger Thomas provided legal assistance; John Hansen and Darryl Joyce evaluated the Department of Labor's enforcement of ERISA; and Paula Bonin provided computer programming for the analysis of the Current Population Survey.
|
Pursuant to congressional requests, GAO provided information on the: (1) Employee Retirement Income Security Act's (ERISA) relationship to the current system of employer-based health coverage; (2) implications of the trend toward employer self-funding on the oversight of employees' health care coverage; (3) kinds of state actions preempted by ERISA; and (4) advantages of ERISA preemption to employers that offer health care coverage to their workers. GAO found that: (1) although courts have historically interpreted ERISA to broadly restrict state regulation of employer health plans, recent Supreme Court decisions may allow states greater flexibility under general health care regulation provisions; (2) self-funded employer health plans appear to be increasing, but many employers are moderating their risks by using stop-loss coverage or managed care arrangements; (3) about 40 percent of ERISA plans, which cover about 44 million people, are employer self-funded plans, which states are preempted from regulating and taxing because they are not considered to be insurance; (4) other ERISA plans cover an additional 27 percent of the U.S. population; (5) states believe that ERISA impedes their ability to ensure adequate consumer protections and enact health cost reduction reforms; (6) states also believe that they should be able to tax and collect data on all health plan participants uniformly; (7) employers believe that ERISA has made it possible for them to offer their employees health care coverage tailored to their needs and thus reduce their costs; and (8) employers fear that changes to ERISA that would give states greater regulatory flexibility would increase their costs and jeopardize their ability to provide employee health coverage.
|
The nation’s interstate commercial motor carrier industry is extensive and comprises a number of stakeholders, including carriers, shippers, receivers, and intermediaries. Motor carriers are companies or individuals that transport cargo using motor vehicles. According to FMCSA officials, there are about 737,000 interstate carriers registered with FMCSA. While the largest motor carriers operate upward of 50,000 vehicles, approximately 80 percent of carriers are small independent owner operators or trucking companies, operating between 1 and 6 vehicles. Within the industry, there are several types of carriers. For example, carriers are either for-hire, transporting cargo for the public on a fee basis, or private, running a vehicle fleet to transport only the company’s own products. There are also two principal classes of for-hire carriers: truckload carriers transport a single shipper’s cargo between two locations, while less-than-truckload carriers transport cargo for multiple customers that may require pick-up or delivery at multiple locations. In addition, some carriers primarily provide long-haul service—which is generally considered to be intercity service—while other carriers primarily provide service within a metropolitan region, referred to as short haul. Finally, carriers use vehicles that differ by body type—including dry trailers, refrigerated trailers, flat beds, and tank trailers—some of which can carry a variety of cargo, while others are used for only one type of cargo. (See fig. 1 for examples of vehicles.) Shippers are the cargo owners that hire carriers to transport their cargo. For example, a shipper could be a product manufacturer that hires a carrier to transport its product from the manufacturing plant to the end customer, or a retail company that hires a carrier to pick up its finished products, such as clothing or electronics, from a seaport terminal or other location for transport to a distribution center.
Receivers are those who are scheduled to receive and take ownership of the cargo. The receiver could be the customer of the shipper, such as a retail company or manufacturing plant, which takes ownership of the cargo to sell or use in a production process. Intermediaries arrange for the transportation of goods between shippers and receivers. For example, a freight forwarder acts as an agent on behalf of the shipper and will consolidate shipments from several shippers and then contract with carriers to transport these shipments. In addition, a freight broker will arrange the pick-up and delivery of a shipper’s goods by a carrier without having physical control of the cargo. Third-party logistics companies provide warehousing and supply chain services for shippers and can arrange transportation of the shippers’ products. For example, carriers could transport cargo to a third-party distribution center, which can repackage the cargo for further distribution. Figure 2 provides a description of the steps typically involved in moving cargo through the system. The decisions of the various stakeholders—carriers, shippers, receivers, and others—direct the logistics (or operation) of the interstate commercial motor carrier industry and consequently can affect the occurrence and amount of detention time. Each stakeholder plans and organizes its own activities within this large industry in an effort to meet its critical objectives. Carriers attempt to minimize their travel time and the miles they drive with no cargo, while at the same time meeting the needs of shippers. Shippers try to minimize the costs of their transportation needs while ensuring delivery in a timely manner. As each stakeholder makes its own logistical decisions, it faces tradeoffs in the costs and benefits of various options for how to schedule and manage resources.
Given the numerous activities that need to be scheduled and coordinated within the industry on a daily basis, some level of trucker detention time is expected. If a shipper provides its own trucking services through a private fleet, one of the shipper’s goals will likely be to schedule shipments in a way that minimizes detention time and thus increases productivity. However, if, for example, an on-time pick-up is a high priority for a particular shipment, then the shipper may choose to schedule a truck to arrive early, thus potentially increasing the likelihood or amount of detention time. Conversely, when a shipper uses a for-hire carrier, the shipper may be less inclined to fully consider the extent of detention time when making its scheduling decisions because the shipper does not fully bear the costs of the detention time it may impose on truckers. Therefore, some detention time in the industry likely results because shippers’ and receivers’ decisions can affect the extent of detention time, while the costs of that time are largely borne by truckers. The federal role in regulating the interstate commercial motor carrier industry has changed over time, as shown in figure 3. Currently, the federal role in the industry is focused on regulating safety aspects of the trucking industry and funding transportation infrastructure, responsibilities managed by two agencies within DOT. FMCSA’s primary mission is to prevent commercial motor-vehicle-related fatalities and injuries. It carries out this mission by issuing, administering, and enforcing federal motor carrier safety regulations—including hours of service requirements—and gathering and analyzing data on motor carriers, drivers, and vehicles, among other things. FMCSA also takes enforcement actions, and funds and oversees enforcement activities at the state level through Motor Carrier Safety Assistance Program grants.
FHWA is responsible for overseeing the federal-aid highway program, which funds highway infrastructure. Within FHWA, the Office of Freight Management and Operations is tasked with promoting efficient, seamless, and secure freight flows—including freight transported by the interstate commercial motor carrier industry—on the U.S. transportation system and across the nation’s borders. The office advances its mission by building a greater understanding of freight transportation issues and trends, improving operations through advanced technologies, and educating and training freight transportation professionals. The office also conducts operational tests of intelligent transportation system technologies and promotes the development of standards for freight information exchange.

Federal hours of service regulations restrict the operations of property-carrying commercial motor vehicle drivers by setting limits on duty periods. There are three hourly limitations for these drivers: a 14-hour “driving window” after coming on duty following 10 consecutive hours off duty, 11 hours of which can be “driving time,” and a prohibition on driving after 60 or 70 hours of on duty time in 7 or 8 consecutive days, with certain exceptions and exemptions. Specifically:

14-hour “driving window”: A driver is allowed a period of 14 consecutive hours after coming on duty, following 10 or more consecutive hours off duty, in which to drive. This 14-hour period begins when the driver starts any kind of work or is required to be in readiness to work, and once the driver has completed the 14-hour period, the driver cannot drive again without first being off duty for at least 10 consecutive hours. The 14-hour period covers both on duty and off duty time.

11-hour driving limit: During the 14-consecutive-hour driving window, the driver is limited to 11 total hours of actual driving time. The 11 hours can be consecutive or nonconsecutive within the 14-hour period.
60/70-hour on duty limit: A driver is required to adhere to one of two weekly on duty limits, which is specified by the driver’s carrier. If a driver’s carrier (employer) does not operate commercial motor vehicles every day of the week, the driver may not drive after being on duty 60 hours during any 7 consecutive days. If the carrier does operate commercial motor vehicles every day of the week, a 70-hour/8-day schedule is permitted, and the driver may not drive after being on duty 70 hours in any 8 consecutive days. A driver may restart the 60- or 70-hour “week” by taking at least 34 consecutive hours off duty.

According to hours of service regulations, on duty time includes all of the time from when a driver begins work for a motor carrier or is required to be in readiness to work by that motor carrier, whether paid or not, until the time the driver is relieved from work and all responsibility for performing work. Some examples of on duty time include: driving time, to include all time spent at the driving controls of a commercial motor vehicle in operation; time at a plant, terminal, or other facility of a motor carrier or shipper, or on any public property, waiting to be dispatched; time loading, unloading, supervising, or attending the truck or handling receipts for shipments; and all other time in or upon a commercial motor vehicle unless the driver is resting in a sleeper berth.

Federal hours of service regulations require drivers to maintain a “record of duty status”—commonly referred to as a driver’s log, daily log, or log book—either in written form or electronically using an automatic or electronic on-board recording device. Drivers must account for all hours of every day in their log, including days off, and must also have a log for each day of the last 8 days they were required to log.
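The interaction of detention time with the 14-hour driving window and the 11-hour driving limit can be illustrated with a small calculation. This is a simplified, hypothetical sketch, not FMCSA software: the function name and structure are ours, and it ignores the 60/70-hour weekly limit, the 34-hour restart provision, and the regulations’ exceptions and exemptions.

```python
# Hypothetical sketch of the daily hours of service limits described above.
# Given hours already spent driving and other hours elapsed in the current
# 14-hour window (e.g., detention time spent waiting to load or unload),
# return the driving time still available before the driver must rest.
def remaining_driving_hours(hours_driven, other_window_hours):
    WINDOW_LIMIT = 14.0   # consecutive hours after coming on duty (covers on and off duty time)
    DRIVING_LIMIT = 11.0  # maximum actual driving time within that window
    window_left = WINDOW_LIMIT - (hours_driven + other_window_hours)
    driving_left = DRIVING_LIMIT - hours_driven
    return max(0.0, min(window_left, driving_left))

# As noted above, 6 hours of detention time leaves at most 8 hours of driving,
# because the 14-hour window keeps running while the driver waits.
print(remaining_driving_hours(0, 6))  # 8.0
# With no delays, driving is capped by the 11-hour limit, not the window.
print(remaining_driving_hours(0, 0))  # 11.0
```

The sketch shows why detention time is costly under these rules: waiting consumes the 14-hour window even when it adds nothing to the 11 hours of permitted driving.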
One means to ensure compliance with various safety requirements, including hours of service, is for authorized FMCSA and state officials to conduct driver and vehicle inspections of commercial motor vehicles. During certain types of inspections, these inspectors check the drivers’ logs for compliance with hours of service regulations. If the inspector finds that a driver has not complied with the hours of service regulations, the violation can result in the driver being fined and/or placed out of service. In addition, FMCSA ensures that motor carriers comply with safety requirements through compliance reviews of carriers already in the industry and safety audits of carriers that have recently started operations. Compliance reviews and safety audits help FMCSA determine whether carriers are complying with federal safety requirements—including hours of service regulations—and, if not, to take enforcement action against carriers, including placing carriers out of service. While there are no industry-wide data providing information on the occurrence of detention time, interviews with drivers, industry representatives, and motor carrier officials indicate that detention time occurs with some regularity. During our structured interviews with truck drivers, we found that the majority of these drivers had experienced detention time within the last month. Overall, 204 of the 302 drivers interviewed—about 68 percent—reported experiencing detention time within the last month. Most of these drivers—178 of the 302, or about 59 percent—reported experiencing detention time within the last 2 weeks. About 11 percent of the drivers—32 drivers—reported they last experienced detention time more than 1 month ago. For those drivers that reported previously experiencing detention time, the amount of detention time ranged from less than 2 hours to over 8 hours, and occurred at a variety of different facilities, including production facilities and distribution centers.
Finally, about 22 percent of drivers reported they had never experienced detention time. In addition, a number of motor carrier officials stated that their truck drivers experience detention time regularly enough that the companies have instituted systems to track detention time. For example, officials from one company that tracks detention time noted that their drivers experienced detention time on 12 percent of deliveries over a 3-month period. While detention time can happen to all types of carriers, several industry representatives noted that drivers of some vehicle types experience a higher degree of detention time, while drivers of other types do not typically experience as much. For example, some industry representatives noted that refrigerated trailer drivers tend to experience detention time to a greater extent than others because refrigerated trailers can maintain cargo at the required temperature and can therefore wait while cargo from nonrefrigerated trailers is unloaded first. In some cases, drivers with refrigerated trailers have had to wait overnight in order to keep the product stored at the proper temperature until it could be unloaded the next morning. In contrast, while tanker trucks can experience detention time, some industry representatives noted that tanker truck drivers do not typically experience as much detention time as drivers of some other trailer types. Truck drivers, industry representatives, and company officials identified several factors that can contribute to detention time. Based on our interviews with the 236 drivers that had reported previously experiencing detention time—either within the last month or more than 1 month ago—facility limitations, arriving for a scheduled pick-up and finding the product was not ready for shipment, poor service provided by facility staff, and facility scheduling practices were the most frequently cited contributing factors.
Other stakeholders also cited these same factors as contributing to detention time.

Facility limitations: About 43 percent of drivers reported they experienced detention time because the facilities were not adequately staffed, lacked sufficient loading and unloading equipment, or had an insufficient number of bays for loading and unloading trucks. These limitations can occur, for example, when facilities overschedule appointments for pickup or delivery or do not have enough staff or equipment to handle the number of trucks scheduled, thereby creating a backlog of vehicles that need to be loaded or unloaded.

Product not ready for shipment: About 39 percent of drivers reported they experienced detention time because the product was not ready for shipment when they arrived at the facility for pick up. This could be due to a number of reasons, such as manufacturing problems that delayed the production of the finished product. Industry representatives and company officials also highlighted that fresh produce often is not ready for shipment when drivers arrive at the loading facility. For example, one reason fresh produce might not be ready for shipment is that weather, such as heavy rains, can delay harvesting and packaging of the produce for shipment before the drivers’ scheduled pick-up time.

Poor service provided by facility staff: About 39 percent of drivers reported that poor service by the facility staff was the reason they experienced detention time. Some drivers stated that once they arrived at the facility, the facility staff were indifferent to the drivers’ schedules and would take their time before starting the loading or unloading process.

Scheduling practices: About 34 percent of drivers reported that facility scheduling practices at some facilities led to detention time.
One of these scheduling practices cited by industry representatives was a “first come, first served” system, in which the facility loads vehicles in their order of arrival at the facility. For example, some seaport terminals use this system, which results in drivers lining up at the gate to the terminal before the facility opens to make sure they can get their containers as quickly as possible. The time waiting at the gate is not considered detention time by the terminals.

Other factors: Drivers, industry representatives, and company officials noted there are some other factors not under the control of the facility that can contribute to detention time. For example, about 6 percent of drivers we interviewed reported that the driver was responsible for the detention time due to the driver’s paperwork not being in order. In these cases, the facility would either have to push back its overall schedule, potentially impacting all truck drivers scheduled for loading and unloading at that facility that day, or have the delayed driver wait for an available opening. Some company officials also noted that loading or unloading could be delayed if the driver is not familiar with either the shipper’s facilities or its loading and unloading procedures. Two other factors cited by officials include shipping facility staff calling in sick and leaving the facility short of staff, and a breakdown in loading or unloading equipment, either of which can have a cascading effect on the facility’s schedule.

Shippers may implement practices to reduce detention time at their facilities. Some shippers have established appointment systems, which allow the facility to better manage available bays, staff, and loading equipment. For example, one facility we visited schedules carriers to arrive every 30 minutes, with a goal of having each carrier either loaded or unloaded within 90 minutes.
If a carrier misses an appointment, the facility will unload that carrier whenever possible, but will not bump another carrier that makes the scheduled appointment time. Another practice to reduce detention time is to use technology, such as improved communication and vehicle inspection technology, to improve the process. For example, some seaport terminal operators have installed video cameras at the gate to speed up the process for inspecting the cargo containers as trucks enter the facility. This practice reduces the wait time at the facility’s front gate. Detention time can impact drivers’ ability to make scheduled deliveries within the hours of service requirements by putting drivers behind schedule and reducing available driving time. For those drivers that reported previously experiencing detention time, 80 percent reported that detention time reduced their available driving time. For example, some drivers noted that since the federal hours of service regulations allow them a “driving window” of no more than 14 consecutive hours—limited to 11 hours of driving time—detention time can significantly reduce the available driving window and driving time. Therefore, if a driver experiences 6 hours of detention time, that driver can only drive for 8 hours, at most, before being required to rest for 10 hours. Some drivers noted that this could delay their next scheduled delivery and, in some cases, result in the receiver charging the driver a late delivery fee. According to industry representatives, drivers who experience detention time and lose available duty and driving time may sometimes be faced with a choice of not making their scheduled delivery time, violating the speed limit, or violating the hours of service requirements to make up for lost time. Detention time can in some cases lead drivers to operate their vehicles beyond the hours of service requirements and improperly log duty time in order to make scheduled deliveries on time. 
When asked how detention time impacts them, about 4 percent of drivers responded they have driven beyond the hours of service limits and misrepresented their hours in their log books. Although we did not specifically ask the question during the structured interviews, a number of drivers we spoke with stated they kept multiple log books in order to disguise incidents where they violate hours of service requirements due to detention time. Detention time can also result in lost revenue for drivers, as well as carrier companies. Based on our structured interviews, of those drivers that reported previously experiencing detention time, 65 percent reported that detention time had caused them to lose pay. According to industry representatives, the lost revenue can result from either missing an opportunity to secure another load or having to pay late fees to the receiver. Detention time has a greater potential to result in lost revenue for independent owner operators than drivers employed by carrier companies. In general, drivers that are employed by private companies are paid by the hour. Owner operators—including owner operators that are leased to carrier companies—are typically paid by the number of miles driven or by the number of loads delivered. Because the typical owner operator’s pay structure is based on actual driving time, these drivers do not get paid for time spent waiting to load or unload. In fact, drivers have an adage that says “when the wheels ain’t turning, you’re not earning.” Carrier companies have some ability to mitigate the economic effects of detention time through a variety of means, such as charging detention fees to shippers, developing relationships with customers, using efficient loading and unloading operations, and no longer providing service to customers with persistent detention time. 
First, according to vehicle safety association officials, larger carrier companies have the leverage to include detention fee clauses in their contracts with shippers. For example, a number of carrier companies we talked with charged detention time fees to shippers for any time over 2 hours that their vehicle was at the facility. The detention time fee varied based on the specific contract; examples provided to us ranged from $40 to $80 per hour. Based on our structured interviews, 53 percent of drivers that reported previously experiencing detention time reported that their company collected detention fees. However, according to some carrier officials, not all carriers collect detention fees, even if provided for in the contract, due to their reluctance to charge their customers, particularly their larger customers with whom they conduct significant business. One carrier official explained that detention time is simply a cost of doing business in today’s freight environment. In addition, collecting detention time fees can sometimes be challenging if the shipper does not agree with the amount of detention time that occurred. For example, during a 3-month time period, one carrier billed over $4,300 in detention time fees but received less than $500. Second, some carriers work closely with their customers to reduce detention time. According to industry representatives, some carriers develop relationships with shippers and receivers as they make routine visits to their facilities and establish a familiarity with the process. For example, according to one motor carrier company, its work with customers to track and measure detention time information has, in many situations, resulted in some decrease in detention time. Third, some carriers use a more efficient loading and unloading operation called the “drop and hook” method, which limits detention time.
Drop and hook operations prevent the driver from having to wait for a trailer to be loaded or unloaded at the shipper’s facility. The driver will arrive at the facility with an empty trailer, drop off the empty trailer, and hook up the loaded trailer. The shipper will load the cargo into the trailer prior to the scheduled pick-up time. According to company officials and industry researchers we spoke to, the drop and hook method does reduce detention time. However, according to carrier officials, drop and hook requires the carrier to invest in additional trailers. For example, one carrier that used the drop and hook method had 1.5 trailers for each tractor, resulting in additional costs to purchase and maintain the trailers. Finally, some motor carrier officials stated that if they experience significant occurrences of detention time at a particular facility, the carrier could stop providing transportation services for that client if it had sufficient business with other shippers. In addition, some larger carrier companies are better able to handle logistical challenges that could result from detention time. For example, a carrier may have one of its vehicles held up because of detention time; however, a larger carrier can adjust the schedule of other vehicles to ensure the carrier is able to meet its commitments, therefore limiting the impact of the detention time. Smaller carrier companies or independent owner operators with only a few vehicles may not be able to react in a similar manner. According to industry representatives, independent owner operators have limited ability to mitigate the economic effects of detention time. For example, some industry representatives stated that since independent owner operators that do not lease on a regular basis to carrier companies generally use intermediaries to arrange for cargo, those operators do not have established contracts with shippers and thus have less leverage to charge detention time fees. 
Depending on their contractual arrangements, independent owner operators that are leased by a large carrier also may not receive detention time fees, even if the motor carrier charges and collects those fees. In addition, even if an independent owner operator that is leased by a motor carrier receives detention fees from the motor carrier, the fees may not fully compensate the driver for the detention time. That is because detention time compensation typically falls short of the compensation that drivers would receive when they are actually driving, since most of their compensation is based on miles driven. Further, according to an industry representative, some carrier companies opt to send drivers from leased independent owner operators—who, unlike some carrier companies’ own drivers, are not paid by the hour—to facilities that frequently cause detention time. In so doing, the motor carrier does not have to pay the driver for the time spent waiting to load and unload. Furthermore, in some cases, independent owner operators do not transport cargo to the same facilities as frequently as carrier companies do, which limits a driver’s familiarity with the procedures of specific facilities and could lead to detention time. For example, according to one warehouse representative we talked with, the drivers that encounter the most detention time are associated with independent owner operators that are not familiar with the requirements and rules of the facility, such as not having the proper paperwork, not having enough fuel for their refrigerated trailers, or not having the trailer in proper condition. That representative’s facility will not check in trucks that do not meet these core requirements. Finally, independent owner operators generally do not have the financial resources to purchase additional trailers to take advantage of the drop and hook method or to simply absorb the costs of detention time.
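The contractual detention fee arrangements described above, in which roughly the first 2 hours at a facility are free and time beyond that is billed at an hourly rate ($40 to $80 per hour in the examples provided to us), amount to a simple calculation. The following is a hypothetical sketch; the function name, default rate, and structure are ours for illustration, not an actual carrier’s billing system.

```python
# Hypothetical sketch of a contractual detention fee clause: time at a
# facility beyond a free window (2 hours in the contracts described above)
# is billed to the shipper at an agreed hourly rate. Illustrative only.
def detention_fee(hours_at_facility, hourly_rate=60.0, free_hours=2.0):
    billable_hours = max(0.0, hours_at_facility - free_hours)
    return billable_hours * hourly_rate

print(detention_fee(1.5))                    # 0.0   -- within the free window
print(detention_fee(6.0, hourly_rate=40.0))  # 160.0 -- 4 billable hours at $40
```

As the report notes, billing such a fee is only the first step; carriers may decline to invoice it or fail to collect it if the shipper disputes the hours recorded.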
Although FMCSA collects data from roadside inspections, which provide information on the number of hours of service violations, the agency currently does not collect—nor is it required to collect—information to assess the extent to which detention time contributed to these violations. In 2009, FMCSA and state officials conducted over 3.5 million roadside inspections of interstate and intrastate motor carriers, and almost 6 percent of these inspections resulted in at least one out-of-service violation. As shown in table 1, according to FMCSA data, hours of service violations were among the top 10 cited out-of-service violations. Specifically, violations of all three types of hours of service requirements—the 14-hour “driving window” rule, the 11-hour driving rule, and the 60/70-hour weekly on-duty rule—ranked among the 10 most frequently cited types of violations. Further, according to FMCSA officials, 14-hour rule violations were the most common out-of-service violations from U.S. roadside inspections in 2009, as well as in 2007 and 2008. FMCSA officials stated that 14-hour rule violations are straightforward and easier to detect during roadside inspections compared to other types of violations, which partially explains the more common occurrence of this type of violation. While FMCSA does not collect information on what factors contribute to hours of service violations, officials and industry representatives stated that detention time could be one of many such factors. Other factors could include a driver needing to leave the property of a facility and drive to a parking or rest area a number of miles away after having already used up the available driving hours for that day. Because FMCSA does not currently collect and analyze data on the factors that contribute to hours of service violations, its ability to assess the impact of detention time on hours of service violations, which may affect driver safety, is limited.
Agency officials stated that, while FMCSA does not identify the factors that contribute to out-of-service violations, including hours of service violations, during roadside inspections, inspectors may acquire some information on these factors during compliance reviews. However, agency officials also stated they do not currently have other data that would help them determine either how often detention time occurs or how often detention time contributes to drivers violating hours of service requirements. For example, driver log data that FMCSA reviews during inspections—either in hard copy or electronically—do not include or identify detention time. While drivers are not required to specifically note detention time in their log books, they must note the time they arrived at and departed a facility. However, even if the driver did experience some detention time within the recorded time at the facility, that does not necessarily mean it was a contributing factor to an hours of service violation. To make that determination, an inspector would have to ask the driver what happened. As a result, it may be difficult to link hours of service violations to detention time based solely on log book data. To date, research conducted by FMCSA has not specifically included efforts to determine the extent to which detention time occurs. Instead, FMCSA research has focused on an overview of freight movement, including identifying inefficiencies in freight transportation and evaluating safety and productivity improvements. For example, FMCSA’s Motor Carrier Efficiency Study, a 2007 Annual Report to Congress, examined the application of wireless technology to improve the safety and efficiency of trucking operations in the United States.
The analysis estimated that the motor carrier industry incurs financial losses in the tens of billions of dollars per year because of operating inefficiencies, and noted that “time loading and unloading” was the most costly inefficiency identified by motor carriers. While “time loading and unloading” is a key determination for whether detention time has occurred, the study does not specifically address instances of detention time or differentiate between expected loading and unloading time and detention time. FMCSA has also conducted research on hours of service and driver fatigue, with many studies completed, ongoing, or planned. For example, FMCSA recently completed a study examining whether additional sleep would more effectively restore driver performance compared to the current 34-hour restart provision. In addition, FMCSA officials noted that the agency has several ongoing hours of service studies. Although FMCSA does not currently have data on detention time, the agency plans to conduct three studies addressing driver fatigue, driver compensation, and detention time. First, agency officials stated that a driver fatigue study is planned for July 2011. FMCSA officials stated that, as part of this study, they plan to conduct an annual driver survey on driver fatigue to understand the impact of changes in the commercial driver workforce and to help ensure the safety and well-being of its members. The results will be used to develop and evaluate rules, regulations, policies, and enforcement activities for the motor carrier industry. However, while FMCSA has developed a problem statement, it has yet to finalize the details of this study’s scope and methodology. Second, FMCSA plans to conduct a study examining the impact of driver compensation, such as pay per mile, on driver safety. Finally, FMCSA has requested funding for a study on detention time, which it plans to conduct in 2012.
While FMCSA officials said they plan to survey drivers on the amount of time they wait to load or unload shipments, FMCSA has to date only developed a problem statement. The purpose of the study will be to better understand the nature of the problem of detention or waiting time in the industry. Agency officials stated the study will also identify any changes in current regulations that would reduce driver wait times. In addition, officials stated they will use the prior two studies to develop the detention time study’s scope and methodology. Therefore, it is not clear whether the detention time study will address, among other things, the extent to which detention time contributes to drivers violating hours of service requirements. In addition to FMCSA’s planned studies on detention time, collecting information on the factors that contribute to detention time through driver and vehicle inspections or other means could help FMCSA determine whether detention time is a significant factor in contributing to drivers violating hours of service requirements and, consequently, whether additional federal action by DOT or Congress might be warranted to mitigate detention time as a potential safety issue. For example, FMCSA could collect this type of information through level IV special inspections, which are typically one-time examinations based on an existing or potential problem, administered for data collection purposes—such as investigating defects in brakes or intermodal equipment—and normally conducted in support of a study or to verify or refute a suspected trend. In 2009, FMCSA conducted over 16,500 level IV special roadside inspections in the United States. According to agency officials, level IV inspections are effectively level I standard inspections plus some additional questions for data collection, and the actual work and resources remain the same.
These types of inspections can be administered in a designated period of time, such as a 3-day period in which many inspections would be scheduled to occur. In addition, FMCSA could use a study-specific data collection form to acquire information on factors contributing to hours of service violations during inspections, similar to the methodology used in an unpublished FHWA study examining the violation of hours of service requirements in relation to the origin of the load. That study used an inspection form and a data collection form to acquire additional information outside of the standard inspection. Hours of service requirements are designed to ensure that truck drivers get the necessary rest to perform safe operations, to help continue the downward trend in commercial motor vehicle fatalities, and to maintain motor carrier operational efficiencies. All three goals further FMCSA’s primary mission to prevent fatalities and injuries involving commercial motor vehicles. Therefore, information on the factors that contribute to hours of service violations could help FMCSA in developing any future policy, rules, regulations, or programs to improve commercial vehicle safety. Any federal action to address issues associated with detention time beyond hours of service requirements would require careful consideration. Since there is no current federal regulation of detention time, any potential federal action would need to be based on a full understanding of the complexities of the industry. For example, a standard definition of detention time would need to be established. However, as we have shown, there are often disagreements between shippers and carriers regarding how much detention time occurred in a particular case, so finding a commonly agreed-upon definition in the industry could be challenging. It would also need to be decided which stakeholders any new federal action would target, since a wide variety of stakeholders is involved.
Finally, the federal government would need to evaluate whether any unintended consequences may flow from a new federal action and, if so, how to avoid or mitigate those consequences. Detention time is a complex issue involving many stakeholders. While it is not uncommon for drivers to experience detention time, there are no data available that can provide any definitive information on how often it occurs, how long it lasts, or what types of carriers or facilities experience the most detention time. In fact, detention time can be difficult to measure, as there are different interpretations of what constitutes detention time. Detention time can be caused not by one predominant factor but instead by a wide variety of factors, primarily related to facility operations. Furthermore, some detention time likely results because shippers’ decisions affect the extent of detention time while the costs of detention time are largely borne by truckers. While detention time can have an economic impact on drivers and carrier companies, the current federal role in the industry focuses on safety—including hours of service requirements—rather than economic regulation. FMCSA’s plans to look at detention time in upcoming studies may shed further light on the contributing factors and extent of detention time, but the agency is still in the initial planning stages and has not determined the scope of these studies. Without information on the extent to which detention time occurs and the extent to which it contributes to hours of service violations, FMCSA may not have key information to help reduce these types of violations.
To support the primary mission of FMCSA in improving the safety of commercial motor vehicles, we recommend the Secretary of DOT direct the Administrator of FMCSA to examine the extent to which detention time contributes to drivers violating hours of service requirements in its future studies on driver fatigue and detention time, and through data collected from its driver and vehicle inspections. We provided a draft of this report to DOT for review and comment. DOT officials provided technical comments which we incorporated into the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To determine how regularly truck drivers experience detention time, the factors that contribute to detention time, and how detention time affects the interstate commercial motor carrier industry, we reviewed hours of service regulations, legislation related to the regulation of the trucking industry, and relevant Federal Motor Carrier Safety Administration (FMCSA) regulations; agency reports such as motor carrier efficiency studies; Federal Highway Administration (FHWA) freight facts and figures; and an Office of Motor Carriers study on trucking operations and hours of service. We also reviewed information about the trucking industry and driver safety overviews such as trucking logistics overviews, and background information on warehouse and distribution centers. 
To capture the various perspectives of the diverse trucking industry, we interviewed officials and representatives from FMCSA and FHWA, port authorities, carrier companies, trucking associations, manufacturing associations, warehouse facilities, and research firms. To conduct these interviews, we contacted officials and representatives across the country by phone, and we conducted site visits to Chicago, Illinois; Newark, New Jersey; and Port Arthur, Texas. In addition, we conducted structured interviews with truck drivers to gain a general understanding of (1) the frequency with which truck drivers experience detention time, (2) what truck drivers perceive to be the factors that contribute to detention time, and (3) how detention time affects them. After initially developing, reviewing, and modifying the interview questions, we conducted two pretests with truck drivers at truck stops in North Bend, Washington, and Baltimore, Maryland. The two pretests were conducted by GAO team members who approached respondents asking if they would like to answer a short questionnaire on detention time. The GAO team members asked the respondents the structured questions and noted any questions, comments, and lack of clarity to the questions on the part of the pretest respondents. Final changes to the structured interview questions were made on the basis of observations from the pretests. A copy of the structured interview questions and results of the closed-ended questions are included in appendix II. The targeted population for the structured interviews was truck drivers. We conducted the interviews at four truck stops: Baytown, Texas; Ashland, Virginia; Walcott, Iowa; and North Bend, Washington. We chose these sites to obtain geographic dispersion and on the basis of input from industry stakeholders. The GAO team members stationed themselves on-site where the highest volume of drivers was located, such as the front entrance.
The GAO team members self-selected the respondents, and therefore the results from this nongeneralizable sample cannot be used to make inferences about the population. Of the 549 truck drivers approached, 247 drivers declined to be interviewed, yielding a 55 percent response rate. Of the 302 truck drivers we interviewed, 230 identified themselves as only long-haul drivers, 21 drivers identified themselves as only short-haul drivers, 40 identified themselves as providing both long- and short-haul service, and 11 drivers did not provide a response. The structured interview of truck drivers contained a mixture of closed-ended and open-ended questions. To analyze drivers’ verbal responses to the open-ended questions, two analysts independently coded the responses and resolved any discrepancies in the categorization. To determine what federal actions, if any, could be taken to address the issues associated with detention time, we reviewed existing research and studies conducted by the Department of Transportation, such as FMCSA’s Motor Carrier Efficiency Study and FHWA’s report that provided an overview of the volume and value of freight flows in the United States. Further, we reviewed FMCSA’s plans for future research studies on detention time, driver fatigue, and driver compensation. In addition, we reviewed hours of service requirements; documentation on FMCSA’s enforcement of safety regulations, such as the types of driver and vehicle inspections; relevant laws and regulations related to the federal government’s role in the trucking industry, such as the Motor Carrier Act of 1935, the Motor Carrier Act of 1980, and the ICC Termination Act of 1995; and other relevant trucking industry requirements and rules, such as 49 C.F.R. part 395 and the electronic on-board recorder rule.
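The sample arithmetic reported above can be checked with a short sketch. All figures come from the report; the variable names are ours.

```python
# Illustrative check of the interview-sample arithmetic reported above
# (all figures from the report; variable names are our own).
approached = 549
declined = 247
interviewed = approached - declined  # 302 drivers completed the interview

response_rate = interviewed / approached  # about 0.55, i.e., 55 percent

# Self-reported service types among the interviewed drivers
long_haul_only = 230
short_haul_only = 21
both_services = 40
no_response = 11
assert long_haul_only + short_haul_only + both_services + no_response == interviewed

print(f"Interviewed: {interviewed}, response rate: {response_rate:.0%}")
# Interviewed: 302, response rate: 55%
```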
Furthermore, we relied on FMCSA North American Free Trade Agreement Safety Statistics on out-of-service violations from roadside inspections for information on the number of hours of service violations. We did not independently verify those statistics since we reported them for contextual purposes, and they do not materially affect our findings. As such, we did not conduct a data reliability assessment of these data. Finally, we interviewed officials and representatives from FMCSA and FHWA, port authorities, carrier companies, trucking associations, manufacturing associations, warehouse facilities, and research firms to get their perspective on potential federal actions that could be taken to address detention time issues.

Appendix II: Structured Interview Questions and Results of the Closed-Ended Questions

1. Last experienced detention time: In the last 7 days, 51.0% (154); In the last 2 weeks, 7.9% (24); In the last month, 8.6% (26); More than 1 month ago, 10.6% (32); Does not experience detention time, 21.9% (66)

2. Did you collect detention fees? 35.3% (83); 62.6% (147); 2.1% (5)

3. Does the company you work for collect detention fees? 53.4% (125); 18.4% (43); 2.1% (5); 26.1% (61)

4. During your last detention time, how long did you wait from gate to gate? (open-ended)

5. Did you have any wait time before you got to the gate? 21.2% (49); 78.8% (182)

6. If so, what were the reasons for the wait time before you got to the gate? (open-ended)

7. Did you have an appointment time? 79.5% (186); 20.5% (48)

8. For the last time you experienced detention time, what was the reason? (open-ended)

9. For the last time you experienced detention time, how did the detention time impact you, if at all? (open-ended)

10. What type of freight were you hauling? (open-ended)

11. What type of facility were you delivering to or picking up freight from? (open-ended)

12. In general, does detention time impact your ability to meet federal hours of service requirements? 87.2% (204); 12.8% (30)

13. If yes, in what way does it impact you? (open-ended)

14. Besides the reason you mentioned previously, are there other reasons why you have experienced detention time in the past? (open-ended)

15. Besides the impacts you mentioned, has detention time impacted you in other ways in the past? (open-ended)

16. Are you an owner operator, an owner operator leased, or a company driver? Owner operator, 30.4% (89); Owner operator leased, 10.2% (30); Company driver, 59.0% (173)

17. Are you paid according to your mileage or by a percentage, hourly, or by some other method? Mileage, 64.7% (189); Percentage, 25.7% (75); Hourly, 2.7% (8); Other, 6.8% (20)

18. Are you a long-haul or short-haul driver? Long haul, 79.0% (230); Short haul, 7.2% (21); Both long haul and short haul, 13.7% (40)

19. Are you an individual or team driver? Individual, 88.0% (257); Team, 11.3% (33); 0.7% (2)

Appendix III: Driver’s Daily Log

[Sample daily log form omitted. The legible portions of the form indicate that the original must be submitted to the carrier within one calendar day (24 hours), that the driver retains the duplicate in possession for eight days, and that the form’s fields include total miles driving today, vehicle numbers (show each unit), name of carrier or carriers, driver’s signature in full, main office address, and name of co-driver.]

According to FMCSA, the regulations do not say what a log form must look like. However, it must include a 24-hour graph grid, in accordance with regulations, and the following information on each page, according to the agency:

Date: Drivers must write down the month, day, and year for the beginning of each 24-hour period. (Multiple consecutive days off duty may be combined on one log page, with an explanation in the “Remarks.”)

Total miles driving today: Drivers must write down the total number of miles driven during the 24-hour period.

Motor coach/bus number: Drivers must write down either the vehicle number(s) assigned by their company, or the license number and licensing state for each truck (and trailer, if any) driven during the 24-hour period.

Name of carrier: Drivers must write down the name of the motor carrier(s) they are working for. If drivers work for more than one carrier in a 24-hour period, they must list the times they started and finished work for each carrier.

Main office address: Drivers must write down their carrier’s main office address.
Signature: Drivers must certify that all of their entries are true and correct by signing their log with their legal name or name of record.

Name of co-driver: Drivers must write down the name of their co-driver, if they have one.

Time base to be used: Drivers must use the time zone in effect at their home terminal. Even if they cross other time zones, they must record time as it is at their terminal. All drivers operating out of their home terminal must use the same starting time for the 24-hour period, as designated by their employer.

Total hours: Drivers must add and write down the total hours for each duty status at the right side of the grid. The total of the entries must equal 24 hours (unless one page is used to reflect several consecutive days off duty).

Remarks: This is the area where drivers must list the city, town, or village, and state abbreviation when a change of duty status occurs. Drivers should also explain any unusual circumstances or log entries that may be unclear when reviewed later, such as encountering adverse driving conditions.

Shipping document number(s), or name of shipper and commodity: For each shipment, drivers must write down a shipping document number (such as a charter order or a bus bill) or the name of the shipper and what they are hauling.

Appendix IV: North American Standard Driver and Vehicle Inspection Levels

Level I (North American Standard Inspection): An inspection that includes examination of driver’s license; medical examiner’s certificate and waiver, if applicable; alcohol and drugs; driver’s record of duty status as required; hours of service; seat belt; vehicle inspection report; brake system; coupling devices; exhaust system; frame; fuel system; turn signals; brake lamps; tail lamps; head lamps; lamps on projecting loads; safe loading; steering mechanism; suspension; tires; van and open-top trailer bodies; wheels and rims; windshield wipers; and emergency exits on buses and hazardous materials requirements, as applicable.
Level II (Walk-Around Driver/Vehicle Inspection): An examination that includes each of the items specified under the North American Standard Inspection. As a minimum, level II inspections must include examination of: driver’s license; medical examiner’s certificate and waiver, if applicable; alcohol and drugs; driver’s record of duty status as required; hours of service; seat belt; vehicle inspection report; brake system; coupling devices; exhaust system; frame; fuel system; turn signals; brake lamps; tail lamps; head lamps; lamps on projecting loads; safe loading; steering mechanism; suspension; tires; van and open-top trailer bodies; wheels and rims; windshield wipers; emergency exits on buses; and hazardous materials requirements, as applicable. It is contemplated that the walk-around driver/vehicle inspection will include only those items that can be inspected without physically getting under the vehicle.

Level III (Driver-Only Inspection): A roadside examination of the driver’s license; medical certification and waiver, if applicable; driver’s record of duty status as required; hours of service; seat belt; vehicle inspection report; and hazardous materials requirements, as applicable.

Level IV (Special Inspections): Inspections under this heading typically include a one-time examination of a particular item. These examinations are normally made in support of a study or to verify or refute a suspected trend.

Level V (Vehicle-Only Inspection): An inspection that includes each of the vehicle inspection items specified under the North American Standard Inspection (level I), without a driver present, conducted at any location.

Level VI (Enhanced Inspection for Radiological Shipments): An inspection for select radiological shipments, which includes inspection procedures, enhancements to the level I inspection, radiological requirements, and the enhanced out-of-service criteria. Select radiological shipments include only highway route controlled quantities as defined by title 49, section 173.403, and all transuranics.
In addition to the contact named above, key contributors to this report were Sara Vermillion (Assistant Director), Amy Abramowitz, Richard Bulman, Lauren Calhoun, Delwen Jones, Sara Ann Moessbauer, Joshua Ormond, Tim Schindler, Elizabeth Wood, and Adam Yu.
The interstate commercial motor carrier industry moves thousands of truckloads of goods every day, and any disruption in one truckload’s delivery schedule can have a ripple effect on others. Some waiting time at shipping and receiving facilities—commonly referred to as detention time—is to be expected in this complex environment. However, excessive detention time could impact the ability of drivers to perform within federal hours of service safety regulations, which limit duty hours and are enforced by the Federal Motor Carrier Safety Administration (FMCSA). This report discusses: (1) How regularly do truck drivers experience detention time and what factors contribute to detention time? (2) How does detention time affect the commercial freight vehicle industry? (3) What federal actions, if any, could be taken to address detention time issues? GAO analyzed federal and industry studies and interviewed a nongeneralizable sample of truck drivers, as well as other industry stakeholders and FMCSA officials. While there are no industry-wide data on the occurrence of detention time, GAO interviews with over 300 truck drivers and a number of industry representatives and motor carrier officials indicate that detention time occurs with some regularity and for a variety of reasons. About 59 percent of interviewed drivers reported experiencing detention time in the past 2 weeks, and over two-thirds reported experiencing detention time within the last month. Drivers cited several factors that contribute to detention time. About 43 percent of drivers identified limitations in facilities, such as the lack of sufficient loading and unloading equipment or staff. These limitations can occur when facilities overschedule appointments, creating a backlog of vehicles. Another factor cited by about 39 percent of drivers was the product not being ready for shipment.
Other factors include poor service provided by facility staff, facility scheduling practices that may encourage drivers to line up hours before the facility opens, and factors not under the control of the facility, such as drivers filing paperwork incorrectly. Some facilities are taking steps to address these factors, such as using appointment times. Detention time can result in reduced driving time and lost revenue for drivers and carriers. Of those drivers who reported previously experiencing detention time, about 80 percent reported that detention time impacts their ability to meet federal hours of service safety requirements—a maximum of 14 hours on duty each day, including up to 11 hours of driving—by reducing their available driving time. About 65 percent of drivers reported lost revenue as a result of detention time from either missing an opportunity to secure another load or paying late fees to the shipper. Some practices can mitigate these economic impacts, such as charging detention time fees and developing relationships with facilities so drivers become familiar with a facility’s process. According to industry representatives, carrier companies are better positioned than independent owner operators to use such practices and are better able to handle logistical challenges that may result from detention time. While FMCSA collects data from drivers during roadside inspections, which provide information on the number of hours of service violations, the agency currently does not collect—nor is it required to collect—information to assess the extent to which detention time contributes to these violations. Agency officials stated that FMCSA does not identify the factors that contribute to hours of service violations, and detention time could be just one of many factors. To date, FMCSA research has focused on an overview of freight movement, but not the extent to which detention time occurs or how it may impact hours of service violations.
FMCSA plans to conduct a 2012 study to better understand the extent to which detention time occurs. Obtaining a clearer industry-wide picture about how detention time contributes to hours of service violations could help FMCSA determine whether additional federal action might be warranted. However, any additional federal actions to address issues associated with detention time beyond hours of service would require careful consideration to determine if any unintended consequences may flow from federal action to regulate detention time. GAO recommends that FMCSA examine the extent to which detention time contributes to hours of service violations in its future studies on driver fatigue and detention time. We provided a draft of this report to DOT for review. DOT officials provided technical comments, which we incorporated into the report, as appropriate.
Pesticides are used extensively in agricultural production throughout the world to control or kill insects, fungi, or other pests and to increase crop yields. But pesticides can also harm human health and the environment. As the types and number of pesticides have grown over the past 30 years, their effects on health and the environment have come under closer scrutiny. And as scientific evaluation has shown that certain food-use pesticides can cause cancer, birth defects, and other disorders, the use of some of these pesticides has been banned, or canceled, in the United States. But canceling a pesticide may not eliminate all of its risks, particularly if its residues persist in the environment or appear on imported foods. Hence, federal food safety agencies must decide not only which pesticides to cancel but also how to regulate the residues of canceled pesticides that continue to appear in foods. These decisions made after a pesticide has been canceled can have important health and economic implications. Federal responsibility for protecting public health and the environment from unsafe pesticides is shared by the Environmental Protection Agency (EPA), the Food and Drug Administration (FDA), and the U.S. Department of Agriculture (USDA). Broadly speaking, EPA sets standards for pesticide safety, which FDA and USDA monitor and enforce. Under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), EPA registers (licenses) pesticide products for use on specific crops grown in the United States. EPA may register a pesticide if it determines, among other things, that the pesticide will perform its intended function without causing “unreasonable adverse effects on the environment.” FIFRA defines this term to mean “any unreasonable risk to man or the environment, taking into account the economic, social, and environmental costs and benefits” of the pesticide’s use. EPA may register a single pesticide for multiple food and nonfood uses. 
Residues of a pesticide used on a food or feed crop can remain on the food or feed and be ingested with it. EPA is required under the Federal Food, Drug, and Cosmetic Act (FFDCA) to establish a tolerance—or an exemption from a tolerance—for any registered use of a pesticide on food or animal feed. A tolerance specifies the maximum amount of the pesticide’s residue that may legally remain in or on the food or feed. A single pesticide registered for multiple food or feed uses must have multiple tolerances. FDA and USDA monitor pesticides’ residues in foods and feed sold in interstate commerce using the tolerances established by EPA. If food or feed products contain residues of pesticides that have not been granted tolerances (or exemptions from tolerances) or if residues exceed tolerances, foods are considered adulterated and are subject to seizure. FDA monitors most foods sold in interstate commerce except meat, poultry, and certain egg products, which are monitored by USDA. States are responsible for monitoring foods that are not sold in interstate commerce. After registering a pesticide, EPA continues to evaluate its safety, principally through two major programs—reregistration and special review. If EPA finds that a pesticide poses unreasonable risks to humans or the environment, the agency may cancel the registrations for some or all of its uses. A manufacturer may also voluntarily cancel a pesticide’s registrations. Under amendments to FIFRA, enacted in 1972 and 1988, EPA is reevaluating and reregistering thousands of previously registered pesticide products on the basis of current scientific standards. In implementing this mandate, EPA is focusing primarily on about 600 active ingredients that are the main components of the individual pesticide products. 
Although, as we reported in May 1993, EPA will not complete its reregistration program for many years, many manufacturers have already canceled the registrations for thousands of pesticide products containing hundreds of active ingredients rather than pay the fees or develop the data required to support the products’ reregistration. When new evidence indicates that a registered pesticide may pose a significant health or environmental risk, EPA conducts an extensive analysis, known as a special review, to determine whether the risks to human health or the environment exceed the benefits of continued use. Through this process, EPA has determined that a number of active ingredients—and the products containing these active ingredients—pose unreasonable risks to human health or the environment and has therefore canceled some or all registrations for the use of these pesticides. In many cases, a manufacturer facing special review has voluntarily canceled a pesticide’s registrations. In general, after a pesticide’s registration for a specific use has been canceled, the pesticide may no longer be sold or distributed for that use in the United States. However, as long as the pesticide’s tolerances remain in effect, foods containing residues of the pesticide may be sold in the United States. FDA and USDA may not classify such foods as adulterated and may not seize and remove them from commerce. Neither FIFRA nor FFDCA requires EPA to revoke a pesticide’s tolerances after it has canceled the pesticide’s registrations. However, in 1982, after consulting with FDA and USDA, EPA adopted a policy on revoking the tolerances for canceled pesticides in which it stated that “when a pesticide’s registration for a food or feed use is canceled because of a concern about the safety of the pesticide, the associated tolerance . . . 
is no longer justified and logically should be revoked.” EPA added that “the agencies are concerned that having formal tolerances remaining in effect for canceled pesticides may serve to condone use of these pesticides in this country and/or in or on commodities imported from foreign countries.” Although most pesticides break down fairly quickly in the environment, some pesticides degrade very slowly and persist in the environment long after their use has ended. Hence, residues of these pesticides may be unavoidable in certain foods. To provide standards for regulating these unavoidable residues in foods, EPA recommends action levels, which FDA and USDA have agreed to establish for use in monitoring and enforcement, in accordance with EPA’s 1982 policy. In recommending action levels, EPA’s policy requires the agency to assess health risks as well as the extent to which residues are unavoidable in foods and to periodically lower action levels as residues of canceled pesticides decline in the environment. Like a tolerance, an action level specifies the maximum amount of a pesticide’s residues that may be allowed in or on a food or feed. However, an action level is established only for residues that are considered unavoidable in a certain food. An action level may be established to take the place of a tolerance that has been revoked. An action level may also be established for a pesticide’s unavoidable residues in a food for which a tolerance was never set because the pesticide was never registered for use on that food. FDA issued a notice in the Federal Register (55 Fed. Reg. 14359, Apr. 17, 1990) explaining how the agency would use action levels. 
FDA stated that, according to FFDCA, “in the absence of a tolerance, any amount of a pesticide residue in a food or feed is unsafe and therefore renders the food or feed adulterated.” But when a food or feed is unavoidably contaminated with certain persistent pesticides that do not have tolerances, FDA said that it would use action levels to provide guidance for determining when enforcement action was warranted. Most of the action levels that EPA has proposed or recommended have been for a group of chlorinated compounds—including DDT, chlordane, and dieldrin—that were widely used in U.S. agriculture during the 1950s and 1960s. Because these compounds were later found to pose unacceptable chronic health risks to humans and to affect reproduction and cause birth defects in wildlife, most of their registrations were canceled during the 1970s. However, unlike most pesticides, these compounds have not readily broken down. Today, they are still found in soil, sediment, and water. Chlorinated compounds are not highly concentrated in plants, but they are accumulated in other organisms, particularly in fish, which are at or near the top of the aquatic food chain. Unlike the herbivorous land animals eaten by humans, fish are often predators. When they prey on other aquatic animals, they may ingest and accumulate compounds that their prey have already accumulated. According to EPA, aquatic organisms may accumulate environmental contaminants in concentrations up to 1 million times greater than are found in the surface water from which the organisms are taken. Although these chlorinated pesticides were never registered for use on fish, they have been found in fish for decades, largely because agricultural runoff transported the pesticides to the nation’s rivers and lakes. Since these pesticides did not have tolerances for fish, FDA established action levels as guidelines for determining when enforcement action was warranted. 
According to EPA, most foods and feeds either contain no detectable residues of these canceled pesticides or contain residues that are well below the recommended action levels. Therefore, EPA believes that the dietary risk from these canceled pesticides in most foods is low. But because of the relatively high potential for these persistent pesticides to be concentrated in fish, health risks from dietary exposure to these canceled pesticides are greater in fish than in other foods. When EPA decides to revoke a pesticide’s tolerances, it first verifies that all of the registrations associated with these tolerances have been canceled. It then reviews monitoring data to determine whether and to what extent residues of the canceled pesticide remain in foods and whether action levels are needed to replace the existing tolerances. EPA also analyzes the economic impact of revoking tolerances on domestic food producers and on imported commodities. Then, EPA prepares and issues a preliminary notice in the Federal Register stating its intent to revoke certain tolerances and requesting comments from interested parties. If action levels are needed, EPA specifies what levels it intends to recommend to FDA or USDA. After the 60-day comment period has expired, EPA issues a final notice in the Federal Register announcing the effective date of the tolerances’ revocation and, if necessary, the final recommended action levels. Concerned that residues of canceled pesticides in food continue to pose a health risk to U.S. consumers, the Chairman, Environment, Energy, and Natural Resources Subcommittee, House Committee on Government Operations, asked GAO to (1) determine whether marketed foods contain unsafe levels of residues from canceled pesticides and (2) evaluate EPA’s procedures for revoking tolerances for canceled food-use pesticides. 
To determine whether marketed foods contain unsafe levels of residues from canceled pesticides, we focused on health risks from fish contaminated with residues of five canceled pesticides—DDT, chlordane, dieldrin, heptachlor, and mirex. We focused on these five canceled pesticides because—unlike most other pesticides—they are highly persistent in the environment and because EPA, the National Academy of Sciences, and other organizations agree that they pose significant health risks through dietary exposure. We focused on fish because fish are more likely than most other foods to accumulate residues of these canceled pesticides. In addition, data on the health risks of residues from other canceled pesticides on food commodities are sparse. But data on the health risks of residues from these five canceled pesticides in fish were readily available because EPA had assessed these risks in response to concerns expressed by Members of Congress, EPA regional officials, and environmental organizations. To evaluate EPA’s basis for proposing action levels for the residues of five canceled pesticides in fish, we reviewed EPA documents and Federal Register announcements on establishing action levels. We also reviewed EPA’s study of residue levels and health risks for these pesticides, as well as EPA’s analyses of the economic impact of lowering the action levels for residues of the five canceled pesticides in fish. To determine the trends in residue levels for these canceled pesticides, we reviewed FDA’s pesticide monitoring data for fish and fishery products and FDA’s total diet studies for all food commodities, from 1984 through 1992, for detections of DDT, chlordane, and dieldrin. We compared the current action levels with the actual residue levels detected in fish tested by FDA. Specifically, we compared the current action level for dieldrin residues in whitefish with the average dieldrin residues that FDA detected in testing domestic whitefish from 1984 through 1992. 
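The year-by-year comparison described above can be sketched in a few lines. Only the 1984 average (0.313 ppm), the 1992 average (0.120 ppm), and the 0.3 ppm action level come from this report; the helper function itself is an illustrative assumption, not FDA's or GAO's actual analysis code.

```python
# Sketch of comparing FDA's average dieldrin detections in domestic
# whitefish against the current 0.3 ppm action level. Only the 1984
# and 1992 averages and the action level are taken from the report.

ACTION_LEVEL_PPM = 0.3  # current FDA action level for dieldrin in fish


def below_action_level(avg_by_year, action_level=ACTION_LEVEL_PPM):
    """Return, for each year, whether the average residue falls below the level."""
    return {year: ppm < action_level for year, ppm in avg_by_year.items()}


# Yearly average residues (ppm); intermediate years are omitted here.
averages = {1984: 0.313, 1992: 0.120}

flags = below_action_level(averages)
# Arithmetically, 0.313 ppm is not below 0.3 ppm, so 1984 is flagged;
# the 1992 average (0.120 ppm) is well below the action level.
```

A real analysis would, as GAO describes, include every year from 1984 through 1992 rather than the two endpoint years shown here.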
To demonstrate the effect of federal action levels on consumers of fish that are not tested by FDA because they do not enter interstate commerce, we interviewed EPA Office of Water officials and reviewed EPA studies on (1) the basis that states use to establish fish advisories, (2) the levels of contamination from pesticides and other chemicals that are found in fish nationwide, and (3) the guidance that the Office of Water provides to states on establishing fish advisories. We also examined a 1991 National Academy of Sciences study of seafood safety, which discusses the extent to which fish are contaminated by pesticides and other chemicals and the actions that are needed to better inform consumers of the potential risks of eating certain fish. In addition, we contacted several state health and environmental officials to find out how federal action levels affect their regulation of pesticides in fish and to determine the extent of their efforts to monitor these pesticides. To evaluate EPA’s process for revoking the tolerances for canceled pesticides, we interviewed key EPA officials who either were or had been involved in the revocation process, and we collected documents explaining EPA’s revocation policy and procedures and showing the status of EPA’s revocation efforts. We also examined the Federal Register notices for all pesticides whose tolerances had been revoked as of July 1994 to determine how many tolerances had been revoked and how much time had elapsed between cancellation and revocation. In addition, we reviewed EPA documents and information systems to determine how many canceled pesticides still have tolerances. When we reviewed data from EPA’s information systems on canceled food-use pesticides, we found that much of the information was unreliable and, therefore, could not be used. 
To identify canceled food-use pesticides that still have tolerances, we examined EPA planning documents and reregistration status reports and asked EPA officials to verify the information. In addition, we examined FDA fiscal year 1992 monitoring data to determine whether some canceled pesticides that still had tolerances were appearing in the U.S. food supply. We conducted our review between April 1993 and September 1994 in accordance with generally accepted government auditing standards. We discussed the facts and analysis presented in this report with responsible officials from EPA—including the Deputy Assistant Administrator, Office of Prevention, Pesticides, and Toxic Substances; the Deputy Director, Office of Pesticide Programs; and the Director, Policy and Special Projects Staff, Office of Pesticide Programs—and from FDA—including the Deputy Associate Commissioner for Regulatory Affairs; the Director, Office of Policy, Planning, and Strategic Initiatives, Center for Food Safety and Applied Nutrition; and the Director, Contaminants Policy Staff, Office of Regulatory Affairs. These officials generally agreed with the information presented but suggested a number of technical and editorial changes that we incorporated where appropriate. As requested, we did not obtain written agency comments on a draft of this report. According to EPA, most pesticides break down fairly quickly in the environment and therefore do not appear at significant levels in most foods. But a few pesticides whose registrations were canceled about 20 years ago have persisted in the environment. EPA believes that residues of these pesticides are present at low levels in most foods. However, they are found in some fish at levels that exceed EPA’s usual negligible risk standard. 
The action levels currently used to regulate residues of these canceled pesticides in fish do not meet criteria in EPA’s 1982 policy because they (1) are not based on an assessment of health risks and (2) have never been adjusted to reflect declines in residue levels that have occurred since FDA first set the action levels in the late 1960s and early 1970s. In 1991, after conducting a study that evaluated health and economic effects and more recent residue data, EPA proposed lower action levels to FDA. While FDA agreed that action levels should be lowered to reflect declines in residues of these pesticides in fish, it believed that EPA’s proposed levels would represent too great a reduction. Despite their shared belief that action levels should be lowered, neither agency has since taken any action to reach agreement on appropriate lower action levels. In its 1982 policy on revoking the tolerances for canceled pesticides, EPA established principles for recommending action levels to FDA and USDA that emphasized the importance of assessing health risks and actual residue levels in food. Under the policy, action levels would “be set limiting the quantity of a pesticide in or on food commodities to the extent necessary to protect the public health.” Although action levels would “take into account the extent to which the contaminant is unavoidable,” they would be “sufficient to protect the public health.” In some instances, according to the policy, the health risk for a given pesticide could be so great that no residue level would be acceptable. In these instances, the policy calls for EPA to recommend action levels that do not exceed levels that FDA can detect using its current testing methods. Finally, the policy stated that EPA would periodically review action levels and lower them as residues of canceled pesticides in food declined.
In 1985, EPA placed preliminary notices in the Federal Register announcing its intention to revoke the tolerances for DDT, chlordane, and dieldrin, whose registrations for food uses it had canceled during the 1970s. Because these pesticides’ residues persisted in the environment, EPA, in accordance with its 1982 policy statement, also proposed action levels to replace the revoked tolerances. For fish, EPA proposed to retain the action levels that FDA had been using since the late 1960s and early 1970s. In developing the action levels that it proposed for DDT, chlordane, and dieldrin in 1985, EPA primarily reviewed residue data from the 1970s, which indicated that residue levels in fish had not declined much since the action levels were originally established. In addition, EPA did not assess the health risks posed by these residues, as directed in its 1982 policy. In response to concerns about DDT expressed in a 1986 congressional hearing, EPA said that it had not assessed health risks when it proposed action levels to replace the tolerances or existing action levels for this pesticide. Instead, EPA reviewed FDA’s pesticide monitoring data and proposed action levels that reflected the actual levels of DDT residue found in foods monitored during the late 1970s. EPA officials told us that the proposed action levels for DDT and other chlorinated pesticides were set at a level high enough so that most—about 95 percent—of the residues found in foods would be at or below the action levels. EPA reasoned that the residues of these canceled pesticides were unavoidable and had not entered the food supply through the misuse of pesticides. Therefore, the agency did not want to penalize food producers for past legal uses of the pesticides. In response to its preliminary notices, EPA received no significant comments on the adequacy of the action levels it had proposed for foods other than fish. 
But a number of commenters—including two EPA regions—questioned the safety of the action levels proposed for fish. EPA’s Region VII noted that EPA apparently had not reviewed available health effects data, as required by its 1982 policy, to assess the safety of the proposed action levels. Similarly, EPA’s Region V commented that EPA had not assessed the effects of the proposed action levels on human health. According to several commenters, the risk of cancer under the proposed action levels was far greater than the agency’s risk standards usually allowed. For example, although EPA typically applies a negligible risk standard for cancer when regulating pesticides in food, EPA’s Region VII stated that the risk of cancer under the proposed action level for chlordane was 1 in 22,000, and the National Wildlife Federation estimated that the risk of cancer for dieldrin was 1 in 1,000. In commenting on the proposed action levels for fish, FDA maintained that the existing action levels might need to remain in effect because high levels of DDT residue were still being found in fish from at least one part of the United States. But FDA also said that EPA needed to study the health effects of these pesticides in light of the comments it received on its proposed action levels. In 1986, EPA issued final notices in the Federal Register in which it revoked the tolerances for DDT, chlordane, and dieldrin in foods other than fish and recommended action levels to replace the tolerances. But in response to concerns over the level of risk that would still be allowed under the proposed action levels for residues of these three canceled pesticides in fish, EPA announced that it would wait to recommend action levels for fish until it could obtain updated residue data and assess the health effects of alternative action levels. Later, EPA added two other canceled pesticides—heptachlor and mirex—to its study of action levels for fish (see app. I). 
To determine whether action levels should be revised, EPA conducted a study in which it reviewed recent residue data and evaluated health risks, as prescribed in its 1982 policy. In addition, EPA evaluated the economic effects of lower action levels. To conduct its study, EPA obtained recent exposure information by collecting national and regional data on residues of DDT, chlordane, dieldrin, heptachlor, and mirex in fish and compiled a data base from tests of about 11,000 samples conducted between 1983 and 1987 by FDA, EPA regions, state agencies, and other federal agencies. Using these data, EPA estimated the risk of cancer to consumers of fish at various action levels. To assess the economic effects of alternative action levels for fish, EPA projected the percentage of the fish catch that would exceed lower action levels and estimated the costs to commercial fisheries of not being able to sell these fish. EPA’s analysis of the monitoring data showed that, for the five canceled pesticides, residue levels in fish generally appeared to be declining. However, in certain locations, residue levels appeared to remain constant or to increase over time. EPA attributed the apparent increases in residue levels in fish to the occasional stirring up of sediment, which releases residues into the water, or to methodological issues, such as variation in the size of the fish sampled and nonrepresentative sampling. But EPA said that action levels could be lowered to reflect the generally declining levels of these pesticides’ residues in fish. We examined FDA’s records of dieldrin detections in whitefish from 1984 through 1992 to determine the trends in residue levels. As shown in figure 2.1, FDA’s records indicate that residue levels declined from an average of 0.313 parts per million (ppm) in 1984 to 0.120 ppm in 1992. Since 1984, average dieldrin residue levels have been consistently below the current action level of 0.3 ppm.
However, FDA’s records also show that between 1990 and 1992, residues of dieldrin in fish remained steady or increased slightly. According to EPA officials, this slower rate of decline indicates that, despite the general decline in residues of canceled chlorinated pesticides in fish, these residues may continue to appear at significant levels in some fish for a number of years to come. EPA’s Office of Pesticide Programs calculated the risks of cancer at current action levels to consumers of average amounts of fish that are sold in interstate commerce and that contain average levels of the five canceled pesticides’ residues. These calculations, presented in table 2.1, were based on national and regional data collected by EPA from federal agencies, states, and EPA regions. EPA’s analysis of the data showed that, at current action levels for the five pesticides, the dietary risks of cancer exceed the agency’s usual standard of negligible risk (1 in 1 million). According to EPA, the figures shown in table 2.1 could either overestimate or underestimate risks, depending on the extent to which actual exposure differs from the assumptions used in the calculations (see note following table 2.1). For consumers of average amounts of fish sold in interstate commerce, the figures may significantly overstate risks. For example, for dieldrin, EPA said that if it had used less conservative assumptions for samples in which no residues were detected, the risks calculated for this pesticide would have been only 2 in 1 million rather than 100 in 1 million. According to EPA, more accurate estimates of risk were not possible using available data. But EPA also noted that the actual risks could be considerably higher than the average risks shown in the table for consumers of larger amounts of fish or of fish that are more highly contaminated with residues of these pesticides.
For example, although the calculations of risk in table 2.1 assume consumption of 15 grams of fish per day, EPA has estimated that typical recreational fishermen consume 30 grams per day and subsistence fishermen consume 140 grams per day, on average. These levels of consumption are about two to nine times greater than the levels EPA used to calculate the risks shown in table 2.1. Because risks are proportional to consumption, consumers of larger amounts of fish could be exposed to proportionately higher risks than are shown in the table. EPA computed the economic costs to commercial fisheries of implementing lower action levels, taking into account the estimated loss of nationally and regionally important fish species. The agency calculated a potential annual economic loss to commercial fisheries of either $74.3 million or $272.7 million, depending upon the action levels considered and assuming that FDA would identify and remove from commerce all fish that exceeded the action levels. For fish species considered to have national or local economic importance, EPA also estimated the percentage of fish that would exceed the current action levels and the lower action levels. These estimates indicated that a significantly greater percentage of fish would exceed the proposed lower action levels than would exceed the current action levels. For example, although none of the herring catch would exceed the current action level for dieldrin, 17 percent of the catch would exceed the lower action level for dieldrin that EPA proposed in 1991. Similarly, while 17 percent of the sablefish catch would exceed the current action level for DDT, 25 percent would exceed the proposed lower action level. 
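Because risk scales linearly with consumption, the two-to-ninefold figures above follow directly. A minimal sketch illustrates the scaling; the slope factor and residue level are hypothetical values chosen for illustration, and only the consumption rates (15, 30, and 140 grams per day) come from this report.

```python
# Minimal sketch of how dietary cancer risk scales with fish
# consumption. The potency and residue values below are hypothetical
# assumptions; only the consumption rates are from the report.

BODY_WEIGHT_KG = 70    # standard adult body weight assumption
SLOPE_FACTOR = 16.0    # hypothetical cancer potency, (mg/kg-day)^-1
RESIDUE_PPM = 0.1      # hypothetical average residue, mg per kg of fish


def lifetime_cancer_risk(grams_fish_per_day):
    """Excess lifetime risk ~ daily dose (mg/kg body weight/day) x potency."""
    dose = RESIDUE_PPM * (grams_fish_per_day / 1000.0) / BODY_WEIGHT_KG
    return dose * SLOPE_FACTOR


average = lifetime_cancer_risk(15)        # average consumer
recreational = lifetime_cancer_risk(30)   # typical recreational fisherman
subsistence = lifetime_cancer_risk(140)   # typical subsistence fisherman

# Risk is directly proportional to consumption: a recreational
# fisherman's risk is twice the average consumer's, and a subsistence
# fisherman's is about 9.3 times the average consumer's.
assert abs(recreational / average - 2.0) < 1e-9
assert abs(subsistence / average - 140 / 15) < 1e-9
```

Whatever potency and residue values are used, the ratios between consumers are fixed by the consumption rates alone, which is why GAO can state the two-to-nine-times relationship without specifying a particular pesticide.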
In 1991, after reviewing its data on residues of the five pesticides in about 11,000 fish samples, assessing the health risks of each pesticide, and calculating the economic costs to commercial fisheries of implementing alternative action levels, EPA sent FDA a draft proposal to lower the action levels for residues of DDT, chlordane, dieldrin, heptachlor, and mirex in fish. These lower action levels are presented in table 2.2 along with the current action levels used by FDA. EPA officials told us that these action levels represent the agency’s balancing of health and economic effects, taking into account the unavoidability of these residues in fish. One year after EPA sent the draft proposal, EPA and FDA officials met to discuss these action levels. According to EPA’s Deputy Assistant Administrator for Prevention, Pesticides, and Toxic Substances, staff from both agencies believed that residues of the five pesticides had been declining in the environment and that lower action levels would therefore be appropriate. FDA agreed to review its pesticide monitoring data to see how much residues had declined and whether its data could support the lower action levels proposed by EPA. In September 1992, FDA concluded, after reviewing its monitoring data for 1989 to 1991, that EPA’s lower action levels would greatly decrease the allowable catch from the Great Lakes and a number of southern and western states. FDA said that EPA would therefore have to demonstrate and document the need for the lower action levels to protect consumers and show that the lower levels took into account the unavoidability of residues in fish. In May 1994, the Director of FDA’s Contaminants Policy Staff told us that EPA did not justify its proposed action levels to FDA. He said that FDA believes that residues of canceled chlorinated pesticides in fish have generally declined since the action levels were originally established and that the action levels should be lowered to reflect this decline. 
But, according to the official, EPA’s proposed lower action levels were significantly lower than they would be if they were based only on declines in residues in fish. Therefore, FDA needed adequate justification to explain to consumers and commercial fisheries the basis for the stricter standards and their potential economic impact. In May 1994, the Director of EPA’s Pesticide Registration Division told us that because of budgetary constraints, EPA has no foreseeable plans to obtain additional documentation to satisfy FDA’s concerns. He said that EPA considers its data sufficient to justify the lower action levels. However, EPA has not formally recommended the lower action levels to FDA. Hence, despite their agreement that the action levels should be lowered, neither agency has taken the initiative to reach agreement on appropriate lower action levels. Although federal action levels are based on national rather than regional or local data, many states use the federal action levels as their basis for determining when to issue fish consumption advisories. In 1990, EPA’s Office of Water reported that two-thirds of the states (34) were using federal action levels as their basis for evaluating the safety of chemical contaminants in fish. Other states were using a risk-assessment approach derived from EPA’s criteria or had developed their own approach. According to a 1991 National Academy of Sciences report on seafood safety, fish caught for recreation or subsistence may pose greater health risks than fish sold in interstate commerce because such fish are more likely to be caught near areas contaminated with hazardous chemicals (including pesticides) and may be consumed in greater quantities by certain subpopulations. The Academy reported that recreationally harvested fish may represent over one-fifth of the fish consumed in the United States and that these fish are caught by an estimated 17 million recreational fishermen. 
The Academy noted that state regulatory agencies are almost exclusively responsible for issuing seafood health advisories. But it said that states depend heavily on federal guidance in regulating seafood, and this guidance may not take into account specific regional variations in seafood safety. The Academy suggested that “a more consistent and focused effort in the determination and communication of public health risks from contaminated seafood should be developed” and that “a more pronounced and consistently defined federal role in the risk characterizations leading to these advisories would be . . . .” In 1992, EPA’s Office of Water completed a study of chemical residues in fish that revealed widespread contamination by pesticides and other chemicals. Concerned about this contamination and about states’ inconsistent procedures for sampling fish and issuing fish consumption advisories, the Office of Water issued guidance to the states in 1993 and 1994 to assist them in developing a risk-based approach for monitoring fish and determining when fish advisories should be issued (see app. II for more detailed information on the study and guidance). Office of Water representatives told us that it was too soon to evaluate the impact of this guidance. They did not know of any states that had used the guidance to strengthen their monitoring standards for pesticides’ residues in fish. An Office of Water official also said that a number of states are not active in monitoring fish and issuing fish advisories, principally because they lack funding. He said that other states, such as New York, recognize that the federal action levels are designed for FDA’s regulation of fish in interstate commerce but nevertheless continue to use the federal action levels in their own regulatory programs.
A New York State environmental health official told us that although her agency believes that a risk-based monitoring approach would protect consumers’ health better than action levels, the agency is reluctant to move toward risk-based monitoring. The official explained that New York has a number of commercial fisheries whose catches are subject to FDA’s monitoring. Because pesticides and other chemicals in these fish do not generally exceed the federal action levels, the fish are sold legally in interstate commerce. The official said that, in view of FDA’s monitoring, the state believes that it would face an untenable position if it were to adopt more stringent risk-based monitoring standards for fish caught and consumed within the state. At the same time as the state was trying to justify stricter standards for fish that would be caught and consumed within New York, the official said, FDA would be allowing the same species of fish, with the same levels of chemical contamination, to be sold nationwide without any warnings or advisories. EPA’s Region V also noted that a number of states are reluctant to apply different state and federal standards in monitoring the safety of pesticides’ residues in fish. The region is concerned that if the action levels for pesticides’ residues in fish are not lowered, then the states will not issue more protective fish consumption advisories. The Director of FDA’s Contaminants Policy Staff agreed that states might have difficulty explaining differences between federal and state monitoring levels to local consumers and commercial fisheries. Nevertheless, he said that FDA could not set enforcement limits for local conditions because the action levels enforced by FDA apply nationwide to fish in interstate commerce. The Director noted that a state or locality might issue guidance on the amount of contaminated fish that consumers might eat without appreciable risk to their health. 
Although EPA believes that residues of the five persistent pesticides that it studied do not appear in most foods at significant levels, it has found that they appear in some fish at relatively high levels. The current action levels for these pesticides in fish, which FDA established about two decades ago, are based on residue levels found in the environment at that time. They have not been adjusted to reflect health risk assessments or subsequent declines in residue levels. Consequently, they are not consistent with the 1982 policy that calls upon EPA, when recommending action levels, to assess health risks as well as unavoidable residues and to revise its recommendations periodically as residue levels decline. The action levels that EPA proposed to FDA in 1991 are based on EPA’s assessment of residue data, health risks, and economic effects. Hence, these action levels were developed in accordance with the requirements of the 1982 policy. Both EPA and FDA agree that the action levels should be lowered but disagree on the extent to which they should be lowered on the basis of available data. We do not believe that the differences between EPA and FDA over the sufficiency of EPA’s data should block attempts by the agencies to reach agreement on appropriate action levels. Reaching agreement on appropriate action levels would help to ensure that consumers of both federally monitored and state-monitored fish are being adequately protected. To protect consumers from unreasonable exposure to the residues of canceled pesticides, we recommend that the Administrator of EPA and the Commissioner of FDA work together to determine, on the basis of the most recent data, the appropriate action levels for residues of the five canceled chlorinated pesticides in fish. We also recommend that the Administrator of EPA periodically reevaluate and lower action level recommendations to reflect decreases in environmental residue levels. 
EPA and FDA officials generally agreed with the information presented in this chapter but suggested a number of technical and editorial changes that we incorporated where appropriate. In particular, EPA officials believed that our presentation of EPA’s data on health risks posed by residues of canceled chlorinated pesticides in fish overstated the health risks. We revised our presentation of EPA’s data to highlight the uncertainties in the data and to include only the information that EPA considered to be the most valid. We also discussed with these officials the potential effectiveness of the actions that we recommend in this report. The EPA officials agreed that actions such as we recommend are necessary to resolve the problems we identified in connection with action levels. The FDA officials agreed with the thrust of our recommendations on action levels. In canceling the registrations for many food-use pesticides during the past two decades, EPA has not concurrently revoked the related tolerances for these pesticides. Although EPA has recently taken action to revoke the tolerances for some older canceled pesticides, an undetermined but potentially large number of canceled pesticides still have tolerances. On average, EPA has taken over 6 years to revoke a pesticide’s tolerances after canceling the pesticide’s registrations. Although part of this delay is intended to allow food treated with remaining stocks of a canceled pesticide to clear the channels of trade, a greater part is attributable to the low priority that EPA has assigned to revocation and to the absence of procedures linking revocation to cancellation. As long as the tolerances for canceled pesticides remain in effect, foods containing allowable amounts of these pesticides’ residues can legally enter the U.S. food supply. FDA and USDA cannot consider such foods adulterated and cannot take enforcement action against them. 
Over the past few years, EPA has stepped up efforts to revoke the tolerances for pesticides whose registrations for food uses were, for the most part, canceled during the 1980s. As of July 1994, EPA had revoked the tolerances for 50 canceled pesticides and had formally proposed to revoke the tolerances for 31 canceled pesticides (see apps. III and IV). According to EPA officials, these revocations have dealt with the major pesticides that pose a dietary risk to the public, such as DDT, chlordane, and toxaphene. Most of these revocation actions occurred during the past 2 years. EPA has not been able to determine how many more canceled pesticides have tolerances that should be revoked because its data bases do not identify all pesticides whose registrations for some or all food uses have been canceled. But an EPA official responsible for revocations estimated that over 100 pesticides may fall into this category and that hundreds of associated tolerances remain in effect for these canceled pesticides. This official believes that the food-use registrations for most of these pesticides have been canceled for over 2 years. In the past, EPA tried to define the universe of canceled pesticides that still had tolerances to be revoked. For example, in early 1992, EPA identified 98 pesticides whose tolerances it considered “probable” candidates for revocation. Over 2 years later, EPA has not begun to revoke the tolerances for nearly half of these pesticides, and the agency is again attempting to identify the canceled pesticides that still have tolerances to be revoked. Though difficult to determine because of deficiencies in EPA’s data bases, the number of canceled pesticides that still have tolerances may be sizable. From EPA’s List A—the group of pesticides assigned the highest priority for reregistration—we identified 10 pesticides whose registrations for food uses were all canceled between 3 and 13 years ago. 
There are 185 tolerances that remain in effect for these canceled pesticides. For one of the pesticides, cyhexatin, all of the registrations were canceled voluntarily by its manufacturers in 1987 after EPA considered initiating a special review in response to concerns over the pesticide’s potential to cause birth defects. However, as of July 1994, EPA had not begun to revoke cyhexatin’s tolerances. As of April 1994, EPA had evaluated approximately 100 out of about 600 active ingredients that are undergoing reregistration. As it continues its evaluations through reregistration and special review, the registrations for other pesticides are likely to be canceled and their tolerances will then need to be revoked. Since 1982, when it issued its policy on revoking tolerances, EPA has taken over 6 years, on average, to revoke a pesticide’s tolerances after canceling the pesticide’s registrations. Typically, the agency has allowed some time—usually about 2 years—for remaining stocks of a pesticide to be used and for products legally treated with the pesticide to move through commerce. But most of the delay in revocation can be attributed to both the low priority that EPA has assigned to revocation and the absence of procedures linking revocation to cancellation. The regulatory history of the pesticide bufencarb illustrates this pattern: Although EPA had canceled all registrations for food uses of bufencarb by April 1986, it did not propose to revoke the related tolerances until August 1992, and it did not complete their revocation until June 1993, over 7 years later. EPA assigns low priority to revoking tolerances, in part because revocation is not required by law. When the agency first canceled registrations for food-use pesticides during the 1970s, it had no mandate or guidance directing it to revoke the related tolerances. 
Although EPA’s 1982 policy established the principle that revocation should coincide with cancellation, it created no time frames for revocation or impetus for linking revocation to cancellation. EPA officials told us that the agency assigns low priority to revoking tolerances because many of the food-use pesticides have been canceled voluntarily or pose little or no dietary risk. But the fact that a pesticide has been canceled voluntarily does not necessarily mean that it poses no dietary risk. In addition, EPA has delayed revocation for almost all pesticides, including those considered to pose dietary risks. For example, long-term exposure through the diet to the pesticide heptachlor could cause cancer in humans, according to EPA, yet the agency revoked the tolerances for this pesticide more than 11 years after canceling its registrations for food uses. The low priority assigned to revocation is reflected in the limited resources allocated to it. The EPA official responsible for revocation said that none of the 13 staff in her unit works full-time on revocation actions and slightly fewer than 3 full-time-equivalent staff per year are allocated for such actions. According to this official, her unit has so many other higher-priority responsibilities that it can handle only a limited number of revocation actions at one time. While recognizing that the tolerances for many other canceled pesticides still required revocation, she said that she currently did not have the time available to identify these pesticides. Most of the EPA personnel involved in revocation told us that revocation activities have a lower priority than their other responsibilities. To conserve the limited resources that it has allocated to revocation, EPA usually delays the revocation of a pesticide’s tolerances until it has canceled all of the pesticide’s registrations for all of its food uses. Thus, it avoids taking multiple revocation actions for a single pesticide. 
For example, EPA canceled all registrations for insecticidal uses of the pesticide sodium arsenite in 1988 because of concerns about the pesticide’s toxic effects on workers and the general public. But the agency did not propose to revoke the tolerances for these uses until the registration for one remaining food use—as a fungicide on grapes—was canceled in 1992, about 4 years later. Despite its policy supporting a link between revocation and cancellation, EPA has not developed written procedures or guidelines specifying when it should revoke a pesticide’s tolerances for canceled food uses. As a result, the agency has taken anywhere from a few months to over 14 years to revoke the tolerances for individual pesticides. Without such procedures, EPA is under no pressure to revoke tolerances in a predictable or consistent way. In addition, EPA has no written procedures or guidelines requiring the officials responsible for handling cancellations to notify the officials responsible for revoking the related tolerances of any cancellations. Consequently, the personnel responsible for revocations often do not receive information about cancellations on a timely and consistent basis and in a standard format that provides all of the information needed to revoke tolerances. Without reliable channels of information and communication between the personnel responsible for cancellations and the personnel responsible for revocations, EPA cannot effectively implement its policy linking revocation to cancellation. EPA officials acknowledge that the agency’s current process for revoking tolerances takes too long and is inefficient. According to the officials, the establishment of a process linking revocation to cancellation would help ensure that the tolerances for canceled pesticides are revoked in a timely manner. 
Furthermore, the officials said that taking revocation and cancellation action concurrently would generally result in the more efficient use of EPA’s resources than the current revocation process. They emphasized that although the revocation action could occur at the same time as the cancellation action, EPA would need, when establishing an effective date of revocation, to give growers enough time to use existing stocks of the canceled pesticide. As long as the tolerances for residues of canceled pesticides remain in effect, FDA and USDA can do nothing to prevent foods containing allowable amounts of these chemicals from entering the U.S. food supply. Although a pesticide’s residues usually decline or disappear in domestic foods within a few years after the pesticide’s registrations have been canceled—except when the pesticide persists in the environment—the pesticide’s residues may continue to appear in imported foods. U.S. and foreign manufacturers may continue to sell the canceled pesticide for use on crops abroad, and the food grown abroad may be sold in the United States as long as the pesticide’s residues do not exceed the tolerances. Determining the extent to which canceled pesticides that still have tolerances appear in foods is difficult, not only because EPA has not identified all of these pesticides but also because FDA monitors residues selectively. Nevertheless, in reviewing FDA’s 1992 monitoring data, we found that FDA had detected demeton, a pesticide whose registrations were canceled nearly 5 years ago, seven times on four different food commodities that still had tolerances. Because the tolerances for demeton had not been revoked, FDA did not consider the foods containing the pesticide to be adulterated. Similarly, assessing the health risks posed by canceled pesticides that still have tolerances is difficult. 
Information on the health risks of many canceled pesticides is limited because EPA’s data bases are incomplete or because the registrant voluntarily canceled the registrations before EPA finished assessing the pesticide’s health risks. For example, after the registrations for food uses of the pesticide tetrachlorvinphos (a possible human carcinogen) were canceled in 1987, EPA stated that it did not have sufficient scientific data to evaluate the safety of the pesticide’s tolerances—which were established under less stringent scientific standards in the early 1970s. Despite this uncertainty about the pesticide’s dietary risks, EPA did not propose to revoke the pesticide’s tolerances until 1994—about 7 years after the associated food-use registrations were canceled. For some canceled pesticides that still have tolerances, available data indicate a potential to harm humans. For example, the pesticide monocrotophos is a potent cholinesterase inhibitor (linked to nervous system problems) and is toxic to fetuses. Another pesticide, captafol, is classified by EPA as a probable human carcinogen. Both of these pesticides were voluntarily canceled 6 or more years ago, but EPA has not yet revoked their tolerances, although it proposed to do so in June 1993. Because EPA has not consistently or expeditiously revoked the tolerances for many canceled pesticides, residues of these pesticides have been allowed to appear legally in the food supply, often for many years after the pesticides’ domestic uses were prohibited. Although EPA believes that it has revoked the tolerances for most of the older, higher-risk canceled pesticides, it still needs to identify and revoke the tolerances for a substantial number of other canceled pesticides. By revoking the tolerances for canceled pesticides and, in the future, conducting revocation and cancellation actions concurrently, EPA would be acting consistently with its 1982 policy statement. 
In addition, it would be streamlining its process for revoking tolerances and eliminating the potential for residues of the same pesticides that it has deemed unacceptable for use on crops to appear legally in food. To expedite the revocation of tolerances for canceled pesticides and make more efficient use of scarce resources, we recommend that the Administrator, EPA, (1) establish procedures for concurrently conducting tolerance revocation and cancellation actions and, when necessary, set an effective date for revocation that gives growers enough time to use existing stocks of the canceled pesticide and (2) identify the pesticides whose registrations for food uses have already been canceled and revoke their tolerances. EPA officials generally agreed with the information presented in this chapter but suggested a number of technical and editorial changes that we incorporated where appropriate. The officials agreed that actions such as we recommend are necessary to resolve the problems we identified regarding tolerance revocations.
|
Pursuant to a congressional request, GAO reviewed: (1) whether marketed foods contain unsafe levels of residues from cancelled pesticides; and (2) the Environmental Protection Agency's (EPA) procedures for revoking tolerances for cancelled food-use pesticides. GAO found that: (1) EPA believes that most marketed foods do not contain unsafe levels of residues from cancelled pesticides, since most pesticides do not persist in the environment for very long; (2) residues from a particular class of cancelled pesticides do persist, particularly in fish, and pose a health risk to some consumers over their lifetimes; (3) in 1991, EPA proposed lower action levels for five cancelled pesticides in fish to reflect the decline in actual residue levels; (4) the reduced action levels have not been implemented because the Food and Drug Administration (FDA) believes that EPA has not fully demonstrated the need for lower action levels; (5) many state monitoring programs would be affected by lower action levels, since they use federal standards in issuing fish consumption advisories; (6) EPA has taken over 6 years to revoke tolerances for cancelled pesticides; (7) the process for revoking tolerances takes too long and makes inefficient use of scarce resources; (8) linking tolerance revocations to pesticide cancellations would be more efficient and would reduce consumers' exposure to pesticide residues in imported food; and (9) although EPA has made progress in revoking tolerances for cancelled pesticides, its revocation backlog is expected to increase because of additional pesticide registration cancellations.
|
Down syndrome is most frequently caused by a chromosomal error that produces an extra copy of chromosome 21. The extra chromosomal material causes children with Down syndrome to have mental and physical differences and a greater risk of developing certain medical problems, such as hearing loss, eye disease, and congenital heart defects. (See table 1.) Because of this heightened risk, the American Academy of Pediatrics recommends that children with Down syndrome be closely screened throughout childhood for certain medical conditions. The overall well-being of some families of children with Down syndrome can be affected by the special needs that their children may have. Research shows that these families experience more stress than families of typically developing children. In addition, according to the NS-CSHCN, 21 percent of families of children from birth through age 17 with Down syndrome in the United States needed mental health care or family counseling in the previous year, and 26 percent experienced financial problems as a result of their child’s health care issues. Research shows that families can benefit from family support resources, such as parent support groups where information and stories can be informally exchanged. In fact, connecting a new parent to other parents, such as through a parent support group, has been shown to be among the most helpful resources a physician can provide during the first conversation. Research has shown that families of children with Down syndrome do not receive enough accurate information and emotional support at the time of diagnosis and as the child ages. A 2005 study that surveyed 985 mothers who received a postnatal diagnosis of Down syndrome for their children indicated that when they learned of their child’s diagnosis their physicians had not provided them with a satisfactory amount of up-to-date printed materials or telephone numbers of parents who already had a child with Down syndrome. 
Another study found that families received some information from health care providers that they perceived as vague, inaccurate, or outdated. Although there are studies such as these and other initiatives that focus on the first conversation between the health care provider and the family, there is very little research addressing subsequent conversations between the health care provider and the family as the child ages. Down syndrome clinics, which are usually located in larger cities across the United States, are a source of specialty medical care for children with Down syndrome. Pediatricians and family physicians vary widely in terms of their experience treating children with Down syndrome and refer patients to Down syndrome clinics as needed. The Down syndrome clinics are typically associated with medical schools or large hospitals and may include geneticists, developmental pediatricians, therapists, nutritionists, nurse practitioners, and genetic counselors. Families may visit these clinics on an annual basis to assess their child’s development and to ensure that any health conditions have been properly diagnosed. Families of children with multiple medical problems may visit these clinics more frequently to ensure that their child is receiving appropriate specialty care. In addition to caring for children, these clinics also support families by, for example, providing information about Down syndrome and referring families to community resources. According to a physician at a Down syndrome clinic, it is recommended that children with Down syndrome visit a Down syndrome clinic three to four times in the first year of life, two times in the second year of life, and annually every year after that, if needed. Children with Down syndrome may also receive early intervention services—administered by state-level agencies—beginning at birth and continuing until the age of 3. 
The Individuals with Disabilities Education Act Part C program was created to provide infants and toddlers who have disabilities (or are at risk of developing a disability) and their families with early intervention services, such as speech therapy, occupational therapy, and family counseling. We previously reported on research that found that the earlier a child with disabilities receives early intervention services, the more effective these services may be in enhancing the child’s development. Parents may be referred to early intervention programs by their child’s doctor, or they may seek out these services themselves. There is also a widespread network of advocacy groups to support children with Down syndrome and their families. In addition to numerous national disability organizations, there are two national Down syndrome-specific organizations with over 300 local advocacy groups located across the country. They range in size from small parent support groups to larger organizations that provide services to families and their children. Advocacy groups support children with Down syndrome and their families by, for example, organizing activities for children, serving as information resources, and offering parent support groups. From birth through early childhood, children with Down syndrome received, on average, five times more outpatient care and over two times more office-based care than children without Down syndrome, according to our analysis of data from a private health insurance company. For children under 1 year of age, the average number of outpatient services was 10.4 for children with Down syndrome and 1.9 for children without. Similarly, the average number of office-based services for children under 1 year of age was 20.0 for children with Down syndrome and 10.7 for children without. As children with and without Down syndrome moved through early childhood, both groups received more office-based services than outpatient services. 
However, while the amounts of outpatient and office-based services decreased over time, the differences in the amounts of outpatient and office-based services between the two groups remained. (See fig. 1.) Across all types of services, children with Down syndrome from birth through age 4 received more outpatient and office-based services than children without. (See fig. 2.) For example, for both outpatient and office-based services, children with Down syndrome had more evaluation and management services, more medical procedure services, and more therapy services. Specifically, for outpatient services, children with Down syndrome had 3 times more evaluation and management services, 10 times more medical procedure services, and 22 times more therapy services than children without. For office-based services, children with Down syndrome had 2 times more evaluation and management services, 2 times more medical procedure services, and 25 times more office-based therapy services than children without. In addition, children with Down syndrome had 6 times more outpatient anesthesiology and surgery services than children without. We found other differences within the types of services received, such as greater percentages of children with Down syndrome receiving services such as thyroid, cardiac, and hearing tests than other children. For example, our review of the outpatient services found that 21 percent of children with Down syndrome under 1 year of age had a specific thyroid function test, compared to 1 percent of other children of the same age. In addition, children with Down syndrome were more likely than other children to receive an influenza vaccination; for example, 30 percent of 4-year-olds with Down syndrome received the influenza vaccine, compared to 15 percent of other children of the same age. 
A key difference in the amount of outpatient and office-based care received by children with Down syndrome and other children was the difference in the amount of therapy services received. Our analysis of therapy usage showed that the percentage of children with Down syndrome who received physical, occupational, and speech therapy—therapies that Down syndrome specialists say are important for children with Down syndrome to receive to maximize their development—was much higher than it was for other children. For example, 50 percent of children with Down syndrome, birth through age 4, received physical therapy services, compared to 3 percent of other children. This represented an average of 30 physical therapy claims per child with Down syndrome, compared to an average of less than 1 physical therapy claim per child without. This difference in the number of children who received therapy services was evident in each age group and for each therapy type. (See fig. 3.) The Medicaid data that we reviewed from seven states also show that children from birth through age 4 with Down syndrome who were enrolled in Medicaid in 2007 received more outpatient and office-based care to address their special health care needs than other children of the same age. For example, among the seven states, children with Down syndrome received 2.7 to 5.3 times more outpatient services and 1.6 to 4.5 times more office-based services than children without Down syndrome. (See app. I for more Medicaid data.) According to our analysis of inpatient care data from a large private health insurance company, from birth through early childhood, children with Down syndrome were hospitalized, on average, nearly twice as often and stayed twice as long as children without Down syndrome. The differences in the average number of hospitalizations and the average length of stay were most pronounced in the first years of life and diminished by age 4. (See figs. 4 and 5.) 
For example, for children with Down syndrome under 1 year of age, the average number of hospitalizations was 2.2, and the average length of stay was 7.6 days. In contrast, for children of the same age without Down syndrome, the average number of hospitalizations was 1.1, and the average length of stay was 2.1 days. In an older group—children 4 years of age—children with and without Down syndrome were hospitalized about the same number of times, an average of 1.3 times for children with Down syndrome and an average of 1.2 times for children without, and for about the same length of time, an average of 2.0 days for children with Down syndrome and 1.7 days for children without. Our review of inpatient claims data showed some differences in the types of hospitalizations for children with Down syndrome compared to other children. For example, the most common type of hospitalization for children with Down syndrome under 1 year of age was cardiothoracic-related surgery; 6 percent of children under 1 year of age with Down syndrome had this hospitalization type, compared to 0.03 percent of other children. Furthermore, while other hospitalization types—such as bronchitis and asthma, pneumonia, and ear issues—appeared as common types of hospitalizations in both groups, the percentage of children with Down syndrome hospitalized for these reasons was higher. The Medicaid data that we reviewed from seven states also show that children from birth through age 4 with Down syndrome who were enrolled in Medicaid in 2007 generally had more inpatient care. Children with Down syndrome had more hospitalizations (in six of the seven states) and longer hospital stays to address their special health care needs than other children of the same age. For example, among the seven states, children with Down syndrome had 1.0 to 7.4 times more hospitalizations and 1.5 to 10.2 times longer stays than children without Down syndrome. (See app. I for more Medicaid data.) 
In our review, the total average medical expenditures for children with Down syndrome, from birth through early childhood, were an average of five times higher than the expenditures for children without Down syndrome; however, both total expenditures and the difference in expenditures decreased substantially by the time children with Down syndrome were 3 years of age. (See fig. 6.) The expenditures were also higher for children with Down syndrome for each type of medical care—outpatient, office-based, and inpatient care. Inpatient care for children under 1 year of age had the greatest difference, with average expenditures of almost $43,000 for children with Down syndrome and $2,000 for children without. The difference in expenditures reflects the fact that children with Down syndrome had a higher utilization of medical care or more expensive medical services than children without Down syndrome. Down syndrome advocacy groups in selected communities told us that families in those communities were likely to receive many, but not all, of the resources that Down syndrome clinic specialists recommended they receive at the time of diagnosis. The specialists from six Down syndrome clinics we interviewed recommended 32 resources. (See table 2.) Advocacy groups reported that families were likely to receive about two-thirds (20 of 32) of the recommended resources; these resources were generally directly related to the health of children with Down syndrome, such as information about the risk of cardiac problems and the need for thyroid screening. Families were less likely to receive about one-third (10 of 32) of the recommended resources; these resources were generally related to the family’s understanding of Down syndrome and overall family well-being, such as a copy of the Down syndrome-specific health care guidelines and information about the causes of Down syndrome and the effect of Down syndrome on the family and caregivers. 
The time of diagnosis is a key time for children with Down syndrome and their families. According to the Down syndrome clinic specialists, if newborns are not tested for certain medical conditions immediately after diagnosis, serious and even life-threatening consequences can occur. In addition, specialists from one Down syndrome clinic noted that this is a key time for families to be given information to help them understand how their child’s diagnosis may affect their family. However, according to the Down syndrome clinic specialists, families can be overwhelmed if too much information is presented at the time of diagnosis, especially if they are already overwhelmed emotionally and psychologically from receiving the diagnosis. All of the specialists we interviewed at the six Down syndrome clinics agreed that if families do not receive resources recommended for the time of diagnosis, the health consequences for the child could be severe. For example, if a newborn’s heart defect is not detected early, he or she may experience serious complications and even death in the first days or weeks of life. If a newborn’s hypothyroidism—which can be easily treated—is not detected early, he or she may experience additional cognitive impairment or other complications. If a family is not provided with a copy of the Down syndrome-specific health care guidelines, they may not be fully aware of the health risks their child may face, and they may be less effective advocates. (See table 2 for these and other health consequences that may occur if these resources are not received by families.) Advocacy groups told us that if there were gaps in the resources that families received from their health care providers upon diagnosis, advocacy groups and other community organizations sometimes provided the missing material. 
For example, advocacy groups sometimes drop off “New Parent Packets” at area hospitals that include the Down syndrome-specific health care guidelines and information about what Down syndrome is and how it can affect the family. Advocacy groups also offer family support groups, including groups geared specifically toward grandparents and fathers, and host seminars on financial planning. In contrast to the time of diagnosis, Down syndrome advocacy groups in selected communities told us that families of children with Down syndrome in those communities were less likely to receive most of the recommended resources from their health care providers for early childhood. These resources are important to their children’s ongoing health and the well-being of their families. The specialists from six Down syndrome clinics we interviewed recommended 23 resources that families should receive through their health care providers after diagnosis and throughout early childhood. (See table 3.) Advocacy groups reported that families were likely to receive only about one-quarter (6 of 23) of these resources. For example, resources that families were likely to receive included information about the need to screen for celiac disease, the need for vision screening, and the risk for upper respiratory infections. But families were less likely to receive about three-quarters (17 of 23) of the resources recommended for early childhood. For example, families were less likely to receive information about the need to see a pediatric dentist, how to prevent obesity, and the importance of communicating with their child. In addition, families were less likely to receive a copy of a Down syndrome-specific growth chart. According to the Down syndrome clinic specialists, some information is most useful if provided in early childhood rather than at the time of diagnosis. 
For example, information about celiac disease is not necessary at diagnosis because it usually is not detectable until the child has begun eating solid foods. According to the clinic specialists, if families do not receive the resources recommended for early childhood, there may be health consequences for the child. For example, if a child’s poor vision is not detected, he or she may develop permanent vision loss. Similarly, if a child’s celiac disease is not treated, the child’s growth may be affected and he or she may develop diarrhea, constipation, and behavioral changes. (See table 3 for these and other health consequences that may occur if these resources are not received by families.) Advocacy groups told us that if there were gaps in the resources that families received from their health care providers in early childhood, advocacy groups and other community organizations sometimes provided the missing material. For example, one advocacy group initiated a support group for families of children with Down syndrome who also have other medical conditions, such as autism. In addition, advocacy groups provide social development opportunities for children with Down syndrome by hosting playgroups, providing information about the Special Olympics to families, and sponsoring members to attend national and state conferences. Some community organizations also offer social opportunities for children, including children with Down syndrome, such as baseball leagues and swimming classes. According to Down syndrome advocacy groups, families in their communities may face barriers that can prevent them from using available resources, which can have a significant impact on the child and the family. (See table 4.) For example, barriers such as outdated or inaccurate information may lead parents to have a limited understanding of their child’s Down syndrome diagnosis and, as a result, underestimate their child’s potential. 
Important resources, such as early intervention therapy services and parent support groups, can be out of reach for some families who face barriers. For instance, advocacy groups identified barriers related to difficulty communicating in English, a lack of transportation, lengthy travel times to appointments (because of distance to resources or geographic location), or busy work schedules (which prevent them from accessing certain resources, such as early intervention therapy services and doctor appointments, that may only be available during the workweek). Furthermore, advocacy groups mentioned that culture can be a barrier to accessing resources. For example, in some communities, parents of children with Down syndrome from other countries were reluctant to seek resources because of concerns about their community’s social acceptance of people with Down syndrome.

Results of the 2005-2006 NS-CSHCN also showed that families of children with Down syndrome may have trouble accessing needed services. The survey indicated that of the families of children with Down syndrome, birth through age 17, in the United States who needed a referral in the previous 12 months, an estimated 24 percent had problems obtaining referrals. Similarly, of the families whose children needed physical, occupational, or speech therapy in the previous 12 months, 18 percent of their children did not receive all needed therapies. In addition, 16 percent of families of children with Down syndrome reported that they faced barriers using needed resources in the previous 12 months. Some of the most commonly cited barriers were as follows:

- not getting services when their child needed them,
- not getting needed information,
- having problems finding service providers with needed skills,
- not having the types of services their child needed in their area, and
- having problems in communication between service providers.
Except for problems in communication between service providers, each of these barriers was also mentioned in our interviews with advocacy groups. Some advocacy groups reported that they and their communities have made efforts to address some of the barriers faced by families related to inaccurate information, financial issues, language, and transportation. To address issues of inaccurate information, one advocacy group initiated an educational outreach program to health care professionals at area hospitals to share important information about Down syndrome, including contact information for local support groups and suggestions for giving a Down syndrome diagnosis to a family. Some advocacy groups made efforts to address financial issues; for example, some advocacy groups arranged for financial advisors to speak to parents at workshops. In addition, some advocacy groups made efforts to address language barriers by translating materials into Spanish and having a staff person available who spoke Spanish. Finally, several advocacy groups told us that they were taking steps to address barriers related to transportation. For example, an advocacy group located in an urban area established four satellite community groups in outlying areas so that families could access resources without driving into the city. We provided a draft of this report to the Secretary of Health and Human Services for comment. In response, the Department of Health and Human Services (HHS) provided us with general comments, which are reprinted in appendix II, and technical comments that we incorporated as appropriate. In its general comments, HHS indicated that our report “presents a thorough summary of the current practices and the successes and challenges faced by children with Down syndrome and their families.” HHS emphasized the importance of early intervention services in maximizing children’s long-term development. 
The agency also suggested that cost-benefit analyses, which were beyond the scope of this review, could inform decisions about providing health care services to children with Down syndrome. HHS also suggested that we compare the results of the data analyses from the private health insurance data, the Medicaid data, and the NS-CSHCN data. As we noted earlier in this report, detailed comparisons across the private health insurance and Medicaid data would not be appropriate because of differences in the underlying insurance coverage. Finally, HHS suggested that we provide population sizes for the data sets analyzed, which we have done.

We are sending a copy of this report to the Secretary of Health and Human Services. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Figures 7 through 10 show that among children enrolled in Medicaid in 2007, children with Down syndrome from birth through age 4 received more medical care than children without Down syndrome in the seven states in our study. Children with Down syndrome had more outpatient and office-based services than children without Down syndrome in each state we reviewed. (See fig. 7.) In addition, children with Down syndrome generally had more hospitalizations and a longer average length of stay than children without Down syndrome. (See figs. 8 and 9.) Medicaid expenditures were higher for children with Down syndrome than for children without Down syndrome for outpatient, office-based, and inpatient care. (See fig. 10.)

In addition to the contact named above, Jenny Grover, Assistant Director; Julianne Flowers; Rich Lipinski; Sarah-Lynn McGrath; Julie E.
Pekowski; Roseanne Price; Laurie F. Thurber; Karin J. Wallestad; and Jennifer Whitworth made key contributions to this report.
|
On October 8, 2008, the Prenatally and Postnatally Diagnosed Conditions Awareness Act was signed into law, requiring GAO to submit a report concerning the effectiveness of current health care and family support programs for the families of children with disabilities. In this report, GAO focused on Down syndrome because it is a medical condition that is associated with disabilities and occurs frequently enough to yield a sufficient population size for an analysis. GAO examined (1) what is known about the extent to which children with Down syndrome receive medical care during early childhood and (2) what resources families of children with Down syndrome receive through their health care providers and what barriers families face to using these resources. GAO analyzed fee-for-service claims data from a very large private health insurance company (specifically, the claims representing its experience with one of the largest national employers) and Medicaid claims data from seven states with high Medicaid enrollment and low percentages of enrollees in Medicaid managed care. GAO also interviewed specialists at six prominent Down syndrome clinics and 12 advocacy groups to examine what resources families receive and to identify barriers they face. GAO also analyzed data from the Health Resources and Services Administration-sponsored 2005-2006 National Survey of Children with Special Health Care Needs on barriers to accessing needed services.

GAO's analysis of data from a very large private health insurance company showed that from birth through early childhood, children with Down syndrome received medical care to address their special health care needs. Specifically, children with Down syndrome received, on average, five times more outpatient care (such as care in an urgent care facility) and over two times more office-based care (such as care in a physician's office) than children without Down syndrome. Overall, both groups received more office-based care than outpatient care.
A key difference in the amount of care received was in therapy services, with a greater percentage of children with Down syndrome receiving physical, occupational, and speech therapy. In addition, children with Down syndrome have an increased risk of certain medical conditions and were hospitalized, on average, nearly twice as often and stayed twice as long as other children. Not surprisingly, differences were also found in medical care expenditures. Total medical expenditures for children with Down syndrome were, on average, five times higher than those for other children. However, both total expenditures and the difference in expenditures decreased substantially as the two groups of children reached 3 years of age. GAO's analysis of Medicaid claims data found similar differences between the two groups.

Down syndrome advocacy groups in selected communities told GAO that families of children with Down syndrome in those communities were more likely to receive the resources recommended for the time of diagnosis than those recommended for early childhood and may face barriers to using available resources. Specifically, advocacy groups reported that families were likely to receive about two-thirds (20 of 32) of the resources that specialists at the six Down syndrome clinics recommended they receive through their health care providers at the time of diagnosis. However, families were likely to receive only about one-quarter (6 of 23) of the resources that specialists recommended they receive through their health care providers after diagnosis and throughout early childhood. In addition, advocacy groups and results from the National Survey of Children with Special Health Care Needs indicate that families may face barriers that can prevent them from using available resources. For example, barriers such as outdated or inaccurate information could lead parents to underestimate their child's potential.
Some advocacy groups reported that they and their communities have made efforts to address some of these barriers. For example, to address issues of inaccurate information, one advocacy group initiated an educational outreach program to health care professionals at area hospitals. GAO provided a draft of this report to the Department of Health and Human Services for comment. It generally agreed with GAO's findings and noted that the report provides a thorough summary of the current practices and the successes and challenges faced by children with Down syndrome and their families.
|
As Southeast Asian countries, Indonesia and Vietnam are in a region of growing economic power. ASEAN, to which both countries belong, is seeking to form an economic community by the end of 2015 that would deepen economic integration among the 10 ASEAN member states (see fig. 1). World Bank data show that from 2000 through 2014, the collective real gross domestic product (GDP) of ASEAN countries increased by approximately 98 percent. According to International Monetary Fund (IMF) data, if the ASEAN countries were a single nation, their collective GDP in 2014 would represent the seventh-largest economy in the world. ASEAN countries are also important strategically, in part because they are located astride key sea lanes between the Persian Gulf and the economic centers of East Asia. On the basis of a 2011 United Nations (UN) Conference on Trade and Development Review of Maritime Transport, the U.S. Department of Energy estimated that more than half of the world’s annual merchant fleet tonnage passed through the South China Sea, which is bordered by Indonesia and Vietnam.

According to data from the World Bank, Indonesia’s real GDP increased by around 108 percent from 2000 to 2014. However, the World Bank estimated that in 2011, 16 percent of Indonesians lived below the poverty line of $1.25 per day. Indonesia is the world’s fourth-largest country by population. The United States established diplomatic relations with Indonesia in 1949, after Indonesia gained independence from the Netherlands. According to State, Indonesia’s democratization and reform process since 1998 has increased its stability and security and resulted in strengthened U.S.-Indonesia relations. In 2010, the United States and Indonesia officially launched the United States–Indonesia Comprehensive Partnership to broaden, deepen, and elevate bilateral relations between the two countries on a variety of issues, including economic and development cooperation. However, according to U.S.
agencies, the U.S.-Indonesia bilateral relationship continues to face significant challenges because of Indonesia’s implementation of protectionist laws, limited infrastructure, and unevenly applied legal structure. U.S. agencies’ stated goals for Indonesia include supporting the facilitation of U.S. trade and investment between the two countries. The U.S. Embassy in Indonesia is located in Jakarta, with U.S. consulates in Surabaya and Medan and a U.S. consular agency in Bali. China and Indonesia have a long-standing history of trade and interchange. The two countries established diplomatic relations in 1950, 5 years after Indonesia gained independence from the Netherlands. Relations between China and Indonesia were suspended in 1967, after the Indonesian government suspected China of complicity in planning a 1965 coup, but were restored in 1990. Since then, trade and economic relations between the two countries have grown rapidly and in 2013, both countries agreed to elevate bilateral relations to a comprehensive strategic partnership. The partnership seeks to strengthen cooperation in several key areas, including trade, investment, and economic development. In 2015, the countries reaffirmed their support of the partnership and agreed, among other things, to expand market access and two-way investment for firms and to deepen their infrastructure and industrial cooperation. In April 2015, the Presidents of China and Indonesia released a statement setting a bilateral trade target of $150 billion by 2020—an increase of $70 billion from the 2015 target of $80 billion. The two Presidents stated that they will work toward the reduction of tariff and nontariff trade barriers and increase the frequency of trade missions between the two countries. China maintains an embassy in Jakarta and consulates in Medan, Surabaya, and Denpasar. 
Vietnam has experienced rapid economic growth in the past 15 years, primarily because of economic reforms it began implementing in the late 1980s that transformed it from a centrally planned economy to a type of socialist market economy. Data from the World Bank show that Vietnam’s real GDP increased by around 137 percent from 2000 to 2014. Vietnam has also made great progress in reducing poverty since the 1990s, according to the World Bank. In 2012, the World Bank reported that about 2 percent of Vietnamese lived below the poverty line of $1.25 per day. The United States established diplomatic relations with Vietnam in 1950, after Vietnam achieved limited independence from France. The United States and Vietnam suspended diplomatic relations at the end of the Vietnam War in 1975 but restored them in 1995. Since then, common strategic and economic interests have led Vietnam and the United States to improve relations across a wide range of issues. In 2006, Congress passed a comprehensive trade and tax bill that granted Vietnam permanent normal trade relations. In July 2013, the United States and Vietnam established the United States–Vietnam Comprehensive Partnership, an overarching framework for advancing the bilateral relationship in areas such as economic engagement. In October 2014, the United States relaxed an arms embargo, which it had imposed on Vietnam in 1984, to permit Vietnamese acquisition of maritime military materiel. However, the United States continues to express concerns about Vietnam’s human rights record and designates Vietnam as a nonmarket economy in antidumping procedures. Vietnam has expressed opposition to aspects of U.S. trade policy, including U.S. restrictions on its export of catfish into the U.S. market. U.S. agencies’ stated goals for Vietnam include supporting Vietnam’s economic governance. The U.S. Embassy in Vietnam is located in Hanoi, and the U.S. Consulate General is in Ho Chi Minh City. 
For centuries, China and Vietnam have had a turbulent relationship that continues to be affected by long-standing territorial disputes in the South China Sea. China has claimed sovereignty over the South China Sea, illustrating its claims by marking its maps with a “nine-dash line” that overlaps with Vietnamese claims and encircles most of the South China Sea, including the Paracels and Spratlys. During the Vietnam War, China served as a close ally of the North Vietnamese. In 1974, shortly before the war ended, China seized control of the Paracel Islands from the South Vietnamese. After the war, underlying tensions between the two countries surfaced and China-Vietnam relations deteriorated. China opposed Vietnam’s invasion of Cambodia in 1978, and following a series of disputes, the Chinese army crossed the Vietnamese border in February 1979 and fought a 2-week battle before the Chinese withdrew. In 1991, China and Vietnam renormalized relations. Since then, China and Vietnam have established close economic relations. In 2008, the two countries agreed to establish a comprehensive strategic partnership that enhanced cooperation in multiple areas, such as trade and investment. However, in May 2014, tensions were reawakened when China placed an oil rig near the disputed Paracel Islands, sparking widespread protests in Vietnam; some of these protests turned violent and included attacks on Chinese and Taiwanese individuals and firms. Despite continuing tensions, in April 2015, the leaders of both countries pledged to strengthen their partnership, for example, by increasing cooperation on infrastructure development. China maintains an embassy in Hanoi and a consulate in Ho Chi Minh City.

The value of China’s total trade in goods with Indonesia surpassed the United States’ in 2005 and was more than double the United States’ in 2014, when Chinese imports and exports both exceeded U.S. imports and exports.
The United States and China are Indonesia’s fifth- and second-largest trading partners, respectively, while other ASEAN countries collectively represent Indonesia’s largest trading partner. Available data on U.S. and Chinese FDI, although limited, indicate that U.S. FDI greatly exceeded Chinese FDI in Indonesia from 2007 through 2012. However, Chinese FDI has significantly increased since 2010 and nearly reached U.S. levels of FDI in 2012.

The value of China’s total trade in goods with Indonesia surpassed the United States’ in 2005 and was more than double the United States’ total trade in goods—$64 billion versus $28 billion, respectively—in 2014 (see fig. 2). China’s total goods trade with Indonesia increased in nominal terms every year after 2001 except 2008 and 2009, when the global economic crisis occurred, and 2013 and 2014, when Chinese imports of minerals from Indonesia declined. From 1994 through 2014, China’s total trade in goods with Indonesia grew much more rapidly than U.S. total trade in goods, with a slight decline in 2014. As figure 2 illustrates, from 1994 through 2014, China’s imports from, and exports to, Indonesia grew to exceed the United States’. Moreover, while the United States had a nearly continuous annual trade deficit with Indonesia during this period, China had an increasing trade surplus almost every year after 2007. Chinese imports from Indonesia surpassed U.S. imports from Indonesia in 2009 and increased significantly in 2010 and 2011. However, in 2013 and 2014, Chinese imports declined sharply, primarily because of a significant decrease in Chinese imports of minerals and slowing economic growth in China, according to an IMF report. The IMF report stated that in 2014, Indonesia implemented a ban on raw mineral ore exports, requiring all raw mineral ores to be processed in Indonesia to increase domestic value added. Chinese exports to Indonesia surpassed U.S. exports in 2000 and continued to grow through 2014.
The United States had a trade deficit with Indonesia every year from 1994 through 2014, with the deficit growing from $4.2 billion in 1994 to $11.1 billion in 2014. China had a trade deficit with Indonesia every year from 1994 through 2006 but, with the exception of 2011, had a trade surplus every year from 2007 through 2014. China’s trade surplus increased dramatically from 2012 through 2014, from $2.3 billion to $14.6 billion.

From 2000 through 2014, the composition of U.S. and Chinese trade in goods with Indonesia remained relatively stable, except for a significant overall increase in China’s mineral imports that peaked in 2013. In 2014, textiles represented the largest share of U.S. imports (26 percent) while minerals represented the largest share of Chinese imports (42 percent). Animals, plants, and food represented the largest share of U.S. exports in 2014 (32 percent), and machinery represented the largest share of Chinese exports (33 percent). Most of China’s, and almost half of the United States’, trade in goods with Indonesia in 2014 consisted of goods for industrial use (i.e., goods, such as rubber and coal, used in the production of other goods). See appendix II for more information about the composition and use of U.S. and Chinese trade in goods with Indonesia.

In 2013, other ASEAN countries collectively represented Indonesia’s largest trading partner in total trade in goods, followed by China, Japan, the European Union (EU), and the United States.

Exports. Indonesia exported $16 billion in goods to the United States, its fifth-largest export market, and $23 billion in goods to China, its third-largest export market, in 2013. Other ASEAN countries, Japan, and the EU represented Indonesia’s first-, second-, and fourth-largest goods export markets, respectively.
The United States’ share of total Indonesian goods exports decreased from 12.1 percent in 2003 to 8.6 percent in 2013, while China’s share of total Indonesian goods exports increased from 6.2 percent to 12.4 percent during the same period.

Imports. Indonesia imported $9 billion in goods from the United States, its sixth-largest import market, and $30 billion in goods from China, its second-largest import market, in 2013. Other ASEAN countries, Japan, the EU, and South Korea represented Indonesia’s first-, third-, fourth-, and fifth-largest goods import markets, respectively. The United States’ share of total Indonesian goods imports decreased from 8.3 percent in 2003 to 4.9 percent in 2013. China’s share of total Indonesian goods imports increased from 9.1 percent in 2003 to 16 percent in 2013.

Figure 3 shows Indonesia’s exports and imports in 2003, 2008, and 2013, by trading partner.

Indonesia ranks higher as an export and import partner of China than of the United States. Indonesia is China’s 15th-largest export market and the United States’ 34th-largest by value. In 2014, China exported $39.1 billion in goods to Indonesia, or 1.7 percent of global Chinese goods exports. In the same year, the United States exported $8.3 billion in goods to Indonesia—0.5 percent of global U.S. goods exports. Indonesia is China’s 20th-largest source of imported goods and the United States’ 24th-largest by value. In 2014, China imported $24.5 billion in goods from Indonesia, or 1 percent of global Chinese goods imports. In the same year, the United States imported $19.4 billion in goods from Indonesia—0.8 percent of global U.S. goods imports.

The United States’ role relative to China’s in Indonesia’s trade of goods as well as services may be greater when the amount of intermediate U.S. inputs to the traded goods and services is taken into account. Because of the nature of global supply chains, for example, a consumer phone from a U.S.
company might be assembled in China but include components manufactured in Germany, Japan, South Korea, and other countries. Data from the UN Commodity Trade database, which counts the full value of the export only for the exporting country, showed that in 2011, China exported $29.2 billion in goods to Indonesia, almost four times the $7.4 billion in goods that the United States exported to Indonesia. However, data from the Organisation for Economic Co-operation and Development (OECD) and the World Trade Organization (WTO), which attempt to account for value added to a finished export by each contributing country, show that China’s exports of value-added goods and services to Indonesia were around 1.8 times those of the United States. The OECD-WTO data suggest that Chinese exports to Indonesia contained a higher portion of components produced elsewhere than U.S. exports contained.

Available data from the U.S. Bureau of Economic Analysis (BEA) indicate that U.S. trade in services with Indonesia totaled approximately $2.9 billion in 2013. The United States exported $2.2 billion in services to Indonesia in 2013, with travel and business services, respectively, as the largest and second-largest categories by value, and imported $692 million in services from Indonesia in 2013, with travel and business services, respectively, as the largest and second-largest categories by value. In 2013, total U.S.-Indonesian services trade represented 10 percent of the value of U.S.-Indonesian goods trade. China does not publish data on its trade in services with Indonesia.

Data on FDI in Indonesia from the United States and China have limitations, in that these data may not accurately reflect the countries to which U.S. and Chinese FDI ultimately flows. For example, U.S. and Chinese data on FDI in Indonesia do not reflect investments by subsidiaries that U.S. and Chinese firms may set up in other countries and use to make investments in Indonesia. Conversely, U.S.
and Chinese firms may set up subsidiaries in Indonesia that can be used to make investments in other countries. Given these limitations, available data show that U.S. FDI flows to Indonesia in 2007 through 2012 totaled about $10.2 billion, exceeding China’s reported FDI flows of about $2.7 billion. However, annual Chinese FDI flows increased significantly during this time, from $100 million in 2007 to $1.4 billion in 2012 in nominal terms (see fig. 4). According to BEA, over 90 percent of total U.S. FDI flows to Indonesia in 2007 through 2012 were concentrated in holding companies and mining.

Data on U.S. and Chinese goods exports to Indonesia indicate that from 2006 through 2014, U.S. exports of goods to Indonesia were more similar to Japanese and EU exports than to Chinese exports, suggesting that the United States is more likely to compete directly with Japan and EU countries than with China. Figure 5 presents a commonly used index for assessing the similarity of the United States’ goods exports to Indonesia to those of China and other countries.

Data from Commerce’s Advocacy Center, the World Bank, and ADB provide some information about Indonesian government contracts that U.S. and Chinese firms competed for or won. Although these data represent a small share of U.S. and Chinese economic activity in Indonesia, they offer insights into the degree of competition between U.S. and Chinese firms for the projects represented. These data indicate that U.S. firms in Indonesia have competed more often with firms from other countries than with Chinese firms and have tended to win contracts in different sectors.

Commerce Advocacy Center. Data from Commerce’s Advocacy Center show that U.S. firms that the center supported in fiscal years 2009 through 2014 competed for Indonesian government contracts most often, and for highest total contract value, with French firms, followed by Chinese firms and firms from other countries (see table 1).
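The "commonly used index" behind figure 5 is not named in the text; a standard choice in the trade literature for comparing export profiles is the Finger-Kreinin export similarity index, which sums, across product categories, the smaller of the two exporters' shares of their exports to the common market. A minimal sketch follows; the sector shares below are illustrative assumptions, not the report's data:

```python
def export_similarity(shares_a, shares_b):
    """Finger-Kreinin export similarity index.

    shares_a, shares_b: dicts mapping product category -> that exporter's
    share of its total exports to the common market (shares sum to 1).
    Returns a value in [0, 100]; 100 means identical export profiles.
    """
    categories = set(shares_a) | set(shares_b)
    return 100 * sum(min(shares_a.get(c, 0.0), shares_b.get(c, 0.0))
                     for c in categories)

# Illustrative (made-up) sector shares of exports to a common market
us = {"machinery": 0.25, "food": 0.32, "chemicals": 0.23, "aircraft": 0.20}
china = {"machinery": 0.33, "textiles": 0.30, "metals": 0.22, "chemicals": 0.15}

print(round(export_similarity(us, china), 1))  # 40.0
```

An index near 100 would mean two countries export an essentially identical mix of goods to Indonesia and thus compete head to head; values near 0 indicate largely non-overlapping profiles, consistent with the finding that U.S. exports resemble Japanese and EU exports more than Chinese exports.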
According to the center’s data, Chinese firms competed with the U.S. firms for 8 of 32 contracts covering a range of sectors, including energy and power; defense; transportation; telecommunications; and computers, information technology, and security. The 8 contracts for which Chinese firms competed had a total value of $3.6 billion—34 percent of the $10.4 billion in total contract value for which the U.S. firms competed. In contrast, French firms competed against U.S. firms for 11 contracts with a total value of about $8.3 billion.

World Bank. From 2000 through 2014, U.S. and Chinese firms won a relatively small share of World Bank-financed contracts in Indonesia and tended to win contracts in different sectors. U.S. and Chinese firms won a combined $33 million (1.1 percent) of the $2.94 billion in total contract dollars that the World Bank awarded in Indonesia. Of the $26 million that U.S. firms won, $24 million (94 percent) was for consultant services and the remainder was for goods. In contrast, of the $7 million contract dollars that Chinese firms won, $6.9 million (96 percent) was for goods. Indonesian firms won $2.54 billion (86 percent) of the World Bank’s total contract dollars, while Japanese, French, Korean, and Australian firms won a combined $267 million (9 percent).

ADB. U.S. firms won a small share of ADB contracts in Indonesia in 2013 and 2014, while Chinese firms won no ADB contracts. During this period, U.S. firms won three ADB contracts for a combined $10 million of the $410 million in total contract dollars that ADB awarded in Indonesia. One of the three contracts was for a geothermal power project, and the other two were consulting contracts worth less than $0.5 million each.

U.S. agencies and private sector representatives have cited multiple challenges to trading and investing in Indonesia.

Restrictive regulatory environment. According to officials from the Office of the U.S.
Trade Representative (USTR), Indonesia’s regulatory environment constitutes the biggest market access barrier for U.S. firms. In 2014 and 2015, USTR reported that Indonesia’s trade and investment climate was characterized by, among other things, growing protectionism toward local business interests. According to the USTR reports, in recent years, Indonesia has enacted numerous regulations on imports, such as those relating to local content and domestic manufacturing requirements, which have increased the burden for U.S. exporters. In 2013, the United States initiated a WTO dispute settlement process with Indonesia because of Indonesia’s import licensing restrictions on horticulture and meat products. A representative of one U.S. firm whom we spoke with in Indonesia said that the firm had stopped importing soybeans into Indonesia for about a year because of Indonesian quotas, rising import taxes, and local origination requirements. Moreover, according to an official representing an American regional trade association, regulations may appear without advance notice or consultations with affected industries and may not be uniformly enforced. In addition, USDA’s 2014 Country Strategy Statement for Indonesia states that market access challenges for U.S. exports to Indonesia, such as Indonesia’s import licensing requirements, have dominated the U.S.-Indonesia bilateral relationship. The World Bank’s 2015 ease of doing business ranking of 189 economies, where a ranking of 1 indicates the most business-friendly regulations relative to other countries in the rankings, ranked Indonesia at 114. Indonesia ranked least favorably in enforcing contracts (172) and most favorably in ensuring protections for minority investors (43). In assigning the ranking, the World Bank said that Indonesia implemented reforms that reduced the tax burden on companies and made it easier for them to start a business and obtain access to electricity.

Corruption.
Although the Indonesian government investigates and prosecutes high-profile corruption cases, many investors consider corruption a significant barrier to doing business in Indonesia, according to USTR’s 2015 report on foreign trade barriers. A representative of one U.S. firm told us that after paying taxes to the Indonesian government, the firm may be asked to pay additional fines. U.S. firms and representatives of American regional trade associations also noted that while U.S. firms are bound by U.S. law not to engage in corrupt practices, some of the firms’ competitors do not face similar restrictions. Transparency International’s 2014 Corruption Perceptions Index ranked Indonesia at 107 of 175 countries and territories, where a ranking of 1 indicates the lowest perceived level of public sector corruption relative to other countries in the index.

Weak infrastructure. Indonesia has weak and underdeveloped public infrastructure, such as ports, rail, and land transport, which increases transaction costs and inefficiencies and hampers exporters and investors, according to a report by Commerce and State. A representative of a private sector consulting firm operating in Indonesia said that Indonesia has poor infrastructure for transporting goods from factories to port. According to a State official, Indonesia’s economic growth is not likely to increase without significant investment in infrastructure.

Violations of intellectual property rights. In 2015, USTR reported that Indonesia was one of 13 countries designated as a Priority Watch List country because of particular problems with respect to intellectual property rights protection, enforcement, or market access for persons relying on such rights. According to the report, the United States is concerned that, among other things, Indonesia’s efforts to enforce intellectual property rights have not been effective in addressing rampant piracy and counterfeiting.

Limited access to land.
An absence of clear Indonesian laws regarding the acquisition and use of land by investors has slowed infrastructure development projects, according to a State document. For example, the document stated that construction on a hydroelectric dam in West Java, although nearly complete as of January 2015, had been delayed because of land use disputes. A new regulation on land use is scheduled to go into effect in 2015, but a State document noted that this law is untested and that implementation may be erratic, especially in its initial years. Although the United States is engaging economically with Indonesia, the two countries have no free trade agreement (FTA), while China has both trade and investment agreements with Indonesia through its agreements with ASEAN countries. Also, the United States is not negotiating any existing or proposed regional trade agreements with Indonesia, whereas China is engaging Indonesia through a proposed regional trade agreement. Both the United States and China support their domestic firms in Indonesia through financing and other means, although U.S. agencies estimate that Chinese financing has greatly exceeded U.S. financing. The United States and China also have provided support for economic development, with U.S. efforts focused on capacity building and Chinese efforts focused on physical infrastructure development. The United States has not established an FTA with Indonesia, although the two countries have a limited trade framework agreement to facilitate trade relations. The United States–Indonesia Trade and Investment Framework Agreement (TIFA) is intended to facilitate discussions of trade and investment issues. In contrast to FTAs, TIFAs are short agreements that provide strategic frameworks and structure for dialogue on trade and investment issues and prepare countries for eventual accession to high-standard trade agreements. The United States–Indonesia TIFA was signed in 1996 by USTR and Indonesia’s Ministry of Trade. 
According to USTR, U.S. officials meet regularly with Indonesian officials in both formal TIFA meetings and informal meetings to address bilateral trade and investment issues. The last two formal meetings that U.S. and Indonesian officials held under the TIFA occurred in September 2015 and June 2013, according to USTR. In the September 2015 meeting, officials discussed a range of issues, such as policies related to the information and communications technology sector and Indonesia’s Economic Policy Package. In addition, in June 2015, Congress reauthorized the Generalized System of Preferences (GSP), which provides duty-free treatment for 3,500 tariff lines from many developing countries, including Indonesia, through the end of 2017. According to a report by the Congressional Research Service, in 2012—the last full year of GSP implementation—Indonesia ranked fourth of 127 beneficiary countries in the value of U.S. imports that entered duty free through GSP. According to data in the report, of the $18 billion in U.S. imports from Indonesia in 2012, about 12 percent, or $2.2 billion, entered the United States duty free through GSP. In contrast, China has trade and investment agreements with Indonesia through the China-ASEAN Framework Agreement on Comprehensive Economic Cooperation. The China-ASEAN Framework Agreement on Comprehensive Economic Cooperation comprises a series of agreements on trade and investment to expand access to each other’s markets. The China-ASEAN Trade in Goods Agreement, which entered into force in 2005, is intended to give China and Indonesia, as well as other ASEAN countries, tariff-free access to each other’s market for many goods and reduced most duties for Indonesia’s trade in goods with China to zero by 2012. According to a study by the ADB, in 2010, the average tariff on exports from six ASEAN countries, including Indonesia, to China was 0.1 percent, while the average tariff on Chinese exports to Indonesia was 0.6 percent. 
The China-ASEAN Trade in Services Agreement, which entered into force in 2007, is intended to provide market access in agreed-on sectors of China and Indonesia, as well as other ASEAN countries, to foreign companies and firms that is equivalent to domestic service providers’ market access in their own countries. The China-ASEAN Investment Agreement, which entered into force in 2010, committed China and Indonesia, as well as other ASEAN countries, to treat each other’s investors as equal to their domestic investors. Selected studies have projected that the China-ASEAN Trade in Goods Agreement generally increases trade for China and Indonesia and improves Indonesia’s economy. All but one of these studies also estimated that the agreement improves China’s economy. In addition, one study estimated that the agreement increases investment in China and Indonesia. In August 2014, China and Indonesia, as well as the other ASEAN countries, announced discussions to upgrade these agreements. In August 2015, China’s Commerce Minister announced that China and ASEAN had agreed to the goal of finalizing negotiations to upgrade these agreements by the end of 2015. Although the United States has concluded negotiations for a regional trade agreement known as the Trans-Pacific Partnership (TPP), Indonesia was not a party to these negotiations. In contrast, China and Indonesia are both parties to ongoing negotiations for the Regional Comprehensive Economic Partnership Agreement (RCEP), which negotiating parties have said they hope to complete in 2015. Indonesia’s trade with China and with the 14 other countries negotiating RCEP represented 66 percent of its total trade in goods in 2013. RCEP negotiating parties seek to expand access to trade and investment among the parties by combining their existing FTAs into a single comprehensive agreement. The United States is not a party to the RCEP negotiations. Our analysis of U.S. 
agency data showed that in fiscal years 2009 through 2014, the Export-Import Bank of the United States (Ex-Im) and the Overseas Private Investment Corporation (OPIC) provided about $2.5 billion in financing to support U.S. exports to, and investment in, Indonesia (see table 2). Although China does not publish data on its financing in Indonesia, our analysis of State data found that China has financed at least $36.4 billion in investment projects in Indonesia since 2009. Our analysis of Ex-Im and OPIC information for fiscal years 2009 through 2014 found the following. Ex-Im authorized about $2.4 billion in loans, loan guarantees, and insurance to support U.S. exports to Indonesia during this period. Ex-Im’s authorizations in Indonesia consisted mostly of loan guarantees. Ex-Im authorized its two largest loan guarantees in fiscal years 2011 and 2013, when it authorized more than $1.6 billion in guarantees for the purchase of commercial aircraft. OPIC committed about $86 million in financing to U.S. investment projects in Indonesia during this period. OPIC’s largest commitment in Indonesia consisted of a $50 million investment guarantee in fiscal year 2013 for a facility to help expand lending to small and medium-sized enterprises investing in Indonesia. China does not publish data on its financing for exports, imports, and investment in Indonesia by private and state-owned enterprises, but State reported that China has made available at least $36.4 billion in financing for investment projects in Indonesia since 2009. According to State, Chinese financing is generally offered in the form of soft loans by China’s Development Bank and Export-Import Bank. For example, State reported that in 2013, China’s Export-Import Bank financed a $6 billion coal mining infrastructure and transportation project in Papua and Central Kalimantan. 
In April 2015, China’s President reiterated China’s commitment to provide financing in support of Indonesia’s infrastructure and connectivity development. State, Commerce, and USDA maintain staff in Indonesia to provide export promotion services and to advocate for policies favorable to U.S. firms operating in Indonesia. State. State maintains an Economic and Environment Section at the U.S. Embassy in Jakarta that is organized into three focus areas: environment, science, technology, and health; trade and investment; and macroeconomics and finance. According to State officials, improving economic relations with Indonesia to facilitate greater U.S. trade and investment is a key priority of the section. Commerce. According to a senior Commerce official in Indonesia, Commerce personnel based in Indonesia work to help U.S. firms find local partners, obtain the appropriate licenses and registrations for conducting business in Indonesia, and interpret existing or new laws and regulations, among other duties. The official said that Commerce personnel also advocate for U.S. firms and lead or support trade missions. For example, Commerce officials led a trade mission focused on clean energy business practices in 2010 and led a trade mission focused on education in 2011. USDA. USDA personnel in Indonesia offer U.S. firms assistance with market access and market development issues, according to a USDA official. For example, according to the official, when Indonesia restricted imports on all U.S. live and processed poultry in response to an avian flu outbreak in Washington and Oregon in late 2014, USDA personnel worked with Indonesia to lift the restriction for U.S. poultry not affected by the outbreak. USDA also cooperates with industry commodity groups and provides market intelligence reports to U.S. firms, according to the official. The Chinese government has pursued agreements with Indonesia to support Chinese firms that do business there. For example: Special economic zones. 
China’s Ministry of Commerce has worked with Indonesia to establish at least one special economic zone to facilitate cross-border trade and investment, according to Chinese embassy websites. According to the Chinese Ministry of Commerce, the government of China supports Chinese firms that establish and invest in a zone by offering financing and facilitating movement of materials, equipment, labor, and foreign exchange between China and the zone. In establishing these zones, China negotiates with Indonesia and other host governments in the areas of tax, land, and labor policies to support firms that choose to invest in the zones. Currency swaps. China has facilitated cross-border trade in local currencies in Indonesia through the establishment and renewal of a bilateral currency swap arrangement totaling 100 billion Chinese yuan, according to the Central Bank of Indonesia’s website. The bank’s website states that the arrangement promotes bilateral trade and direct investment for economic development between the two countries and helps guarantee stabilized financial markets by ensuring the availability of short-term liquidity. The People’s Bank of China and the Central Bank of Indonesia established the arrangement in March 2009 and renewed it in October 2013 for 3 more years. The United States has fostered economic development in Indonesia through assistance to strengthen governance and energy development. In fiscal years 2009 through 2013, U.S. agencies provided about $373 million in trade capacity building assistance—that is, development assistance intended to improve a country’s ability to benefit from international trade—to Indonesia. U.S. 
trade capacity building assistance to Indonesia has supported initiatives aimed at, among other things, providing economic policy advisory services to the Indonesian government; strengthening key trade and investment institutions; improving Indonesia’s competitiveness in global supply chains; and strengthening the capacity of the government of Indonesia to analyze, negotiate, and implement bilateral and multilateral trade agreements. The majority of U.S. trade capacity building assistance provided to Indonesia during this period—about 90 percent—was committed as part of a 5-year, $600 million Millennium Challenge Corporation (MCC) compact with Indonesia for a project that is designed to help the government of Indonesia to, among other things, increase productivity and reduce reliance on fossil fuels. (For more information about U.S. trade capacity building assistance to Indonesia, see app. IV.) The United States has also sought to ensure affordable, secure, and cleaner energy supplies in Indonesia and across the Asia-Pacific region through the U.S.-Asia Pacific Comprehensive Energy Partnership with Indonesia, which, according to State, was launched in 2012. China has assisted economic development in Indonesia by supporting Indonesia’s connectivity and infrastructure development as well as its role in regional initiatives. According to a joint statement issued by Chinese President Xi Jinping and Indonesia’s President Widodo in April 2015, China plans to support Indonesia’s infrastructure and connectivity development by providing financing for railways, highways, ports, docks, dams, airports, and bridges, among other things. According to a speech by a senior Chinese official posted on a Chinese embassy website, the power plants built by Chinese firms make up one-quarter of Indonesia’s power supply, and Chinese firms have built Indonesia’s longest cross-sea bridge to facilitate the transport and flow of commerce between the Java and Madura Islands. 
State reported that between 2006 and 2015, China undertook six power plant projects, including two coal-fired power plants and a $17 billion, 7,000-megawatt hydropower plant; three rail projects; and a coal mining infrastructure and transportation project. China’s Foreign Minister has publicly stated that Indonesia is the most important partner in its 21st Century Maritime Silk Road Initiative, which, according to a document released by the Chinese government in March 2015, aims to improve maritime cooperation and regional connectivity. In November 2014, China announced the creation of a $40 billion Silk Road Fund to help implement this initiative. In addition, Indonesia is one of 57 prospective founding members of China’s proposed Asian Infrastructure Investment Bank, an international institution to finance infrastructure projects throughout the Asia-Pacific region. Under the bank’s initial agreement, the bank’s authorized capital is $100 billion, of which China has pledged $29.8 billion and Indonesia has pledged $3.4 billion. Bank documents indicate that the bank anticipates beginning operations before the end of 2015. The value of China’s total trade in goods with Vietnam surpassed that of the United States in 2007 and was more than double the value of the United States’ total trade in goods with Vietnam in 2014. However, U.S. imports from Vietnam exceed Chinese imports, while China’s exports to Vietnam exceed the United States’. The United States is Vietnam’s fourth-largest trading partner, and China is Vietnam’s largest trading partner. Available data on U.S. and Chinese FDI, although limited, indicate that Chinese FDI in Vietnam from 2007 through 2012 was more than double U.S. FDI in Vietnam during this time. The value of China’s total trade in goods with Vietnam surpassed the United States’ in 2007, and the gap has continued to grow. In 2014, China’s total goods trade with Vietnam was $83.6 billion, while the United States’ was $36.3 billion (see fig. 6). 
According to Vietnamese and U.S. government officials, an unknown amount of Chinese-Vietnamese trade occurs across the countries’ porous border and outside official channels. Figure 6 illustrates the following: From 1994 through 2014, the United States’ imports from Vietnam exceeded China’s every year except 1994, 1995, and 2000. Chinese exports grew faster than U.S. exports from 1994 through 2014. The United States had an annual trade deficit with Vietnam from 1997 through 2014, while China had an annual trade surplus with Vietnam from 1994 through 2014. Both the U.S. deficit and Chinese surplus have accelerated in recent years. From 2000 through 2014, the composition of U.S. and Chinese total trade in goods with Vietnam shifted from predominantly raw commodities to manufactured goods. In 2014, textiles represented the largest share of U.S. imports from Vietnam (31 percent) and machinery represented the largest share of Chinese imports from Vietnam (47 percent). Animals, plants, and food represented the largest share of U.S. exports to Vietnam (36 percent) in 2014, while machinery represented the largest share of Chinese exports to Vietnam (31 percent). In 2014, the majority of U.S. imports from Vietnam consisted of goods for consumer use, such as wooden bedroom furniture. The majority of U.S. exports to Vietnam and of Chinese imports from, and exports to, Vietnam in 2014 consisted of goods for industrial use, which are used in the production of other goods, such as microchips. See appendix III for more information about the composition and use of the United States’ and China’s trade in goods with Vietnam. China and the United States are Vietnam’s largest and fourth-largest trading partners, respectively, in terms of their combined exports and imports of goods. Other ASEAN countries and the EU are Vietnam’s second and third-largest trading partners. Exports. In 2013, Vietnam exported $24 billion in goods to the United States and $13 billion in goods to China. 
After the EU, the United States was the second-largest market for Vietnamese goods exports, while China was the fifth-largest market for Vietnamese goods exports in 2013. In both 2004 and 2013, the United States’ share of Vietnam’s exports was around 18 to 19 percent. China’s share of Vietnam’s exports was around 10 percent in both 2004 and 2013. Imports. Vietnam imported $5 billion in goods from the United States, its seventh-largest import market, and $37 billion in goods from China, its largest import market, in 2013. Other ASEAN countries, South Korea, Japan, Taiwan, and the EU represented Vietnam’s second-, third-, fourth-, fifth-, and sixth-largest goods import markets, respectively, in 2013. In both 2004 and 2013, the United States’ share of Vietnam’s imports was around 3 to 4 percent. China’s share of Vietnam’s imports increased significantly during the same period, from 14 percent in 2004 to 28 percent in 2013. Figure 7 shows Vietnam’s exports and imports by trading partner in 2004, 2008, and 2013. Vietnam is a larger export market for China than the United States, but is a larger source of imported goods for the United States than it is for China. Vietnam was China’s seventh-largest export market by value in 2014 but the United States’ 44th-largest. In 2014, China exported $63.7 billion in goods to Vietnam, which accounted for 2.7 percent of China’s global goods exports. In the same year, the United States exported $5.7 billion in goods to Vietnam, which accounted for 0.4 percent of total U.S. global goods exports. Vietnam was China’s 26th-largest source of imported goods by value in 2014 and was the United States’ 15th-largest. In 2014, China imported $19.9 billion in goods from Vietnam, which accounted for 1.0 percent of China’s global goods imports. In the same year, the United States imported $30.6 billion in goods from Vietnam, which accounted for 1.3 percent of total U.S. goods imports from the world. 
The United States’ role relative to China’s in Vietnam’s trade of goods as well as services may be greater when the amount of intermediate U.S. inputs to the traded goods and services is taken into account. Because of the nature of global supply chains, for example, a consumer phone from a U.S. company might be assembled in China but include components manufactured by Germany, Japan, South Korea, and other countries. Data from the UN Commodity Trade database, which counts the full value of an export for only the exporting country, showed that China exported $29.1 billion in goods to Vietnam in 2011, almost seven times the $4.3 billion in goods that the United States exported to Vietnam that year. However, data from the OECD and the WTO, which attempt to account for the value added to a finished export by each contributing country, show that China exported only about 2.5 times more in value-added goods and services to Vietnam than the United States did. The OECD-WTO data suggest that Chinese exports to Vietnam contained a higher portion of components produced elsewhere than did U.S. exports. Our analysis of data from BEA and other sources on U.S. trade in services in Vietnam provides broad estimates rather than precise values. However, our calculations indicate that U.S. total trade in services with Vietnam totaled approximately $3.1 billion in 2012. Our analysis shows that the United States exported approximately $1.7 billion in services to Vietnam in 2012, with (1) business, professional, and technical services and (2) education as the largest and second-largest service categories by value, and imported approximately $1.4 billion in services from Vietnam in 2012, with (1) travel and passenger fares and (2) transportation services as the largest and second-largest service categories by value. In 2012, the value of U.S.-Vietnamese services trade was about 12 percent of the value of U.S.-Vietnamese goods trade. 
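The gross-versus-value-added comparison above lends itself to a back-of-the-envelope check. The sketch below uses only figures quoted in this report (the 2011 gross export values and the roughly 2.5-to-1 value-added ratio); the implied relative domestic-content share is an illustrative inference, not a reported statistic.

```python
# Back-of-the-envelope check on gross vs. value-added export ratios.
# Gross 2011 figures and the ~2.5x value-added ratio are from the report;
# the "relative domestic-content share" is an illustrative inference.
gross_cn = 29.1  # Chinese goods exports to Vietnam, 2011, $ billions
gross_us = 4.3   # U.S. goods exports to Vietnam, 2011, $ billions

gross_ratio = gross_cn / gross_us  # almost 7x on a gross basis
va_ratio = 2.5                     # China/U.S. ratio in value-added terms

# If d_cn and d_us are each country's domestic value-added shares of its
# exports, then va_ratio = (gross_cn * d_cn) / (gross_us * d_us), so:
relative_domestic_share = va_ratio / gross_ratio  # d_cn / d_us

print(f"gross export ratio: {gross_ratio:.1f}x")
print(f"implied d_cn/d_us: {relative_domestic_share:.2f}")
```

A ratio well below 1 is consistent with the observation that Chinese exports to Vietnam contained a higher portion of components produced elsewhere than U.S. exports did.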
China does not publish data on its trade in services with Vietnam. Data on FDI in Vietnam from the United States and China have limitations, in that these data may not accurately reflect the countries to which U.S. and Chinese FDI ultimately flows. For example, U.S. and Chinese firms may set up subsidiaries in other countries, which are then used to make investments in Vietnam. Such investments would not be captured by U.S. and Chinese data on FDI in Vietnam. Conversely, U.S. and Chinese firms can set up subsidiaries in Vietnam, which can be used to make investments in other countries. Given these limitations, available data show that from 2007 through 2012, China’s reported FDI flows to Vietnam totaled approximately $1.2 billion, more than twice the U.S. FDI flows of approximately $500 million. During this period, China’s reported annual FDI flows to Vietnam fluctuated but continued to exceed U.S. FDI flows every year except 2009 (see fig. 8). Although BEA does not publicly report data on U.S. FDI flows to Vietnam by type of investment, information that BEA provided to us indicates that from 2003 through 2013, on average, one-third of total U.S. FDI stock in Vietnam was in mining and manufacturing. Mining increased from 22 percent of U.S. FDI stock in Vietnam in 2003 to more than 50 percent in 2013, while manufacturing’s share of total U.S. FDI stock in Vietnam fell from a high of 60 percent in 2006 to 28 percent in 2013. According to officials from Vietnam’s Ministry of Agriculture and Rural Development, Chinese investment projects are mostly in the industrial, manufacturing, and construction sectors. Data on U.S. and Chinese goods exports to Vietnam indicate that since 2008, U.S. exports of goods to Vietnam have been more similar to Japanese and EU exports than to Chinese exports, suggesting that the United States is more likely to compete directly with Japan and EU countries than with China. 
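A similarity comparison of this kind is typically computed with a measure such as the Finger-Kreinin export similarity index, which sums, across product categories, the smaller of two countries' export shares to a common market. The report does not name the specific index used; the product baskets and values below are hypothetical, for illustration only.

```python
# Sketch of the Finger-Kreinin export similarity index.
# All product categories and export values below are hypothetical.

def export_similarity(exports_a, exports_b):
    """Finger-Kreinin index: 100 * sum over categories of the minimum
    export share. exports_a and exports_b map product category -> export
    value to the same destination market. 100 = identical composition,
    0 = no overlap at all."""
    total_a = sum(exports_a.values())
    total_b = sum(exports_b.values())
    categories = set(exports_a) | set(exports_b)
    return 100 * sum(
        min(exports_a.get(c, 0) / total_a, exports_b.get(c, 0) / total_b)
        for c in categories
    )

# Hypothetical export baskets to a common market ($ millions):
us = {"machinery": 40, "food": 30, "chemicals": 30}
jp = {"machinery": 50, "chemicals": 40, "vehicles": 10}
cn = {"textiles": 60, "machinery": 30, "toys": 10}

print(export_similarity(us, jp))  # larger value: baskets overlap more
print(export_similarity(us, cn))  # smaller value: dissimilar baskets
```

In the hypothetical baskets above, the U.S. basket overlaps more with the Japanese one than with the Chinese one, mirroring the pattern the report describes for Vietnam.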
Figure 9 presents a commonly used index for assessing the similarity of the United States’ goods exports to Vietnam to those of China and other countries. Data from Commerce’s Advocacy Center, the World Bank, and the ADB provide some information about Vietnamese government contracts that U.S. and Chinese firms competed for or won. Although these data represent a small share of U.S. and Chinese economic activity in Vietnam, they offer insights into the degree of competition between U.S. and Chinese firms for the projects represented. These data indicate that U.S. firms in Vietnam have competed more often with firms from other countries than with Chinese firms and have tended to win contracts in different sectors. Commerce’s Advocacy Center. Data from Commerce’s Advocacy Center show that U.S. firms that the center supported in fiscal years 2009 through 2014 competed for Vietnamese government contracts more often, and for higher total contract value, with firms from Japan, South Korea, and several other countries than with Chinese firms (see table 3). According to the center’s data, Chinese firms competed with U.S. firms for 3 of 29 contracts, in the areas of energy and power, infrastructure, and services. These 3 contracts’ total value was $92 million—3 percent of the $28.8 billion in total contract value for which the U.S. firms competed. In contrast, Japanese and South Korean firms competed against U.S. firms for 10 and 6 contracts, respectively, with a combined value of more than $11 billion for each country. World Bank. From 2000 through 2014, U.S. and Chinese firms generally won World Bank-financed contracts in Vietnam in different sectors. Vietnamese firms received about $4.3 billion (70 percent) of the $6.1 billion in total contract value. Among firms from other countries, Chinese firms won the highest total contract value—$531 million—almost 9 percent of the total World Bank-financed contract value. 
U.S. firms won $133 million, about 2 percent of the total World Bank-financed contract value. Most of the contract dollars won by Chinese firms were for civil works (71 percent) and goods (28 percent). In contrast, most of the contract dollars won by U.S. firms—$118 million (89 percent)—were for consultant services. Electrical equipment was the only category of procurement in which both U.S. and Chinese firms won more than $2 million in contract value. Chinese firms won $140 million, and U.S. firms won $14 million, in contract value for electrical equipment for World Bank projects in Vietnam. ADB. U.S. firms won one ADB contract in Vietnam in 2013 and 2014—a $130,000 contract for consulting services related to water conservation. During this period, Chinese firms won 15 contracts valued at more than $250 million. The Chinese firms’ contracts included about $207 million for the construction of roads and a hydropower plant, with the remainder for goods for electricity transmission, distribution, and renewable energy. U.S. agencies and private sector representatives have identified multiple challenges to trading and investing in Vietnam. Restrictive regulatory environment. A lack of transparency in the Vietnamese government’s policies and decisions and slowness of government action are creating challenges for U.S. firms, according to State and Commerce. In addition, one U.S. business owner we spoke with in Vietnam described the regulatory environment he dealt with as “arcane, corrupt, and labyrinthine.” According to a State and Commerce report, Vietnam has established regulations that limit the operations of foreign companies in the Vietnamese market. For example, unless a foreign company has an investment license permitting it to directly distribute goods in Vietnam, the company must appoint a local authorized agent or distributor. 
USTR also reports that Vietnamese government restrictions on certain types of imports, such as used consumer goods, machinery and parts, and some agricultural commodities, affect U.S. firms’ ability to operate in Vietnam. The World Bank’s 2015 Ease of Doing Business Index ranked Vietnam at 78 of 189 economies, where a ranking of 1 indicates the most business-friendly regulations relative to those of other countries in the index. The 2015 index ranked Vietnam most favorably on dealing with construction permits (22) and least favorably on paying taxes (173). In 2015, according to the World Bank, Vietnam implemented reforms that made paying taxes less costly for companies and improved its credit information system. Corruption. Reports by USTR, Commerce, and State cite corruption as a significant barrier faced by U.S. and other foreign firms in Vietnam. In addition, the owner of one small U.S. enterprise whom we spoke with in Vietnam said that onerous audit requirements and paperwork, such as the thick dossier required for obtaining an investment license, created barriers to trading and investing in Vietnam as well as opportunities for corruption. Transparency International’s 2014 Corruption Perceptions Index ranked Vietnam at 119 of 175 countries and territories, where a ranking of 1 indicates the lowest perceived level of public sector corruption relative to other countries in the index. Weak infrastructure. State and Commerce reports cite poorly developed infrastructure, such as electrical and Internet infrastructure, as a challenge for U.S. firms doing business in Vietnam. In 2015, State reported that Vietnam needs an estimated $170 billion in additional infrastructure development in areas such as power generation, roads, railways, and water treatment to meet growing economic demand. According to a representative of one U.S. 
firm whom we spoke with in Vietnam, the capacity of Haiphong Harbor, a port near Hanoi, was so poor that the firm chose to ship goods to other Vietnamese ports and reload them onto smaller coastal vessels at an increased cost to avoid Haiphong. In addition, a representative of a U.S. clothing manufacturer in Vietnam noted that the capacity of Vietnam’s electrical grid is weak. As a result, the Vietnamese government occasionally institutes controlled brownouts—generally on days when the garment manufacturing plants are not operating—to try to alleviate strain on the electrical grid. According to the clothing manufacturer’s representative, any expansion of the garment industry could be limited without additional electrical capacity. Violations of intellectual property rights. In 2015, USTR reported that Vietnam remained designated as a Watch List country because of concerns about intellectual property rights violations and theft. According to USTR, online piracy and sales of counterfeit goods are common; in addition, Vietnamese firms manufacture counterfeit goods. Moreover, Vietnam’s capacity to enforce criminal penalties against counterfeiters is limited. Commerce similarly cited ineffective protection of intellectual property as a significant challenge. In addition, a representative of a technology company whom we spoke with in Vietnam stated that only 1 in 20 users of the company’s software were paying for its use and that Vietnamese consumers knowingly purchase counterfeits. Predominance of state-owned enterprises. According to a Commerce and State report about Vietnam’s business environment, state-owned enterprises dominate some sectors of the Vietnamese economy and receive some trade advantages over foreign firms. For example, according to the report, state-owned enterprises dominate the oil and gas, electricity, mining, and banking sectors, among others. 
The top three telecommunications companies in Vietnam are also state-owned enterprises and control nearly 95 percent of the Vietnam telecommunications market. Similarly, a private sector representative we spoke with in Vietnam stated that the Vietnamese government controls approximately 80 percent of Vietnam’s insurance market. Moreover, according to a 2015 USTR National Trade Estimates Report on Foreign Trade Barriers, Vietnam’s state-owned trading enterprises have been given the exclusive right to import certain products, including tobacco products; crude oil; newspapers, journals, and periodicals; and recorded media. In addition, since U.S. and other foreign firms are restricted from majority ownership in some sectors, including telecommunications and banking, they must partner with a domestic firm—generally a state-owned enterprise—to conduct business in these sectors. However, Commerce and State have reported that few Vietnamese firms, including state-owned enterprises, are audited against international standards and, as a result, U.S. firms have difficulty verifying the financial information of prospective partners. Shortages of skilled labor. Commerce and State reporting cited shortages of skilled labor as constraints to U.S. firms. In addition, a representative of one firm whom we interviewed in Vietnam noted that a lack of skilled labor in engineering limited the firm’s ability to support the modernization of factory equipment. The United States has no FTA with Vietnam but both are participants in the proposed regional TPP agreement, along with other countries. In contrast, China has free trade and investment agreements with Vietnam through its agreements with ASEAN countries and is negotiating the proposed RCEP agreement with Vietnam and other countries. Both countries support their domestic firms in Vietnam through financing and other means, but U.S. agencies estimate that China has provided a larger amount of financing than the United States. 
In addition, the United States and China have each supported economic development in Vietnam, with U.S. efforts focused on capacity building to improve Vietnam’s economic governance and Chinese efforts focused on improving physical infrastructure and connectivity. While the United States does not have an FTA with Vietnam, the two countries have a bilateral trade agreement (BTA) to facilitate their trade relations. The United States–Vietnam BTA, which the United States signed in 2000, enabled the establishment of normal trade relations with Vietnam—significantly reducing tariffs for many Vietnamese exports—and incorporated elements modeled on WTO agreements. As a result of the BTA, according to a 2014 study, the average U.S. tariff for Vietnamese manufacturing exports, such as textiles, fell from 33.8 percent to 3.4 percent. According to the U.S.-Vietnam Trade Council, under the BTA, Vietnam agreed to reduce tariffs, typically by one-third to one-half, on a broad range of products of interest to U.S. businesses, including toiletries, film, mobile phones, tomatoes, and grapes. USTR officials stated that the BTA remains in effect and contains some provisions beyond those required by the WTO. Since Vietnam joined the WTO, the majority of U.S. exports of manufactured and agricultural goods have faced Vietnamese tariffs of 15 percent or less, according to a USTR Trade Fact Sheet. However, according to a report by Commerce and State, U.S. businesses have noted that eliminating high tariffs on certain agricultural and manufactured goods, including fresh food, fresh and frozen meats, and materials and machinery, would create significant new opportunities. In contrast, China has free trade and investment agreements with Vietnam through the ASEAN-China Comprehensive Economic Cooperation Agreement. 
The China-ASEAN Framework Agreement on Comprehensive Economic Cooperation comprises a series of agreements, on trade in goods, trade in services, and investment, to expand China’s and ASEAN countries’ access to each other’s markets. The China-ASEAN Trade in Goods Agreement, which entered into force in 2005, is intended to give China and Vietnam, as well as other ASEAN countries, tariff-free access to each other’s markets for many goods and will reduce most duties for Vietnam’s trade in goods with China to zero by 2018. According to a study by the ADB, the average tariff on ASEAN countries’ exports to China was 0.1 percent in 2010, and 90 percent of Chinese exports are expected to face no tariffs in Vietnam by 2015. In January 2015, Vietnam’s Ministry of Finance stated that it had implemented the commitments it had made in the agreement to reduce tariffs. The China-ASEAN Trade in Services Agreement, which entered into force in 2007, is intended to give foreign companies and firms located in participant countries market access, in agreed-on sectors, equivalent to that of domestic service providers in China, Vietnam, and the other ASEAN countries. The China-ASEAN Investment Agreement, which entered into force in 2010, is intended to commit China and Vietnam, as well as other ASEAN countries, to treat each other’s investors as equal to domestic investors. Selected studies have projected that the China-ASEAN Trade in Goods Agreement generally increases trade for China and Vietnam. All but two of these studies also estimated that the agreement improves the economies of both China and Vietnam. In addition, one study estimated that the agreement increases investment in China and Vietnam. In August 2014, China and Vietnam, as well as the other ASEAN countries, announced discussions to upgrade these agreements. The second round of discussions, held in February 2015, focused on investment, economic cooperation, and other areas. 
In August 2015, China’s Commerce Minister announced that China and ASEAN had agreed to the goal of finalizing negotiations on the upgrade by the end of 2015. The United States and Vietnam are participants in the proposed TPP, while China and Vietnam are participants in the ongoing RCEP negotiations. TPP. The United States, Vietnam, and 10 other countries have negotiated the TPP, with an agreement announced in October 2015. TPP negotiating parties agreed in 2011 that the TPP would address ensuring a competitive business environment and protecting the environment, labor rights, and intellectual property rights, among other issues. China is not a party to the TPP negotiations. RCEP. China, Vietnam, and 14 other countries are parties to the RCEP negotiations, which negotiating partners have said they hope to complete in 2015. Vietnam’s trade with the other countries negotiating RCEP, including China, represented 58 percent of its total trade in goods for 2013. RCEP negotiating parties seek to expand access to trade and investment among the parties by combining their existing FTAs into a single comprehensive agreement. The United States is not a party to the RCEP negotiations. Vietnam has embraced TPP as part of its overall efforts to increase trade and access to foreign markets, particularly in the United States, according to State officials. State officials noted that Vietnam will need to overcome several challenges to meeting TPP requirements. In addition, according to State officials, TPP’s labor and alternative dispute resolution requirements may be challenging for Vietnam to implement. However, State officials noted that Vietnam has shown a commitment to improving its economic governance. According to U.S. officials, the dispute between Vietnam and China over China’s placement of an oil rig near the disputed Paracel Islands in May through July 2014 briefly disrupted Chinese and Vietnamese trade. 
The officials noted that the incident also highlighted for Vietnamese officials the importance of their economic relationship with China and the need to diversify Vietnam’s trade. According to State officials, China responded to Vietnamese riots and attacks on Chinese firms and individuals by slowing customs procedures and tightening controls at the typically porous China-Vietnam border. According to U.S. officials, after the riots, Vietnam reviewed its economic relationship with China but found that it could not afford to reduce its reliance on China. For example, according to the U.S. officials, Vietnamese officials had not known exactly how intertwined Vietnam’s economy was with China’s because of the amount of undocumented cross-border trade. According to testimony before the U.S.-China Economic and Security Review Commission in May 2015, Vietnam relies on China for a number of intermediate goods as inputs for its exports; therefore, any disruptions to trade flows could spread throughout the Vietnamese economy. Our analysis of U.S. agency data showed that in fiscal years 2009 through 2014, Ex-Im and OPIC provided approximately $205 million in financing for exports to, and investment in, Vietnam (see table 4). Although China does not publish data on its financing in Vietnam, our analysis of State-reported data found that China has financed at least $4.5 billion in investment projects in Vietnam since 2008. Our analysis of Ex-Im and OPIC information for fiscal years 2009 through 2014 found the following. Ex-Im authorized about $148.9 million in loans, loan guarantees, and insurance to support U.S. exports in Vietnam. In fiscal year 2012, Ex-Im’s largest authorization in Vietnam consisted of a $118 million direct loan to the government of Vietnam to purchase a telecommunications satellite. In fiscal year 2013, Ex-Im authorized $16.7 million for a long-term loan to Vietnam’s National Power Transmission Corporation to purchase electricity transmission equipment. 
OPIC committed about $55.6 million in financing to U.S. investment projects in Vietnam. In 2014, OPIC committed to provide an investment guarantee of up to $50 million for the Mekong Renewable Resources Fund, which will invest in the environmental services and infrastructure sector, the renewable energy sector, and the energy efficiency sector in Vietnam, Cambodia, and Laos. China does not publish data on its financing for exports, imports, and investment in Vietnam by private and state-owned enterprises. However, according to information provided by the U.S. Embassy in Hanoi, China made available approximately $4.5 billion in financing from 2008 to 2013 for coal-fired power plants and for part of the Hanoi rail transit system, all constructed by Chinese firms. China’s Export-Import Bank has also published brief summaries of major projects for some countries, such as Vietnam. One such summary indicates that the bank provided a concessional loan in 2013 to support the construction of a chemical plant in Vietnam to manufacture fertilizer. In addition, China provides financing and labor in support of projects in Vietnam. According to State officials, Vietnam’s importation of Chinese labor for technical positions enhances China’s role in the Vietnamese economy because the Vietnamese labor market lacks the capacity to fill midlevel technical positions. However, according to testimony before the U.S.-China Economic and Security Review Commission in May 2015, local Vietnamese have sometimes resented the importation of Chinese labor. According to State officials, such resentment contributed to the riots and violence in Vietnam after China placed the oil rig in the disputed Paracel waters. State, Commerce, and USDA maintain staff in Vietnam to provide export promotion services and policy advocacy for U.S. firms operating in Vietnam. For example: State. State’s Economic Section at the U.S. Embassy in Hanoi advocates for U.S. 
investors and for trade and investment policies favored by the United States, according to a senior State official. The official said that the section also supports the negotiation of U.S. trade agreements, such as TPP, and other types of economic agreements, including a United States–Vietnam agreement related to taxation. Commerce. According to Commerce officials in Vietnam, Commerce personnel based in the country assist U.S. firms by, among other things, matching them with local partners, organizing trade missions, and providing advocacy. For example, the Commerce officials said that they organized a trade mission and provided advocacy for U.S. civil nuclear firms. Another Commerce official told us that Commerce officials had worked with the Vietnamese government to remove an illegal duty on goods that a U.S. company was importing into Vietnam. USDA. USDA personnel help address market access and development issues in Vietnam for U.S. agricultural products, according to a USDA official in Vietnam. For example, according to the official, USDA personnel track Vietnamese government regulations that would affect U.S. agricultural products and provide comments to the Vietnamese government as needed. The official noted that USDA personnel also work directly with the Vietnamese government to help U.S. firms retrieve stranded cargo, particularly perishable goods, from Vietnamese customs. For instance, one firm’s product was delayed in customs because it lacked a plant quarantine certificate that is not required in the United States. The Chinese government has also acted to support Chinese firms that do business in Vietnam. For example, according to China’s Ministry of Foreign Affairs, China and Vietnam have established two economic cooperation zones in Vietnam, near Ho Chi Minh City and in Haiphong City, to facilitate trade and investment by offering tax and other advantages for Chinese firms that invest in the zones. U.S. 
agencies have assisted Vietnam in increasing economic openness and integration and improving economic governance. In fiscal years 2009 through 2013, the U.S. agencies provided a total of $32 million in trade capacity building assistance—that is, development assistance intended to improve a country’s ability to benefit from international trade—to Vietnam. U.S. trade capacity building assistance to Vietnam has supported initiatives aimed at, among other things, modernizing Vietnam’s commercial laws and legal system, providing assistance to Vietnam relevant to its trade agreement commitments, improving the country’s customs and border control, and supporting potential U.S. investment opportunities. The majority of U.S. trade capacity building assistance to Vietnam during this period—about 64 percent—was provided by the U.S. Agency for International Development (USAID) to, for example, improve Vietnam’s regulatory environment to support economic growth and a better business and trade environment. For more information about U.S. trade capacity building assistance to Vietnam, see appendix IV. China has assisted Vietnam’s economic development through infrastructure construction as well as efforts to develop connectivity between China and Southeast Asian countries. According to the U.S. Embassy in Hanoi, China provided about $4.5 billion of approximately $10.8 billion in large infrastructure construction projects awarded to Chinese firms in Vietnam from 2008 to 2014. These infrastructure projects included power plants, processing plants, and a railway (see fig. 10). The embassy reporting noted that the remaining funding for infrastructure construction was provided by Australia, ADB, and the World Bank and through joint ventures. In addition, according to the U.S. Embassy in Hanoi, as of 2014, Chinese firms had won contracts to build 15 of 24 new thermal power plants in Vietnam. 
In late 2013, China and Vietnam agreed to the implementation of the Shenzhen-Haiphong trade corridor to link the Vietnamese port city of Haiphong to Shenzhen in China. According to testimony before the U.S.-China Economic and Security Review Commission in May 2015, China has also announced that it will help upgrade the Haiphong port to accommodate large container ships. In addition, through the ADB-supported Greater Mekong Subregion (GMS) Economic Cooperation program, Vietnam and China are participating in a plan to connect Vietnam and other mainland Southeast Asian countries with each other and with China through a series of economic corridors that include improving transportation infrastructure. ADB’s GMS Strategic Framework identifies corridors, including an eastern corridor running north-to-south and connecting China and Vietnam; an east-west corridor connecting Burma, Thailand, Laos, and central Vietnam; and a southern corridor connecting Burma, Thailand, Cambodia, and southern Vietnam. For example, according to Chinese government reporting, the $952 million Hanoi to Lao Cai freeway, which a Chinese contractor is building, is part of the GMS strategic framework. Similarly, the Master Plan on ASEAN Connectivity envisions a rail link through Vietnam connecting the interior of China with Singapore and connecting the capital cities in Vietnam, Cambodia, and Thailand with a spur line to the capital of Laos. This rail link would complement the various transport corridors under the GMS and other existing transport networks, with the aim of creating an integrated transport network throughout Southeast Asia and Asia as a whole. The railway running from China to Ho Chi Minh City in the south of Vietnam is already complete. The Master Plan on ASEAN Connectivity also calls for a network of highways meeting certain quality standards and connecting Vietnam with all of its neighbors, including China. Vietnam has constructed its portions of the highway network. 
Vietnam is one of 57 prospective founding members of China’s proposed Asian Infrastructure Investment Bank, an international institution to finance infrastructure projects throughout the Asia-Pacific region. Under the bank’s initial agreement, the bank’s authorized capital is $100 billion, of which China has pledged $29.8 billion and Vietnam has pledged $663 million. Bank documents indicate that the bank anticipates beginning operations before the end of 2015. We provided a draft of this report for review and comment to the Departments of Agriculture, Commerce, State, and the Treasury and to MCC, OPIC, USAID, Ex-Im, the U.S. Trade and Development Agency, and USTR. We received technical comments from Commerce, State, Treasury, MCC, OPIC, Ex-Im, and USTR, which we incorporated as appropriate. We are sending copies of this report to the Secretaries of Agriculture, Commerce, State, and the Treasury; the Chairman of Ex-Im; the Administrator of USAID; the U.S. Trade Representative; the Director of the U.S. Trade and Development Agency; the Chief Executive Officers of OPIC and MCC; and other interested parties. In addition, the report is available at no charge on the GAO website at www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. We examined available information about U.S. and Chinese trade and investment, competition, and actions to further economic engagement in Indonesia and Vietnam. This report is a public version of a sensitive but unclassified report that we are issuing concurrently. This report addresses the same objectives, and employs the same methodology, as the sensitive report. We conducted fieldwork in Jakarta, Indonesia, and in Hanoi and Ho Chi Minh City, Vietnam. 
We based our selection of these two countries, among the 10 members of the Association of Southeast Asian Nations (ASEAN), on the amounts of U.S. and Chinese exports to, and imports from, each country; foreign direct investment (FDI) in each country; and development assistance in each country. We also considered whether (1) a country participated in U.S. and Chinese trade agreements or was a negotiating partner in the Trans-Pacific Partnership, (2) any regional institutions were located in the country, (3) the country was an emerging partner based on gross domestic product, and (4) the country was a South China Sea claimant. To describe U.S. and Chinese trade and investment in Indonesia and Vietnam, we analyzed data on U.S. and Chinese trade in goods, trade in services, and FDI. To assess the reliability of these data, we cross-checked the data on trade in goods and FDI for internal consistency, and consulted with U.S. officials on the data on trade in goods and the U.S. data on trade in services and FDI. Because of the limited availability of data and the differing contexts for the data sets we report, the time period for each of these data sets varied. We determined that the data were sufficiently reliable for the purposes of our report and have noted caveats, where appropriate, to indicate limitations in the data. To obtain data on U.S. and Chinese trade in goods from 1994 through 2014, we accessed the United Nations’ Commodity Trade Statistics (UN Comtrade) database through the U.S. Department of Commerce’s (Commerce) Trade Policy Information System. The UN Comtrade database provides data for comparable categories of exports and imports of goods for the United States and China. 
Because, according to a Commerce official, the goods exports data that China reports to the UN Comtrade database do not distinguish total exports from re-exports (i.e., goods that are first imported and then exported in substantially the same condition), we used data on total goods exports, which include re-exports, to ensure the comparability of U.S. and Chinese data on goods exports. The data on goods exports from the UN Comtrade database show the free-on-board prices of the goods, which exclude transportation and insurance charges. For imports, we used data on general imports, which include goods that clear customs as well as goods that enter bonded warehouses or foreign trade zones. The data on goods imports show the prices paid for the goods, including the cost of freight and insurance. We determined that the UN Comtrade data on trade in goods for the United States and China were generally reliable for comparing trends over time and the composition of trade. To categorize the goods traded by the United States and China, we assigned each good recorded in the UN Comtrade database to one of the UN’s three Broad Economic Categories—capital, intermediate, or consumer. For goods that the UN does not classify as capital, intermediate, or consumer, we created an unclassified category. For example, the UN does not classify passenger motorcars as capital or consumer goods. To examine each country’s trade in goods with its trading partners over time, we analyzed data from the ASEANstats database for 2003, 2008, and 2013 for Indonesia and 2004, 2008, and 2013 for Vietnam. Because some of Indonesia’s and Vietnam’s trading partners do not report data to the UN Comtrade database, we used data from the ASEANstats database as a comprehensive set of data on trade in goods for all of Indonesia’s and Vietnam’s trading partners. 
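The Broad Economic Category assignment described above is, at its core, a concordance lookup with an explicit fallback for unclassified goods. A minimal sketch in Python, using a small hypothetical code-to-category mapping rather than the actual UN concordance:

```python
# Hypothetical concordance from 4-digit product codes to the UN's three
# Broad Economic Categories; the real UN concordance is far larger.
BEC_LOOKUP = {
    "8411": "capital",       # e.g., turbines (illustrative assignment)
    "2710": "intermediate",  # e.g., refined petroleum (illustrative)
    "6110": "consumer",      # e.g., knitted apparel (illustrative)
}

def classify_good(product_code):
    """Return the good's category, or "unclassified" when the UN does not
    assign one (e.g., passenger motorcars in the report's example)."""
    return BEC_LOOKUP.get(product_code, "unclassified")
```

Each traded good is run through the lookup once, and trade values can then be summed by category.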
We compared trade data from the ASEANstats and the UN Comtrade databases and found some differences in values of bilateral trade between Indonesia and Vietnam and their trading partners. Reasons for the differences include differences in the valuation of goods, differences in data quality, and the omission of some Indonesia and Vietnam trading partners from UN Comtrade data. We determined that the data from the ASEANstats database for Indonesia and Vietnam were generally reliable for comparing each country’s trade in goods with its trading partners over time. To illustrate the importance of accounting for the portion of a country’s exports that originates in other countries, we analyzed data from the Organisation for Economic Co-operation and Development (OECD) and the World Trade Organization (WTO) on trade in value-added goods and services. For U.S. trade in services with Indonesia, we used publicly available data from Commerce’s Bureau of Economic Analysis (BEA). BEA’s data on trade in services with Vietnam for several categories—travel and passenger fares, transportation, education, and “other” private services—are based on data from various sources. According to BEA, its survey data are from mandatory surveys of primarily U.S. businesses with services trade that exceeds certain thresholds. BEA does not survey a random sample of U.S. businesses and therefore does not report the data with margins of error. We calculated the value of U.S. trade in services with Vietnam for 2012 based on tabulations prepared for us by BEA and other sources, including the U.S. Census Bureau. Our estimates of U.S. trade in services with Vietnam represent broad estimates rather than precise values. 
We extrapolated values for certain services at the country level from broader data (e.g., we calculated values for travel services by multiplying the number of travelers for Vietnam by the average traveler expenditure for the region). We calculated values for other services (e.g., business, professional, and technical services) from a range of estimates based on survey data. When the volume of trade for a service was presented as a range, we used the midpoint value to estimate the volume of trade for that service. When the volume of trade for a service was presented as a range and described by BEA as trending upward, we used the lowest value for the earlier years and the highest value for the later years. For data on U.S. firms’ investments in Indonesia and Vietnam from 2007 through 2012, we used data that we obtained directly from BEA. For Chinese firms’ investments, we used data from the UN Conference on Trade and Development as reported by China’s Ministry of Commerce. To identify patterns in, and to compare, U.S. and Chinese FDI, we used U.S. and Chinese data on FDI and noted in our report the following limitations. As we have previously reported, both U.S. and Chinese FDI may be underreported, and experts have expressed particular concern regarding China’s data. U.S. and Chinese firms set up subsidiaries in places such as the Netherlands and the British Virgin Islands, which can be used to make investments that are not captured by U.S. and Chinese data on FDI. Experts state that this could be a significant source of underreporting of China’s FDI. According to BEA, data on U.S. FDI are based on quarterly, annual, and benchmark surveys. BEA’s benchmark survey is the most comprehensive survey of such investment and covers the universe of U.S. FDI. BEA notes that its quarterly and annual surveys cover samples of businesses with FDI that exceed certain thresholds. 
Because BEA does not survey a random sample of businesses, and therefore does not report the data with margins of error, our report does not include margins of error for BEA data. China does not provide a definition of FDI when reporting FDI data. However, the types of data included in Chinese FDI data (e.g., equity investment data and reinvested earnings data) appear similar to data reported for U.S. FDI, for which the United States uses OECD’s definition. Despite the limitations of China’s FDI data, various reports, including those published by international organizations such as the International Monetary Fund (IMF), government agencies, academic experts, and other research institutions, use China’s reported investment data to describe China’s FDI activities. In addition, despite some potential underreporting of FDI data, we determined that the FDI data were reliable for reporting general patterns when limitations are noted. Because of challenges in determining appropriate deflators for some data, we used nominal rather than inflation-adjusted values for U.S. and Chinese trade and investments in Indonesia and Vietnam. However, we first tested the impact of deflating these values and found a limited impact for descriptions of the overall trends. For example, using the U.S. gross domestic product deflator to remove inflation in the goods trade values included in this report would cause total Chinese trade in goods with Indonesia to surpass total U.S. trade in goods in 2005, similar to trends shown for nominal trade values. U.S. total trade in goods in Indonesia increased by a factor of 2.8 from 1994 through 2014 if not adjusted for inflation and by a factor of 1.9 if adjusted for inflation. Over the same period, Chinese total trade in goods increased by a factor of 24.1 in Indonesia if not adjusted for inflation and by a factor of 16.3 if adjusted for inflation. 
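The deflation test described above amounts to expressing both endpoints of a trade series in the same base-year prices before taking their ratio. A minimal sketch with hypothetical deflator and trade values (not the actual U.S. GDP deflator series or the report's underlying data):

```python
def real_growth_factor(nominal_start, nominal_end, deflator_start, deflator_end):
    """Growth factor after removing inflation: deflate each endpoint to
    base-year prices, then take the ratio of the real values."""
    return (nominal_end / deflator_end) / (nominal_start / deflator_start)

# Hypothetical illustration: a 2.8x nominal increase combined with a
# roughly 47 percent rise in the price level works out to about a 1.9x
# real increase, mirroring the kind of gap between nominal and
# inflation-adjusted factors reported above.
factor = real_growth_factor(100.0, 280.0, 1.00, 1.47)
```

The same comparison can be run with any deflator series; the qualitative trend is unchanged when, as here, inflation is modest relative to trade growth.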
To assess the extent of competition between exporters from the United States, China, and other countries, we calculated an export similarity index to compare U.S., Chinese, and other countries’ exports to Indonesia and Vietnam in 2006 through 2014. The export similarity index is a measure of the similarity of exports from two countries to a third country. For example, to calculate the index for U.S. and Chinese exports to Indonesia and Vietnam, we first calculated, for each type of good that the United States and China export, the share of that good in the United States’ and China’s total exports to Indonesia and Vietnam. We then took the minimum of the United States’ and China’s shares. The index is the sum of the minimum shares for all types of goods that the United States and China export to Indonesia and Vietnam. We used data on goods exports from the UN Commodity Trade database at the four-digit level and calculated each country’s export of a particular good as a share of that country’s total exports to Indonesia and Vietnam. We also analyzed data from Commerce’s Advocacy Center on host-government contracts and data for contracts funded by the Asian Development Bank (ADB) and World Bank. Although these data represent a small share of activity in Indonesia and Vietnam, they provide insights into the degree of competition between U.S. and Chinese firms for the projects represented. Commerce’s Advocacy Center data comprised cases where U.S. firms requested the agency’s assistance in bidding for host-government contracts in either Indonesia or Vietnam from 2009 through 2014. Because these data included the nationality of other firms bidding on a host-government contract, we used this information to determine the extent to which Chinese firms or firms of other nations were competing with U.S. firms for these contracts. We counted the numbers of contracts and summed the value of contracts for which each foreign country’s firms competed against U.S. firms. 
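The export similarity index calculation described above (the sum, over all goods, of the minimum of the two exporters' product shares) can be sketched as follows. The product codes and export values are hypothetical, chosen only to illustrate the computation:

```python
def export_similarity_index(exports_a, exports_b):
    """ESI = sum over goods of min(share_a, share_b), where share_x is the
    good's share of country x's total exports to the destination market."""
    total_a = sum(exports_a.values())
    total_b = sum(exports_b.values())
    goods = set(exports_a) | set(exports_b)
    return sum(
        min(exports_a.get(g, 0) / total_a, exports_b.get(g, 0) / total_b)
        for g in goods
    )

# Hypothetical 4-digit product codes and export values (millions USD)
us_exports = {"8411": 500, "1201": 300, "8542": 200}
cn_exports = {"8517": 600, "8542": 300, "6110": 100}

esi = export_similarity_index(us_exports, cn_exports)
# Identical export baskets yield an index of 1; no overlap yields 0.
```

In this toy example, only product 8542 overlaps, so the index is min(0.2, 0.3) = 0.2, indicating limited head-to-head competition.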
For Vietnam, we excluded five contracts for which the nationalities of competitors were not identified. In cases where foreign competitors comprised a consortium of firms from different countries, we counted the whole value of the contract in each competing nationality’s total. We also used the Advocacy Center’s classification of contracts by sector to determine the sectors in which Chinese firms competed for the highest proportion of contracts. To determine the reliability of these data, we manually checked the data for missing values and also reviewed information about the data’s collection. In addition, we interviewed Advocacy Center staff about the data. Advocacy Center staff told us that data from before 2010, when the center began using a new database, may be incomplete because data for some contracts that were closed before 2010 may not have been transferred to the new database. Overall, we found the Advocacy Center data to be reliable for reporting on competition between U.S. and other firms, including Chinese firms, in Indonesia and Vietnam. The World Bank publishes data on the value, sector, and suppliers of its contracts in Indonesia and Vietnam. We used the World Bank’s classification of contracts into procurement categories (goods, civil works, consultant services, and nonconsultant services) to compare the value and types of contracts that U.S. and Chinese firms won from 2001 through 2014. However, we combined the consultant services and nonconsultant services categories into one category, “consultant and other services.” The World Bank data include contracts that were reviewed by World Bank staff before they were awarded. To determine the reliability of these data, we electronically checked the data for missing values and possible errors. We also contacted World Bank personnel to learn how the data were collected and identify any limitations of the data. 
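The contract-tallying rule described above, in which a consortium's full contract value is credited to each competing nationality rather than split among them, can be sketched with hypothetical contract records:

```python
from collections import defaultdict

def tally_by_nationality(contracts):
    """Count contracts and sum contract values per competing nationality.

    Each contract is (value, [nationalities of competing firms]); for a
    consortium, the whole value is counted in each nationality's total.
    """
    counts = defaultdict(int)
    values = defaultdict(float)
    for value, nationalities in contracts:
        for nat in set(nationalities):  # each nationality once per contract
            counts[nat] += 1
            values[nat] += value        # whole value, not a split share
    return counts, values

# Hypothetical contract records (value in millions USD, competitor list)
contracts = [
    (10.0, ["China"]),
    (20.0, ["China", "Japan"]),  # consortium: full $20M in both totals
]
counts, values = tally_by_nationality(contracts)
```

Contracts with unidentified competitors, like the five excluded for Vietnam, would simply be dropped from the input list before tallying.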
We found that the data for contracts funded by the World Bank were generally reliable for the purpose of demonstrating U.S. and Chinese competition in Indonesia and Vietnam over time. We used ADB’s published data on the value, sector, and recipient of its contracts for consulting services, goods, and civil works provided as technical assistance or funded by loans and grants to Indonesia and Vietnam in 2013 and 2014 to compare the value and types of contracts won by U.S. and Chinese firms. ADB only publishes data for consulting contracts over $0.1 million in value and other contracts over $1.0 million, so our analysis of ADB contracts does not include some smaller ADB contracts. In addition, a portion of the ADB data did not have the contracts classified according to the nature of the contract (construction, consulting services, goods, turnkey, and others). Therefore, we classified contracts won by U.S. and Chinese firms that were missing these categories according to those used in the rest of the data. To determine the reliability of these data, we checked the data for missing values and other types of discrepancies. We found that the ADB data were generally reliable for our purpose of reporting on U.S. and Chinese competition in Indonesia and Vietnam in 2013 and 2014. To identify the challenges that U.S. firms face when conducting business in Indonesia and Vietnam, we reviewed the Office of the United States Trade Representative’s (USTR) 2014 and 2015 National Trade Estimate Reports on Foreign Trade Barriers and its 2015 Special 301 Report on intellectual property rights protections. We reviewed the U.S. Department of Agriculture’s (USDA) country strategies for Indonesia and Vietnam, Department of State (State) cables, and Commerce and State’s 2014 reports on doing business in Indonesia and Vietnam. We also interviewed representatives of 12 U.S. 
firms in Indonesia and Vietnam, in sectors such as agriculture and manufacturing, as well as representatives of five private sector and research organizations, such as the American Chamber of Commerce-Vietnam and the Center for Strategic and International Studies. The views expressed in these interviews are not generalizable. To examine the actions that the U.S. and Chinese governments have taken to further economic engagement in Indonesia and Vietnam, we reviewed regional and country studies and U.S. and Chinese agency documents and interviewed U.S. and third-country officials, officials from private sector business associations, and experts from research institutes. We tried to arrange visits with Chinese government officials in Indonesia and Vietnam and in Washington, D.C.; however, they were unable to accommodate our requests for a meeting. U.S. agencies included in the scope of our study were USDA, Commerce, State, the Department of the Treasury, USTR, the Millennium Challenge Corporation, the U.S. Agency for International Development (USAID), the Export-Import Bank of the United States (Ex-Im), the Overseas Private Investment Corporation (OPIC), and the U.S. Trade and Development Agency. To obtain information about U.S. and Chinese trade agreements with Indonesia and Vietnam, we reviewed the trade agreements; U.S. and Chinese government documents; studies from research institutions; prior GAO reports; and documents from multilateral organizations, such as WTO. We identified studies assessing the effect of the China- ASEAN free trade agreement on China’s, Indonesia’s, and Vietnam’s economies by searching the ProQuest database (which includes the EconLit database) and the studies of international organizations such as ADB, and we selected and reviewed studies that estimated the impact of the agreement on these three economies. We also interviewed U.S. 
officials in Indonesia and Vietnam, officials from private sector business associations, and experts from research institutes. To calculate the percentage of Indonesia’s and Vietnam’s total goods trade represented by their trade with the participants in the Regional Comprehensive Economic Partnership Agreement, we used data on trade in goods from the ASEANstats database. To determine the reliability of these data, we compared trade data from the ASEANstats and the UN Comtrade databases and found some differences in values of bilateral trade between ASEAN countries and their trading partners. Reasons for the differences include differences in the valuation of goods, differences in data quality, and the omission of some ASEAN trading partners from UN Comtrade data. We determined that the data from the ASEANstats database for Indonesia and Vietnam were generally reliable for comparing each country’s trade in goods with its trading partners. To obtain information about U.S. financing in Indonesia and Vietnam, we compiled Ex-Im and OPIC data from these agencies’ annual reports and congressional budget justifications and interviewed agency officials to provide additional context and to clarify elements of the data. Where relevant, we note that additional Ex-Im insurance may include Indonesia and Vietnam but do not include these data in our totals. To determine the reliability of these data, we interviewed agency officials and checked their published annual reports against agency-provided summary data to determine any limitations or discrepancies in the data. We determined that data from Ex-Im and OPIC were generally reliable for presenting trends and aggregate amounts by year. To document U.S. efforts to provide export promotion services in Indonesia and Vietnam, we reviewed information on State’s Economic Sections at the U.S. Embassy in Indonesia and Vietnam and interviewed State, Commerce, and USDA officials in Washington, D.C., and in Vietnam and Indonesia. 
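The cross-database comparison described above (checking ASEANstats bilateral trade values against UN Comtrade and noting differences) can be sketched as follows. The tolerance threshold and the data values are hypothetical, and this is an illustration of the approach rather than GAO's actual procedure.

```python
def flag_discrepancies(source_a, source_b, tolerance=0.10):
    """Compare bilateral trade values reported by two databases.

    Each source maps a (partner, year) key to a reported trade value.
    Returns the keys where the two sources differ by more than `tolerance`
    (as a fraction of the larger value), or where a partner appears in only
    one source -- mirroring the omission of some ASEAN trading partners
    from one database. Assumes positive trade values.
    """
    flagged = {}
    for key in set(source_a) | set(source_b):
        a, b = source_a.get(key), source_b.get(key)
        if a is None or b is None:
            flagged[key] = "missing from one source"
        elif abs(a - b) / max(a, b) > tolerance:
            flagged[key] = f"values differ: {a} vs {b}"
    return flagged
```

Keys flagged this way would prompt the kind of follow-up described above, such as investigating differences in the valuation of goods or in data quality between the two databases.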
To describe Chinese financing in Indonesia and Vietnam, we used information reported by State and China’s Export-Import Bank. We also interviewed private sector and research institute representatives. To document Chinese support for firms in Indonesia and Vietnam, we used publicly available information from a variety of sources, including Chinese embassy websites; the Bank of Indonesia’s website; China’s Ministry of Commerce; and Xinhua, China’s state press agency. To document U.S. support for economic development and integration in Indonesia and Vietnam, we used the USAID trade capacity building database to capture U.S. development assistance efforts related to trade in Indonesia and Vietnam. USAID collects data to identify and quantify the U.S. government’s trade capacity building activities in developing countries through an annual survey of agencies on behalf of USTR. We also reviewed agency project summaries and interviewed agency officials in Washington, D.C., and in Indonesia and Vietnam. To determine the reliability of these data, we interviewed agency officials regarding their methods for compiling and reviewing the data. We determined that data from USAID’s trade capacity building database were sufficiently reliable for our purposes. To describe China’s support for regional integration in Indonesia, we assessed public statements from Chinese and Indonesian officials and information reported by U.S. agencies, including State, and we interviewed U.S. and Indonesian officials. To describe China’s support for regional integration in Vietnam, we assessed information reported by U.S. agencies, including State and USAID, and interviewed U.S. and Vietnamese officials. We also reviewed publicly available information on the Asian Infrastructure Investment Bank’s website. We conducted this performance audit from April 2014 to October 2015 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. From 2000 through 2014, the composition of U.S. and Chinese trade in goods with Indonesia, in terms of value, remained relatively stable except for a significant increase in China’s mineral imports (see figs. 11 and 12). Textiles have represented the largest share of U.S. imports from Indonesia since 2005. China’s mineral imports increased from 25 percent of its total imports from Indonesia in 2000 to a peak of 58 percent in 2013 before declining to 42 percent in 2014. Animals, plants, and food generally represented the largest share of U.S. exports to Indonesia from 2005 through 2014, and machinery represented the largest share of Chinese exports to Indonesia from 2000 through 2014. In 2014, almost half of the United States’ and most of China’s goods trade with Indonesia consisted of goods for industrial use, most of which are intermediate goods (see fig. 13). Among the industrial goods that the United States traded with Indonesia, rubber was the top U.S. industrial import and cotton was the top U.S. industrial export in 2014. Among the industrial goods that China traded with Indonesia in 2014, coal was the top Chinese industrial import and phones for cellular and other networks were the top Chinese industrial export. In 2014, the United States exported $1.9 billion of civilian aircraft, aircraft engines, and aircraft parts—the overall top U.S. export to Indonesia, which represents 23 percent of U.S. exports to Indonesia and includes capital, intermediate, and consumer goods. From 2000 through 2014, the composition of U.S. 
and Chinese trade in goods with Vietnam generally shifted, in terms of value, from predominantly raw commodities to manufactured goods (see figs. 14 and 15). In 2000, the largest share of U.S. imports from Vietnam consisted of animals, plants, and food, while the largest share of Chinese imports from Vietnam consisted of minerals. However, by 2014, the largest share of U.S. imports from Vietnam consisted of textiles, which rose from 6 percent of U.S. imports in 2000 to 31 percent in 2014, while the largest share of Chinese imports consisted of machinery, which rose from 1 percent in 2000 to 47 percent in 2014. From 2000 through 2014, animals, plants, and food grew to represent the largest share of U.S. exports to Vietnam, while machinery grew to represent the largest share of Chinese exports to Vietnam. In 2014, the majority of U.S. imports from Vietnam consisted of goods for consumer use, while the majority of U.S. exports to Vietnam—as well as Chinese imports from, and exports to, Vietnam—consisted of goods for industrial use (see fig. 16). Among the consumer goods that the United States and China traded with Vietnam, wooden bedroom furniture was the top U.S. import and nuts were the top U.S. export, while cameras were the top Chinese import and women’s and girls’ cotton jackets and blazers were the top Chinese export. Among the industrial goods that the United States and China traded with Vietnam, portable digital automatic data processing machines were the top U.S. import and cotton was the top U.S. export, while microchips were the top Chinese import and phone-set parts were the top Chinese export. U.S. agencies have identified certain official development assistance to Indonesia and Vietnam as trade capacity building assistance. 
This assistance addresses, for example, the countries’ regulatory environment for business, trade, and investment; constraints such as low capacity for production and entrepreneurship; and inadequate physical infrastructure, such as poor transport and storage facilities. In fiscal years 2009 through 2013, U.S. agencies provided about $373 million in trade capacity building assistance to Indonesia (see table 5). As table 5 shows, three agencies—the Millennium Challenge Corporation (MCC), the U.S. Agency for International Development (USAID), and the Department of Labor (Labor)—provided the largest amounts of U.S. trade capacity building assistance to Indonesia in fiscal years 2009 through 2013. MCC provided about $333 million—about 90 percent of U.S. trade capacity building assistance to Indonesia during this period—as part of a 5-year, $600 million compact with Indonesia. One of the compact’s three projects, the Green Prosperity Project, provides technical and financial assistance for projects in renewable energy and natural resource management to help raise rural household incomes. A second project, the Procurement Modernization Project, is designed to help the government of Indonesia develop a more efficient and effective process for the procurement of goods and services. MCC obligates compact funds when a compact enters into force, disbursing the funds over the 5 years of the compact. As of March 2015, MCC had expended $2.3 million of its $333 million commitment for the Green Prosperity Project and $6 million of its $50 million commitment for the Procurement Modernization Project. 
USAID provided about $19 million in trade capacity building assistance to, among other things, provide economic policy advisory services to the Indonesian government; strengthen key trade and investment institutions by contributing to a World Bank Fund; and strengthen the Indonesian Ministry of Trade’s capacity to analyze, negotiate, and implement bilateral and multilateral trade agreements. In addition, USAID officials told us that they are working to build and sustain a culture of accountability in Indonesia at the national and subnational levels by, for example, working with the U.S. Department of Justice to train investigators to support Indonesia’s Corruption Eradication Commission. However, according to agency officials, after consultations with Indonesian officials and others knowledgeable about the Indonesian economy, USAID stopped providing direct support for economic and trade policy issues. USAID officials also said that the Indonesian government did not view the support as a priority. Labor provided about $11 million in trade capacity building assistance to improve Indonesia’s compliance with labor standards and its competitiveness in global supply chains, to combat child labor, and to build the capacity of domestic labor organizations. In fiscal years 2009 through 2013, U.S. agencies provided about $32 million in trade capacity building assistance to Vietnam (see table 6). As table 6 shows, four agencies—USAID, the Departments of the Treasury (Treasury) and State (State), and the U.S. Trade and Development Agency (USTDA)—provided the majority of U.S. trade capacity building assistance to Vietnam in fiscal years 2009 through 2013. USAID provided approximately $20.4 million—about 64 percent of U.S. trade capacity building assistance to Vietnam during this period—to enhance the country’s economic governance. 
From 2001 through 2010, USAID’s Support for Trade Acceleration projects sought to modernize Vietnam’s commercial laws and legal system to help the country meet its bilateral trade agreement commitments and prepare it to join the World Trade Organization. In addition, the Vietnam Competitiveness Initiative, which began in 2003 and ended in 2013, sought to strengthen Vietnam’s regulatory system as well as its regulatory framework and models for infrastructure development. The Provincial Competitiveness Index, which began in 2013 and is scheduled to end in 2016, assesses and reports on barriers to economic development and doing business in Vietnam. Moreover, USAID’s Governance for Inclusive Growth project—which began in 2013 and is scheduled to end in 2018—seeks to provide assistance relevant to Vietnam’s Trans-Pacific Partnership commitments, among other things. Finally, the Lower Mekong Initiative, encompassing Thailand, Cambodia, Laos, Burma, and Vietnam, supports, among many development efforts, reduction of the development gap between the more economically developed Association of Southeast Asian Nations countries and less developed countries, such as Vietnam, and also supports regional efforts toward economic integration. Treasury provided, through its Office of Technical Assistance (OTA), about $5.9 million in trade capacity assistance for several projects to improve Vietnam’s government operations. For example, OTA is currently assisting Vietnam with implementation of International Public Sector Accounting Standards. Previously, OTA provided assistance in the areas of banking supervision, strengthening of tax administration, and debt management. State provided about $2.6 million in trade capacity assistance, primarily for improving Vietnam’s customs and border control. 
State’s Export and Border Security Assistance program promotes border security and customs operations by providing training, equipment, vehicles, spare parts, infrastructure, travel to workshops and conferences, translations of key documents such as control lists, and other exchanges. State has provided equipment and training to Vietnamese officials in support of these efforts. USTDA provided about $2 million in U.S. trade capacity building assistance for projects to support potential U.S. investment opportunities. In 2014, USTDA provided $900,000 for a feasibility study—the largest USTDA-funded project in Vietnam that year—for an integrated telecommunications control center for the Ho Chi Minh City urban rail system. In August 2014, Vietnam became the second country to sign a memorandum of understanding with USTDA, under which USTDA will provide training and technical assistance to public procurement officials to implement Vietnam’s revised procurement law. In July 2015, USTDA signed two additional grant agreements with Vietnam for (1) technical assistance and training in support of Vietnam’s efforts to meet civil aviation safety standards and (2) a feasibility study to support the efforts of a Vietnamese private firm to develop an offshore wind power project. In addition to the contact named above, Emil Friberg (Assistant Director), Charles Culverwell, Fang He, Kira Self, Michael Simon, and Eddie W. Uyekawa made key contributions to this report. Benjamin A. Bolitzer, Lynn A. Cothern, Mark B. Dowling, Justin Fisher, Michael E. Hoffman, Reid Lowe, and Oziel A. Trevino provided technical assistance.
The United States and China have each sought to increase their economic engagement in Southeast Asia. U.S. agencies have identified Indonesia and Vietnam as important emerging U.S. partners that contribute to regional stability and prosperity. Indonesia has the world's 10th largest economy in terms of purchasing power, and Vietnam is one of the most dynamic economies in East Asia. Both the United States and China have established comprehensive partnerships with each country that are designed to enhance their bilateral cooperation in key areas. GAO was asked to examine the United States' and China's economic engagement in Southeast Asia. GAO issued a report on 10 Southeast Asian countries in August 2015. In this report, GAO presents case studies for two of these countries, Indonesia and Vietnam, providing greater detail about the United States' and China's trade and investment, competition, and actions to further economic engagement in the two countries. GAO analyzed publicly available economic data and documentation from 10 U.S. agencies and the Chinese government. The data that GAO reports have varying time periods because of the data sets' limited availability and differing contexts. GAO interviewed U.S., Indonesian, and Vietnamese officials and private sector representatives. This is the public version of a sensitive but unclassified report that is being issued concurrently. GAO is not making any recommendations in this report. Indonesia. In 2014, China's imports from, and exports to, Indonesia exceeded the United States' (see figure). The United States and China compete more often with other countries than with each other in goods exported to Indonesia and win contracts in different sectors. 
In contrast to the United States, which is not involved in a free trade agreement (FTA) with Indonesia, China is a party to a regional FTA that includes Indonesia and is negotiating the Regional Comprehensive Economic Partnership (RCEP) with Indonesia and 14 other countries. In fiscal years 2009 through 2014, U.S. agencies' financing for exports to, and investment in, Indonesia totaled about $2.5 billion, compared with at least $34 billion in Chinese financing, according to the Department of State. From 2007 through 2012, U.S. foreign direct investment (FDI) of $9.6 billion exceeded China's reported $2.7 billion, according to available data. Vietnam. In 2014, U.S. imports from Vietnam exceeded China's, while Chinese exports to Vietnam exceeded U.S. exports (see figure). As in Indonesia, the United States and China compete more often with other countries than with each other in goods exported to Vietnam and win contracts in different sectors. The United States and Vietnam are both participants in the proposed regional Trans-Pacific Partnership, while China and Vietnam are both parties to a regional FTA and the RCEP negotiations. In fiscal years 2009 through 2014, U.S. agencies' financing for exports to, and investment in, Vietnam totaled about $205 million, compared with at least $4.5 billion in Chinese financing, according to the Department of State. From 2007 through 2012, China's reported FDI of $1.2 billion was more than twice the United States' reported FDI of $472 million, according to available data.
The U.S. government maintains more than 260 diplomatic posts—including embassies, consulates, and other diplomatic offices—in about 180 countries worldwide. In addition, according to various estimates, there are over 66,000 personnel overseas, including both U.S. direct hires and locally employed staff under chief of mission authority, representing more than 30 agencies and government entities. Agencies represented overseas include, among others, the Departments of Agriculture, Commerce, the Treasury, Defense, Homeland Security, Justice, and State and USAID. According to the Office of Management and Budget, the average cost across all agencies of having one U.S. direct hire overseas for 2007 is $491,000, including direct and indirect personnel costs as well as support costs such as security, office leases and furnishings, and field travel. According to State’s Bureau of Resource Management, State’s average cost of having one U.S. direct hire overseas for 2007 is approximately $400,000. The White House, Congress, the Office of Management and Budget, and our own agency have emphasized rightsizing as a key initiative to ensuring that the overseas presence is at an optimal and efficient level to carry out foreign policy priorities. The President’s Management Agenda has identified rightsizing as one of the administration’s priorities. The agenda stipulates that all agencies with an overseas presence should integrate rightsizing into their workforce plans and reconfigure overseas staff allocations to the minimum amount necessary to meet U.S. foreign policy goals. Figure 1 illustrates the various levels of involvement in U.S. government overseas rightsizing. In fiscal year 2004, Congress mandated the establishment of the Office of Rightsizing within State. The office was directed to lead State’s effort to develop internal and interagency mechanisms to better coordinate, rationalize, and manage the deployment of U.S. 
government personnel overseas, under chief of mission authority. The Office of Rightsizing reviews and approves rightsizing reports for all capital construction projects for new embassy compounds or facilities, as well as the staffing composition of 20 percent of all U.S. missions annually, so that each mission is reviewed once every 5 years. According to the Office of Rightsizing, without an approved rightsizing report based on these reviews, the Office of Management and Budget will not forward to Congress a programming notification for construction. In addition to the Office of Rightsizing, a number of entities within State at the Washington, D.C., and post levels are involved in initiatives and efforts related to rightsizing. State’s regional bureaus are involved with posts’ rightsizing reviews as well as the administration of regional service centers. In addition, State’s Bureau of Overseas Buildings Operations is responsible for the worldwide overseas buildings program for State and the U.S. government community serving abroad under chief of mission authority. The Bureau of Overseas Buildings Operations is directing an expanded new embassy construction program to provide safe, secure, and functional work places for the diplomatic and consular missions overseas. At the post level, the Chief of Mission is responsible for the security and safety of every U.S. government and foreign national employee at the mission. The precise structure of a mission is determined by the Chief of Mission through the National Security Decision Directive 38 (NSDD-38) process, which provides authority for the Chief of Mission to determine the size, composition, or mandate of personnel operating at the mission. See figure 1 for a depiction of the Chief of Mission’s involvement in the rightsizing process. The operation of embassies and consulates overseas requires basic administrative support services for overseas personnel. 
The management section, which is normally headed by a management counselor or officer, is the section responsible for overseeing the administrative functions at a post and generally serves as the recipient of requests from Washington, D.C., pertaining to staffing and rightsizing. Administrative support services at posts are generally provided through the State-managed International Cooperative Administrative Support Services (ICASS) system, which provides more than 30 services—including financial management vouchering, human resources, travel services, housing, vehicle maintenance, and motor pool—with costs of the services divided among the agencies and sub-agencies with staff at the post, based on the level of ICASS services used. To address the security and other deficiencies of overseas embassies, consulates, and other buildings, Congress established an interagency Capital Security Cost Sharing Program to generate almost $18 billion over a 14-year period to accelerate the construction of approximately 150 new, secure, and functional embassy and consular compounds. The main objectives of the program are to generate funds for new embassy compound construction and to encourage State and other agencies to rightsize their staff by requiring that all agencies with an overseas presence bear some of the costs for building construction. Capital security cost sharing is based on the total number of existing or authorized positions that an agency has overseas in U.S.-government-owned or leased facilities, as well as any projected staff growth positions. Cost sharing is also based on the type of space occupied by post personnel. Charges are being phased in over 5 years, with the fiscal year 2005 per capita charges being 20 percent of the fully phased-in amount and fiscal year 2009 per capita charges representing the full amount. Table 1 illustrates the fully phased-in per person charges for Capital Security Cost Sharing. 
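The phase-in schedule described above, with fiscal year 2005 charges at 20 percent of the fully phased-in per capita amount and fiscal year 2009 charges at the full amount, can be sketched as a small calculation. The even 20-percentage-point annual steps and the sample charge are assumptions for illustration; actual per capita charges vary by type of space occupied, as the source notes.

```python
def phased_charge(full_charge, fiscal_year):
    """Per capita charge during the FY2005-FY2009 phase-in.

    Assumes even annual steps: 20% of the fully phased-in charge in
    FY2005, rising 20 percentage points per year to 100% in FY2009.
    This even ramp is an illustrative assumption.
    """
    if not 2005 <= fiscal_year <= 2009:
        raise ValueError("phase-in covers fiscal years 2005-2009")
    fraction = 0.20 * (fiscal_year - 2004)  # 0.20 in FY2005 ... 1.00 in FY2009
    return full_charge * fraction
```

Under this sketch, an agency facing a hypothetical $100,000 fully phased-in charge per position would pay $20,000 per position in fiscal year 2005 and the full $100,000 in fiscal year 2009.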
Almost 5 years into the President’s Management Initiative on rightsizing, the U.S. government does not yet have accurate data on the size and composition of the U.S. overseas presence at embassies and consulates; however, State is working on a unified database which, if periodically updated by posts, should provide an accurate depiction of the overseas presence. Several agencies have undertaken efforts to examine and adjust their staffing configurations, which have been driven by various factors, such as congressional mandates; the rising costs of building new, secure embassy buildings; and other shared costs. According to State officials, there is not yet an accurate picture of the size and composition of the U.S. overseas presence at embassies and consulates. To make informed staffing and rightsizing decisions and to conduct accurate analyses of overseas staffing changes, it is important that the U.S. government have an accurate account of all overseas positions under chief of mission authority. Moreover, accurate staffing data are needed to assess each agency a fair share of the cost of the embassy construction program. Depending on the data’s source and time, estimates of the total overseas presence under chief of mission authority run from 66,000 to about 69,000 American and non-American personnel, such as locally employed staff, from more than 30 agencies. In addition, State estimated that there are approximately 78,000 U.S. government positions overseas, as of December 2005. Some of these positions have been eliminated or are in the process of being eliminated or reconciled, according to an Office of Rightsizing document. Further, according to State’s estimates, there are almost 22,000 direct-hire American positions overseas. Figure 2 provides the estimated percentage breakdown of the total American positions overseas under chief of mission authority by the key agencies, according to the Office of Rightsizing. 
According to State officials, State has faced various difficulties in developing accurate data. For example, overseas staffing information is recorded in a number of different databases at a post, depending on the purpose of the information. In addition to the databases from Washington, D.C., posts have created their own databases, resulting in information not being uniform from post to post. Moreover, a State official reported that some agencies have failed to provide human resources data to individual posts. State officials added that changes made in one database do not automatically populate others. Therefore, posts need to make changes in a number of databases to ensure that records are fully updated and, in particular, that new information appears accurately at State headquarters. As a result, State’s numbers might include positions that have been eliminated or are vacant, or some positions might be entered more than once. Officials in the Office of Rightsizing also said that there are large numbers of employment categories overseas and that each agency might categorize its personnel overseas in a different manner or use different methodologies. They pointed out that, in some cases, agencies hire and count employed family members as U.S. direct hires while, in other instances, employed family members are counted as locally employed staff. They further explained that State’s current databases have not kept pace with changes in nomenclature. For example, although new agencies and components were established as part of the formation of the Department of Homeland Security, overseas positions might still appear in databases as part of the old Immigration and Naturalization Service. The incorrect reporting of staff positions could create problems for agencies, such as being incorrectly charged for positions under Capital Security Cost Sharing, according to non-State officials. 
Since 2004, State has required posts to utilize the Post Personnel database to account for overseas staffing positions for both American personnel and locally employed staff, according to the Office of Rightsizing. The Executive Director of the Bureau of Human Resources told us that, if used properly, this database has the potential to provide the staffing data that could lead to accurate overseas staffing numbers. However, officials in the Office of Rightsizing stated that the database is only as complete as the information that the posts enter into it, and that some posts might not fully understand how to use the software properly. The Executive Director of the Bureau of Human Resources indicated that his bureau has been developing training modules to educate post officials on using the Post Personnel database correctly. State’s Office of Rightsizing has been working with the Bureau of Human Resources to develop an improved database that will enable State to present a more accurate picture of all personnel and agencies assigned overseas under chief of mission authority. According to the Office of Rightsizing, the improved database will result in one complete and accurate database of all U.S. government agencies overseas and will eliminate the need for multiple requests to posts to update staffing data. State has been working toward making the Post Personnel database application the exclusive database for the entry of all staffing information at posts. In addition, State has been devising standardized organizational codes for post personnel for all agencies overseas. According to State officials, the Post Personnel database will be linked to other existing applications and will populate a number of other databases, including Post Profiles, ICASS, and the Bureau of Overseas Buildings Operations database for Capital Security Cost Sharing, thus eliminating potential errors associated with duplicative and incorrect entries. 
The Office of Rightsizing reported that the data in Post Personnel will soon be made available to agencies with an overseas presence so that they can verify it regularly to ensure consistency. Non-State agency officials expressed the need for transparency in overseas staffing data, since they contended that the data entered at posts often were not verified by their agencies. The Executive Director of the Bureau of Human Resources expressed concern that there needs to be some type of accountability mechanism in place to ensure that posts are inputting information regularly and accurately into Post Personnel. He added that without such a mechanism, the quality and validity of the data will be in question. State officials said that the integrated staffing database is scheduled to be completed and operational by fall 2006. However, State has not provided guidance to posts that ensures staffing information in the Post Personnel database will be continuously updated or that outlines the accountability mechanisms for ensuring that staffing information is complete, according to a State official. Over the past several years, agencies with an overseas presence have undertaken initiatives to assess their overseas staffing configurations. Several agencies reported that they have added staff overseas as a result of new mission requirements, and others reported that they have relocated or reduced their personnel to better meet mission needs and respond to rightsizing efforts. Many agency officials with whom we spoke indicated that they have conducted comprehensive internal reviews or hired consultants to assess their overseas programs and workload. For example, the Department of Agriculture recently completed a global rightsizing review and found that the department is overstaffed in some countries and understaffed in others. As a result, the Foreign Agricultural Service is repositioning its personnel to better accomplish organizational goals. 
In addition, Department of Homeland Security officials indicated that they have been working to fully assess their overseas presence and to identify redundancies within the various Department of Homeland Security components overseas. State also recently announced plans for the global repositioning of its overseas presence, which entails moving hundreds of positions across the world—primarily from Washington and Europe to critical missions in regions such as Africa, South and East Asia, and the Middle East. Some agency officials said that their decisions on the numbers of overseas staff needed are guided by a number of factors, including congressional mandates, mission requirements, and budget constraints such as Capital Security Cost Sharing and ICASS costs. Officials with whom we spoke at several agencies said that they have increased staff overseas as a result of the global war on terror, border security activities, and efforts to combat drug trafficking and weapons of mass destruction. For example, officials in the Department of Justice said that the department’s various components have increased their overseas presence due to these factors; in fact, U.S. Drug Enforcement Administration officials reported that they increased their presence by almost 10 percent between 2000 and 2005. In addition, Customs and Border Protection officials told us that they have been increasing their personnel overseas since 2002 due to requirements for the Container Security Initiative program. While a number of agencies have been increasing their presence overseas, a few agencies have decreased or are projected to maintain the current level of their overseas presence due to cuts in program budgets. For example, the Foreign Commercial Service reported that it has reduced its overseas staffing levels by approximately 13 percent since the beginning of fiscal year 2001. 
In addition, Foreign Agricultural Service officials said that their overseas presence is likely to remain static. A number of agencies we spoke with said that the costs of overseas operations, including rising Capital Security Cost Sharing program and ICASS costs, have caused them to examine their overseas presence. Department of Commerce officials reported that, while Capital Security Cost Sharing costs represented 7 percent of their overseas costs in fiscal year 2006, by fiscal year 2009 they are expected to represent 21 percent of the agency’s overseas costs. In addition, agency officials said that, because Capital Security Cost Sharing charges are based on every existing or planned authorized overseas position, regardless of whether the position is filled, agencies have effectively been encouraged to eliminate vacant positions or keep their projected numbers low. For example, Department of Commerce officials said that Capital Security Cost Sharing costs have forced the agency to keep its projected overseas numbers low rather than develop more realistic projections; because program costs are based in part on projected as well as existing staffing numbers, the agency does not want to estimate and pay for positions that might not be needed in the future. Some agency officials also indicated that rising ICASS costs have affected their budgets and caused them to reevaluate their overseas presence. For example, Department of Homeland Security officials said that, because of the high costs of using ICASS, they are currently reevaluating their use of the services and stressed the importance of having flexibility to opt out of ICASS services. Some agency officials with whom we spoke raised several concerns about the impacts of Capital Security Cost Sharing and ICASS costs on staffing configurations. 
For Capital Security Cost Sharing, officials expressed concern that it is difficult for them to accurately project their overseas staffing numbers, since potential unforeseen events overseas, such as natural disasters, could necessitate a reduction or increase in personnel. Some agency officials expressed concern that, as the Capital Security Cost Sharing costs increase, they might be priced out of an overseas presence or have to tap into their program funds to sustain such a presence. Table 2 depicts the Capital Security Cost Sharing charges for fiscal year 2007 that appear in the President’s budget for the agencies that we spoke with. Moreover, in order to mitigate the effects of ICASS fees coupled with agency budget cuts, some agency officials indicated that officials overseas are doing more administrative activities themselves, which takes time away from accomplishing their mission. In addition, agencies have sought other cost-effective ways of operating overseas, including hiring family members of staff, hiring local Americans, or utilizing locally employed staff (this final option has limitations, however). For example, according to the U.S. Marshals Service, utilizing locally employed staff over U.S. direct hires has resulted in considerable savings to the agency, and it estimates that the savings in relocation expenses, foreign housing, and other foreign entitlements, which direct hires receive but locally employed staff do not, would exceed $1 million every 3–4 years. However, agency officials indicated that there are limitations to using locally employed staff at posts to carry out some duties due to national security concerns. For example, Department of the Treasury officials said that they are not able to utilize locally hired foreign nationals to carry out the work of U.S. direct hires due to the sensitive investigative nature of their work and privacy laws. 
In addition, officials in the Departments of Homeland Security and Justice indicated that, due to the sensitive nature of their work, they can only allow American citizens with a security clearance to perform most of their overseas duties. As an alternative to sending additional U.S. direct hires to posts, some agencies employ eligible family members of agency staff or hire U.S. citizens already living in the host country to carry out some of the agency functions. Department of Homeland Security officials explained that utilizing U.S. citizens living in the host country is much cheaper than sending a U.S. direct hire to a post because the department does not have to pay benefits such as housing, school costs, and other allowances to locally hired U.S. citizens. In early 2004, State established the congressionally mandated Office of Rightsizing to primarily coordinate all agency staffing requests, administer rightsizing and staffing reviews, and work with State entities and other agencies on rightsizing. The basic mission of the office is to better coordinate, rationalize, and manage the deployment of U.S. government personnel overseas, under chief of mission authority. Since its formation, some of the activities of the office have included coordinating staffing requests of U.S. government agencies, developing guidance for and analyzing post rightsizing reviews, and formulating a rightsizing review plan. Non-State agencies have voiced a number of concerns related to interactions with the Office of Rightsizing, including their desire to be more involved in the rightsizing process. To better involve all agencies in rightsizing efforts and better understand their priorities, the Office of Rightsizing co-hosted an interagency summit in March 2006. In February 2004, the Office of Rightsizing was established within State. 
The roles and responsibilities of the office include coordinating all agency NSDD-38 requests; administering rightsizing and staffing reviews; and working with State entities and other agencies on rightsizing, regionalization, and shared service initiatives. The office started as a small operation with only a few staff; however, over the past 2 years it has grown in size. As of early May 2006, the office includes a director, three NSDD-38 analysts, and three rightsizing analysts. According to State officials, additional staffing is needed to handle the growing number of initiatives that the office is involved with. The Director of the Office of Rightsizing told us that he has requested two additional staff to work on analyzing rightsizing reviews and compiling rightsizing data, and hopes that the positions will be filled by summer 2006. Since the Office of Rightsizing was established, it has initiated a number of processes and has been involved in a number of efforts. These efforts have included administering and analyzing post reviews, formulating a review plan, developing instructions for post Mission Performance Plans, automating the NSDD-38 application process, and issuing a number of quarterly reports summarizing State’s rightsizing actions and accomplishments. In addition, the office has been involved with two State initiatives on rightsizing, which include demonstrating results achieved by moving administrative functions away from posts to remote locations; and eliminating duplicative functions at posts, also known as sharing support services. One of the principal activities of the Office of Rightsizing has been administering and analyzing post rightsizing reviews. In 2005, the office established a formal review process and guidance for all posts overseas, including new embassy construction projects. The process focuses on linking staffing to mission goals, eliminating duplication, and promoting shared services. 
In fiscal year 2005, about 35 reports were submitted by posts and analyzed by the Office of Rightsizing. Figure 3 shows the missions that conducted a rightsizing review in fiscal year 2005, according to the office. All of these reports pertained to posts scheduled to have construction projects for a new embassy compound or office building in the near future. According to the Office of Rightsizing, the final reports that it has analyzed have been or will be submitted to Congress as State seeks budget appropriations for these projects. In fiscal year 2006, the office has tasked over 40 posts to conduct reviews—more than 20 posts in each of the fall and spring cycles. The Office of Rightsizing has developed a 5-year plan, which includes the schedule of when missions will be asked to conduct reviews by fiscal year (see appendix IV). The plan is largely driven by State’s Bureau of Overseas Buildings Operations’ building schedule, but also takes into consideration posts participating in State initiatives, missions of highest priority, and countries with multiple missions, according to an official in the Office of Rightsizing. For fiscal year 2005, all of the reviews were conducted in anticipation of the post receiving a new embassy compound or building in the future. For reviews scheduled to be conducted in fiscal years 2006 through 2009, the yearly schedule dictates that posts with planned capital projects will generally perform their rightsizing reviews in the fall of the fiscal year, while those without planned projects will perform their reviews in the spring. Figure 4 provides additional information on the review cycle and steps. The Office of Rightsizing has issued guidance to posts to include rightsizing statements in their Mission Performance Plans, starting with the fiscal year 2007 submission. 
The guidance states that the plans must include a brief discussion of the rightsizing reviews or other rightsizing initiatives undertaken by the mission and should summarize the results, resource implications, and actions taken to implement review recommendations. However, we reviewed nine Mission Performance Plans for fiscal year 2007 and found that only one post had included a discussion of the rightsizing efforts it had undertaken. Officials in the Office of Rightsizing indicated that initially there had not been a serious effort to push posts to provide analysis on rightsizing in their fiscal year 2007 Mission Performance Plans and, as a result, not all posts have done so. However, they said that the office has placed more emphasis on having posts include rightsizing discussions in their 2008 plans. In May 2006, the Director of the Office of Rightsizing reported that his office is participating in the reviews of the recent Mission Performance Plans submitted by posts. The Office of Rightsizing has been working with agencies and coordinating with Chiefs of Mission at posts on NSDD-38 requests, particularly those related to new programs. These requests are submitted to the Chief of Mission for approval of any proposed changes in agencies’ staffing elements at the post. Officials in the office said that they act in an advisory capacity between the agencies that are looking to establish or increase personnel at a post and the Chief of Mission to determine whether the function needs to be performed overseas. However, they told us that it is ultimately the Chief of Mission’s decision to accept or deny an agency’s request to send personnel to post. In fall 2005, the Office of Rightsizing implemented an NSDD-38 Web-enabled application so that agencies can now submit their overseas staffing requests via the Internet. 
According to an Office of Rightsizing document, because the application process is now standardized online—which ensures that agency NSDD-38 submissions are correct and complete—Chiefs of Mission at posts can immediately consider the submissions. Since spring 2005, the Office of Rightsizing has published three quarterly reports that highlight State’s overall rightsizing efforts and performance, as well as summarize the accomplishments and publications of the office. The quarterly reports have also included copies of State cables sent to posts pertaining to rightsizing-related issues, the Office of Management and Budget’s President’s Management Agenda rightsizing scorecard and summary, and the guidance and sample report that the Office of Rightsizing has sent to posts. The Director of the Office of Rightsizing stated that the quarterly reports are intended to provide both State bureaus and non-State agencies with an understanding of rightsizing measures and processes. The Office of Rightsizing is also involved with State’s efforts to ensure that administrative functions that do not need to be conducted at posts are carried out from remote locations. According to State officials, potential advantages of providing support to posts from remote locations include cost savings, enhanced security for American personnel, and improved quality of administrative support. State currently provides remote support to many agencies at posts, primarily in the areas of financial management and human resources, from two dedicated regional service centers—the Florida Regional Center in Fort Lauderdale, Florida, and the Regional Support Center in Frankfurt, Germany. In addition, State provides remote support of some administrative functions through partnering arrangements whereby one post with the personnel and expertise in a given administrative function assists a smaller post. 
In order to further expand remote support, State’s fiscal year 2006 operational plan, Organizing for Transformational Diplomacy: Rightsizing and Regionalization, identifies additional post functions that can be performed remotely to minimize the U.S. overseas footprint and reduce costs. The plan focuses on first removing non-location-specific functions—or functions that could potentially be removed from posts and carried out either from the United States or a regional center—from critical danger missions, where State officials said it is crucial to have as few personnel at posts as possible due to security concerns. The plan envisions eventually removing those functions from all overseas posts. We provide a more detailed discussion on State’s efforts and challenges to provide support remotely in a separate report. The Office of Rightsizing is also involved with State’s efforts to increase efficiencies in overseas administrative functions by identifying and eliminating duplicative management support functions among agencies, as well as overlapping or redundant program functions. Although increasing efficiencies by streamlining functions applies to all overseas agencies, State has been working primarily with USAID to reduce the duplication of overseas support services. In 2004, State, along with USAID, launched pilot programs aimed at consolidating support functions such as motor pool, warehousing, residential maintenance, and leasing services at posts in Jakarta, Indonesia; Phnom Penh, Cambodia; Cairo, Egypt; and Dar es Salaam, Tanzania. The focus of the pilot programs was to determine how State and USAID could best collaborate to realize significant savings and improved service quality. However, only one of the pilot posts succeeded in consolidating all four support functions. According to State, the pilots have established that significant operational efficiencies and some cost savings can be realized through the consolidation of duplicative services. 
Since the pilots at the four posts, State and USAID have identified additional consolidation opportunities. We will provide a more detailed discussion and evaluation of State and USAID’s consolidation efforts in a report to be issued later this year. One of the responsibilities of the Office of Rightsizing is to coordinate and manage interagency rightsizing initiatives. However, during our discussions with non-State agencies in late 2005 and early 2006, a number of agencies with an overseas presence told us that they had limited interaction with State’s Office of Rightsizing on matters aside from NSDD-38 requests. Furthermore, some non-State agency officials told us that they were not aware of the rightsizing mandate or the guidance provided to posts by the Office of Rightsizing. According to the Director of the Office of Rightsizing, his office has made an effort over the last couple of months to visit or talk with many of the agencies with an overseas presence. However, we found that, in some cases, the pertinent offices were not reached. For example, officials in the Department of Homeland Security’s U.S. Citizenship and Immigration Services told us that State entities tend to coordinate through one office and do not reach the various entities within the department. Both U.S. Citizenship and Immigration Services and Customs and Border Protection officials indicated that they would like to be included in any discussions that State, in particular the Office of Rightsizing, has with the Department of Homeland Security, and suggested that State designate a focal point within each Department of Homeland Security office with an overseas presence. The Office of Rightsizing indicated that the Department of Homeland Security requested that it coordinate through the department’s Office of International Affairs. It is important that agency components receive the necessary information to ensure that rightsizing efforts are understood. 
The Department of Homeland Security and the Office of Rightsizing share responsibility for developing a mechanism to accomplish this. Furthermore, during our discussions with agency officials in late 2005 and early 2006, non-State agencies indicated that they would like more transparency in the rightsizing review process. For example, some agencies told us that they would like to know the outcomes of the reviews at each post and to know ahead of time when posts will be conducting reviews. Moreover, some agency officials stated that they are looking for an overall U.S. government strategy or vision from the Office of Rightsizing so that, as they move ahead on their own rightsizing planning and efforts, they will be in line with what the Office of Rightsizing is planning. Finally, some non-State agency officials indicated that, in order to be able to contribute to the process, they would like to see more clearly stated standards and unified processes that relate to rightsizing at posts. For example, officials said that they would like to understand how posts determine the number of staffing positions available at any given time and would like to ensure that the requests for information coming from Washington, D.C., to posts are more consistent. In order to address rightsizing in the context of non-State agencies’ agendas and priorities, the Office of Rightsizing and the Office of Management and Budget co-hosted an interagency summit in early March 2006. According to the Office of Rightsizing, participants included representatives from a number of foreign affairs and non-foreign-affairs agencies; discussions focused on key initiatives coordinated and managed by State, such as consolidation of duplicative functions, rightsizing reviews, the NSDD-38 process, and State regionalization efforts. 
While some officials from State’s regional bureaus feel that having an interagency conference is a good start at getting all agencies involved with rightsizing, they believe that additional interagency dialogue is needed. In addition, some non-State agency officials told us that the interagency summit did not provide them with a sense of a strategy for how they should move forward with their own rightsizing plans to ensure that they do not conflict with State’s rightsizing efforts. Moreover, in the course of our structured interviews, 7 out of 20 management officers identified the need for interagency involvement and agency “buy-in” at the Washington, D.C., and post levels to ensure that rightsizing can move ahead at each post. For example, one management officer with whom we spoke said that he would like to see a firm, written commitment from agency headquarters other than State’s that consolidation of services is in the best interest of every agency and is expected of posts overseas. One post noted in its rightsizing report that the success of posts’ rightsizing studies is closely linked to interagency efforts to agree on initiatives to maximize efficiency at posts. In addition, another management officer with whom we spoke said that it would be helpful to have interagency guidance on what to do when eliminating duplicative services results in overall savings to the U.S. government but increased costs to an agency at the post level. Some non-State agency officials said that it would be beneficial to have more frequent interagency meetings or summits, rather than just one a year. For example, a USAID official said that holding an interagency summit before each rightsizing review cycle starts—one in the fall and another in the spring—could help inform non-State agencies of rightsizing changes and activities at posts that affect their agencies overseas. 
The Director of the Office of Rightsizing told us that, while there are no immediate plans to hold more frequent interagency summits involving all agencies with an overseas presence, he plans to continue holding a rightsizing summit annually. The Office of Rightsizing also reported that it plans to implement a Washington, D.C.-based forum whereby officials from foreign affairs agencies, such as the Department of Commerce, USAID, and the Department of Agriculture, can meet regularly to share information on programs and ensure greater consistency in the information coming from headquarters. The Director of the Office of Rightsizing told us that the office could be doing more with non-State agencies to address rightsizing issues at posts, particularly on the issue of consolidation of functions, but would first like to address issues raised as part of the joint State–USAID shared services efforts. Post rightsizing reviews are a key element of State’s rightsizing efforts. These reviews are designed to link post staffing to the mission’s goals, eliminate unnecessary duplication, and encourage shared services between agencies at posts. Our analysis of the first round of reviews showed that guidance to posts was limited and that there was no systematic process for how posts structured their reviews, though State improved its guidance for the second round of reviews. In reviewing the first and second rightsizing cycles, the Office of Rightsizing reported over $150 million in cost savings or avoidance based on the results of its analysis of the reviews. Posts used a variety of methods to conduct their rightsizing reviews. Some management officers with whom we spoke identified various challenges in conducting their fall 2005 reviews and ensuring that their post is rightsized. Additionally, the Office of Rightsizing did not consider the need for posts to conduct a cost analysis as part of their reviews. 
It is unclear how the rightsizing review decisions, such as elimination of duplicative functions, will be implemented at each post, according to officials at post and in State’s regional bureaus. In October 2004, the Office of Rightsizing began instructing overseas missions scheduled to receive a new embassy compound to perform rightsizing reviews. The reviews are intended to eliminate or justify any duplicative or parallel functions at posts and consider the possibility for reducing U.S. government employees at posts through such means as remote services, more utilization of locally employed staff, and outsourcing. Between late 2004 and summer 2005, about 35 posts participated in the first cycle of reviews, and the office conducted formal analyses of the posts’ reviews in late 2004 and early 2005. The Office of Rightsizing reported over $50 million in costs saved or avoided to the U.S. government based on its analysis of the first cycle of reviews. Based on the analysis that the Office of Rightsizing conducted of post reviews representing eight new embassy compounds, the average reduction in desk positions for each project was 18, which resulted in 145 desk reductions overall. The office identified many of the removed desk positions in cooperation with posts and regional bureaus. According to the Office of Rightsizing, these desks represent significant partially or fully avoided costs. Of the 145 reduced positions, 50 were U.S. direct hires. Assuming an average saving of $400,000 per year for each position, the office estimated that these 50 positions represent as much as $20 million in potential costs avoided. Furthermore, the office stated that the desk positions removed represented approximately an additional $20 million in savings to the Bureau of Overseas Buildings Operations in capital security construction costs, as well as approximately $18 million in savings due to not needing to build separate annexes in three cases. 
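The first-cycle estimate above combines three reported components: eliminated U.S. direct-hire positions, capital security construction savings, and annexes not built. A minimal sketch of the arithmetic, using only the figures the Office of Rightsizing reported (the $400,000 average annual cost per direct-hire position is the office's working assumption, not our own estimate):

```python
# Illustrative check of the first-cycle (spring 2005) cost-avoidance estimate.
# All figures are those reported by the Office of Rightsizing, as cited above.

US_DIRECT_HIRE_REDUCTIONS = 50     # of the 145 total desk reductions
COST_PER_DIRECT_HIRE = 400_000     # office's assumed annual cost per position
CONSTRUCTION_SAVINGS = 20_000_000  # Bureau of Overseas Buildings Operations capital security costs
ANNEX_SAVINGS = 18_000_000         # three separate annexes not built

direct_hire_avoidance = US_DIRECT_HIRE_REDUCTIONS * COST_PER_DIRECT_HIRE
total_avoidance = direct_hire_avoidance + CONSTRUCTION_SAVINGS + ANNEX_SAVINGS

print(f"Direct-hire cost avoidance: ${direct_hire_avoidance:,}")  # $20,000,000
print(f"Total costs saved or avoided: ${total_avoidance:,}")      # $58,000,000
```

The total is consistent with the "over $50 million" figure reported for the first cycle, though whether any of it is realized depends on the offsetting-cost and implementation caveats discussed in the surrounding text.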
The actual cost avoidance achieved will depend upon whether offsetting costs can be avoided and whether recommended staff reductions are implemented. However, based on our analysis of 20 out of about 35 rightsizing reviews that were part of the spring 2005 review cycle, we observed that there was not a systematic approach for how the posts structured their reports or how the Office of Rightsizing evaluated them. For example, the information presented within the reports varied from post to post, and the rightsizing elements that the posts evaluated and reported were not consistent. Some posts provided narratives discussing various rightsizing elements, such as outsourcing and post security, while other posts did not. Furthermore, we found that none of the posts showed that they had conducted a cost analysis as part of the post’s rightsizing efforts. In addition, we found that the Office of Rightsizing did not have a systematic process to quantify costs saved or avoided as a result of post staffing reductions stemming from reviews in the spring 2005 cycle. The Director of the Office of Rightsizing agreed that the office needs to implement a systematic process of collecting data and determining cost savings and cost avoidance for future rightsizing cycles, and has asked the rightsizing analysts in the office to document the positions that are eliminated or costs saved due to posts taking rightsizing measures into consideration. Finally, we also found that, although the goal of the President’s Management Agenda and the Office of Rightsizing is to limit the overseas presence to the minimum level necessary to accomplish the U.S. government’s mission, overseas staffing is increasing. Our analysis of the 20 reviews from the spring 2005 cycle revealed that net staffing numbers will increase for 15 posts. 
Increased levels in staffing abroad can be attributed to high-priority national security interests, such as the global war on terror, anti-narcotics efforts, and HIV/AIDS projects, which have implications for U.S. staffing and space at posts abroad, according to the Office of Rightsizing as well as post management officers. The Director of the Office of Rightsizing agreed that the first round of reviews was not conducted systematically, and said that the process and format have evolved since the initial guidance was provided to posts for the spring 2005 cycle. In particular, based on feedback from posts that participated in the spring 2005 review cycle, the Office of Rightsizing made the guidance more systematic for the fall 2005 cycle. All management officers whom we interviewed at the 20 posts that had conducted a review as part of the second cycle in fall 2005 stated that the guidance was either very useful or moderately useful, and several commented that the guidance was clear, succinct, and easy to follow. Table 3 below specifies the elements that the fall 2005 cycle posts were asked to address when completing their reviews. Seventeen of the 20 management officials with whom we spoke who conducted a rightsizing review in fall 2005 said that the review helped them better understand how post personnel meet mission objectives. When asked whether they had additional comments, several management officers stated that they found the review at their post to be a useful exercise. For example, a management officer posted in Eurasia stated that the review was an interesting and useful process, which helped the post focus on parts of the Mission Performance Plan that it would otherwise not have concentrated its efforts on. The Office of Rightsizing reported over $100 million in costs saved or avoided to the U.S. government based on its analysis of the fall 2005 cycle of reviews. 
The estimate was based on an analysis that the Office of Rightsizing conducted of post reviews representing 21 missions. According to the office, its rightsizing efforts for the fall 2005 cycle will result in a potential reduction of 683 total desk spaces in new embassy compounds, of which 170 are U.S. direct hires. The Office of Rightsizing estimated that each eliminated U.S. direct-hire position would result in a cost avoidance of about $400,000. The actual cost avoidance achieved will depend upon whether offsetting costs can be avoided and whether recommended staff reductions are implemented. In addition, the Office of Rightsizing reported that the rightsizing actions for the fall 2005 cycle have resulted in approximately $90 million in savings to the Bureau of Overseas Buildings Operations in capital security construction costs, which includes costs saved by not needing to build four annexes. Although we have not been able to independently assess the Office of Rightsizing’s estimates, it has presented evidence that some major cost avoidance and cost savings have occurred. Figure 5 illustrates the posts from the fall 2005 review cycle where we interviewed officials as part of our analysis. Posts used a variety of means to conduct their reviews. At 19 of the 20 fall 2005 review cycle posts where we interviewed officials, the approach incorporated the participation of the post management officer, the Deputy Chief of Mission, or both. The Office of Rightsizing suggested that posts, in conducting rightsizing reviews, use the ICASS Council, working groups, or any ad hoc arrangement as a vehicle for discussion and formulation of the report and corresponding data. In addition, the Office of Rightsizing instructed posts to include all State and non-State agencies, constituent posts, and embassy offices in their rightsizing analyses. However, posts took diverse approaches to carrying out their reviews. 
Some posts conducted their reviews using rightsizing committees, and others directed their review to existing goal-oriented discussion groups, such as those that created the post’s Mission Performance Plan. The Office of Rightsizing also offered posts the opportunity to participate in digital video conferences to answer any questions that they had about the review. However, officials at 13 of the 20 posts mentioned above stated that they did not participate in a video conference with the office. Management officers said that their posts did not participate because, among other reasons, they did not find it necessary; other posts did not have the technological capabilities to participate. Moreover, several posts did not participate in a digital video conference because officials did not think that the Office of Rightsizing required it. Nonetheless, the Director of the Office of Rightsizing stated that he found that those posts that took advantage of this option had more success with their rightsizing reviews; thus, starting with the spring 2006 cycle, the office will require that every post participate in a conference. He also stated that he hopes other non-State agencies will participate in video conferences as posts conduct their rightsizing reviews. Moreover, State, as well as the Office of Management and Budget, expects all agencies at each post to understand that rightsizing is a government-wide process and not just a State-oriented one. In the course of our structured interviews, 7 of 20 management officers identified the need for interagency involvement, including agency “buy-in,” to ensure that rightsizing can move ahead at each post. For example, a management officer in Europe stated that it would be very helpful if there were a one-page summary that non-State agency officials at each post received from their headquarters in Washington, D.C., describing what they need to do as part of the rightsizing review. 
However, we found that the Office of Rightsizing provided posts with very little guidance on interagency participation. The Director of the Office of Rightsizing stated that he also encourages posts to utilize the resources and knowledge within the office and recommends that posts, as they conduct the review, share sections of it with the office for feedback. The posts participating in the fall 2005 cycle generally took this approach, as 13 of 20 posts reported that they shared sections of their report with the office during the review process. The Office of Rightsizing also indicated that this approach may lead to more expeditious approval of reviews and that it will therefore strongly recommend that posts participating in the spring 2006 rightsizing cycle share sections of their report prior to final submission. Additionally, 15 management officers we interviewed stated that sharing best practices with other posts through online forums, or receiving a visit from an expert to help guide the post through the rightsizing review process, would be very useful for enhancing the review process. A number of management officers identified various challenges in conducting the reviews at their posts, including resistance by non-State agencies at posts to addressing rightsizing measures. Another challenge mentioned was a lack of direction and opportunities for regionalization and outsourcing of services. In addition, overlapping data requests from agencies’ headquarters, as well as redundant personnel databases, complicated the review process. Several posts that conducted a fall 2005 rightsizing review stated that they encountered resistance from non-State agencies when trying to obtain interagency involvement during the process. Specifically, four post management officers stated that other agencies were not receptive to the request to conduct a rightsizing review. 
According to a management officer posted in Europe, non-State agencies at the post did not want to share information on future staffing numbers for the review. The management officer added that there was a lack of recognition by agencies that rightsizing was not just a State initiative but a government-wide initiative. Moreover, some management officers explained that they had difficulties in getting agencies to buy into the rightsizing review process and faced interagency resistance regarding the consolidation of post services. Consolidation of services proved to be a common challenge at posts, as 14 of the 20 management officers we interviewed identified duplication of post services or programs, such as motor pool and cashiering, as a result of the rightsizing review. Although these posts have identified duplicative services or programs, 10 of 20 management officers stated that their posts have not taken action on consolidating the services and programs. According to management officers, common reasons why some posts have not yet consolidated these duplicative functions are that they are waiting to merge their functions once they move into their new embassy compound or that the agencies at the post could not come to an agreement about consolidation. For example, a management officer posted in Africa stated that USAID headquarters provided its staff at the post with conflicting instructions about retaining duplicative administrative support services, which countered the ongoing efforts to eliminate duplicative services at the post. Several posts that participated in the spring and fall 2005 cycles of reviews stated that they lacked direction and opportunities for the regionalization and outsourcing of services. The guidance for the rightsizing review asked posts to consider regionalization options. 
However, while 18 of 20 posts we interviewed stated that they rely on regional support services to meet posts’ needs, a few management officers indicated that they found it difficult to consider all regionalization options without a basic understanding of what regional services are available. For example, management officers posted in Asia and Africa stated that they lack information on what types of regional support could be provided remotely and how to access that support. Another management officer in Eurasia stated that posts should be provided a baseline listing of services so that post officials have a good sense of which services are available regionally. Though posts were instructed to assess outsourcing possibilities in their reviews, several posts that participated in the spring 2005 cycle of rightsizing reviews reported that outsourcing of services is not a good alternative due to the lack of choice, quality, and sophistication in the marketplace. A management officer posted in Asia, whose post participated in the fall 2005 cycle of reviews, stated that outsourcing is a concept that the post is not accustomed to using and that the rightsizing review led the post to examine which of its functions could be outsourced. To ascertain currently outsourced functions at posts, State’s Office of Global Support Services and Innovation administered an outsourcing survey to which 119 posts responded. According to the survey results, the services most likely to be outsourced were copier maintenance as well as packing and shipping, while those least likely to be outsourced were procurement, property management, and phone billing. Officials at the Office of Global Support Services and Innovation stated that the results of the survey will serve as the Office of Rightsizing’s baseline for outsourcing. 
All of the management officers we interviewed stated that, within the last 2 years, they have received other requests or reviews from agency headquarters seeking information similar to that requested for the rightsizing review. Ten of 20 management officers we interviewed identified the Mission Performance Plan as another headquarters request for information similar to that of the rightsizing review, and several of these posts indicated that there was a high degree of overlap between these two data requests. Moreover, more than half of the management officers indicated that there was a high or moderate degree of overlap between the rightsizing review and other requests from the Bureau of Overseas Buildings Operations, such as the Capital Security Cost Sharing Program and the Long Range Overseas Buildings Plan data requests. As explained by several posts, it was difficult to complete and keep track of all of these overlapping data requests given the limited resources at each post. For example, a management officer at a post in Africa stated that it took approximately 3 months to complete the Capital Security Cost Sharing request and approximately 2 months to complete the Mission Performance Plan, with both requests requiring interagency staff collaboration. The officer added that the post did not have time to focus on all of the requests tasked to the post. Another management officer in Africa stated that the post would have liked to have one database that was responsive to the needs of all the requests from headquarters. In addition, 5 of the 20 posts we interviewed stated that it would be beneficial to streamline the rightsizing review by combining it with other data and information requests; moreover, several management officers stated that the rightsizing review should draw from information that currently exists in databases centrally located in Washington, D.C. 
The Office of Rightsizing has recognized the need for a single database and told us that State has been working on developing an integrated Post Personnel database. According to State officials, this database is expected to populate all other databases currently maintained by overseas posts to ensure that all databases contain the same information. Furthermore, Office of Rightsizing officials stated that—once there is one authoritative database for all staffing data—they will no longer rely on multiple databases and, therefore, will not have to spend as much time verifying overseas staffing numbers projected in rightsizing reviews. The Office of Rightsizing did not consider the need for posts to conduct cost analyses as an essential supplement to the spring 2005 and fall 2005 reviews. The Director of the Office of Rightsizing stated that he did not want the post reviews to become a cost-cutting exercise, but rather to focus more on identifying the resources needed to meet the posts’ missions and goals and to justify current and projected staffing compositions. Moreover, he stated that actual cost savings at the post level would be hard to determine and that sometimes rightsizing requires an increase in staff. The guidance for the spring 2005 and fall 2005 rightsizing reviews did not require posts to evaluate costs or perform cost analyses for the review process. None of the 20 post reports we reviewed for the spring 2005 cycle presented the results of a cost analysis associated with rightsizing efforts at the post; however, 11 of 20 posts for the fall 2005 cycle told us that they conducted some or limited cost analyses for various post staffing scenarios, such as the outsourcing of services and the substitution of locally engaged staff for U.S. direct-hire positions. In addition, one management officer stated that the post conducted an analysis to determine which service provider at the post would be more cost-effective. 
However, a management officer stated that it was difficult to perform a comprehensive analysis of the cost-effectiveness of service providers because the post did not have comparable data for each provider. Similarly, another management officer in Asia stated that the post’s cost analysis was not comprehensive because post staff did not have the necessary expertise to conduct such an analysis. In addition, a management officer added that the post would need more guidance if it were to conduct a formal cost analysis with accurate cost data. With the upcoming cycles of reviews, the Office of Rightsizing is increasingly emphasizing the need to consider costs associated with rightsizing. For example, the Director of the Office of Rightsizing directed that posts undertake additional analyses to determine the overall cost impact on the U.S. government and all customer agencies before actual consolidation of shared administrative support services can occur. This analysis should also assess whether the formation of a single service provider will, over time, be a more viable and effective option for the U.S. government. The Office of Rightsizing has not yet provided formal guidance to posts on how to conduct an analysis to determine the most cost-effective service provider, but it is continually developing its guidance for posts and, in particular, has developed guidance for a rightsizing competitive sourcing business case analysis, which is being incorporated into the spring 2006 cycle of reviews. Subsequently, all posts will be required to complete this cost module as part of their rightsizing reviews, according to the Office of Rightsizing. Some management officers we interviewed, as well as officials in State’s regional bureaus, were unclear about the outcomes of the reviews. Although the reviews are intended to eliminate or justify any duplicative or parallel functions at posts and consider the possibility of reducing U.S. 
government employees at each post, some executive directors in State’s regional bureaus pointed out that there is no implementation plan with timelines to track rightsizing-related changes that have been identified. Most executive directors in regional bureaus believe that it should be the Office of Rightsizing’s responsibility to follow up with posts to ensure that resources have been consolidated in line with rightsized staffing levels, while one executive director in a regional bureau believes that it should be the responsibility of both the Office of Rightsizing and the bureau itself. The Office of Rightsizing stated that it has informed posts what their staffing configurations should be as a result of their reviews and that it is the posts’ responsibility to carry out corresponding staffing changes. Furthermore, the office has asked posts to include the distribution of services and schedules of when consolidation will occur within their Mission Performance Plans. The office also expects posts to establish reduction-in-force plans, which, according to the Office of Rightsizing, should consider attrition, retirement, and vacant positions. Moreover, Office of Rightsizing officials stated that they are planning to send a yearly cable to posts that have already completed rightsizing reviews to remind them of the need to meet agreed-upon staffing levels and to ensure that rightsizing action has been taken before the post moves into a new embassy compound. In early May 2006, the Director of the Office of Rightsizing told us that, in lieu of sending a cable, his office will be tasking posts that conducted reviews in the spring and fall 2005 cycles to develop a rightsizing action plan leading up to the completion of their new embassy compound. However, as of May 9, 2006, the office had not sent an action plan tasking to posts. 
As the Office of Rightsizing expects posts to develop their own rightsizing implementation plan, some posts may not adhere to the staffing figures agreed upon within their reviews. Specifically, some management officers we interviewed stated that their posts are waiting to move into a new embassy compound before taking any action to configure their post staffing numbers. For example, a management officer posted in Asia stated that the post’s plan to configure the staffing numbers as reflected in the rightsizing review is not connected to any immediate time frame, but rather will happen when the post moves into the new embassy compound. Another management officer stated that, because the construction of the post’s new embassy compound has been postponed, and because agencies at the post have not agreed to the staffing changes made by the Office of Rightsizing, no immediate post staffing changes are being contemplated. In addition to a lack of an implementation plan for those posts receiving a new embassy compound, it is also unclear what approach or incentives the Office of Rightsizing will use to enforce implementation of rightsizing measures for those posts not scheduled to receive a new embassy compound. According to an official in the Bureau of Overseas Buildings Operations, it is important for posts to have an implementation plan of how they will reconfigure their staffing, particularly for those posts that will be moving into a new embassy compound, because the new embassy or consulate will be built based on the staffing numbers approved by the Office of Rightsizing for that post. Progress has been made in implementing the President's Management Agenda initiative to rightsize the U.S. government’s overseas presence at embassies and consulates. Agencies are generally adjusting their presence based on mission, security, and cost factors. 
State is seeking ways to reduce support staff overseas, and overseas posts are conducting rightsizing reviews required by legislation. Moreover, after a slow start, State’s Office of Rightsizing is beginning to achieve momentum in coordinating government-wide rightsizing efforts. However, more needs to be done. Of foremost importance is the need to develop accurate staffing data with which to measure staffing trends and the effects of rightsizing activities. We recognize that efforts are currently under way to develop accurate data; however, because of the importance of having accurate data on overseas staffing and the length of time it has taken to develop these data, management oversight may be needed to ensure completion of this task. State’s Office of Rightsizing also needs to aggressively reach out to agencies at the headquarters level, and to overseas posts, to ensure that the positive initiatives under way are implemented effectively. To ensure that the U.S. government’s overseas presence under chief of mission authority is accurately accounted for, that the U.S. government’s rightsizing goals are being coordinated, and that posts can maximize savings and gain efficiencies through rightsizing, we recommend that the Secretary of State take the following three actions: Provide oversight to ensure the timely development and use of a single database that accurately accounts for U.S. overseas personnel staffing numbers and has accountability measures to encourage posts and agencies to keep the database accurate and up to date; Increase outreach activities with non-State agencies so that all relevant agencies with an overseas presence can discuss and share information on rightsizing initiatives on a regular and continuous basis; and Require that posts develop action plans to transition to and meet the agreed-upon outcomes of their rightsizing reviews. 
This could include developing milestones for posts reaching agreement on streamlining and eliminating duplicative functions. We provided a draft of this report to State for comment. State’s comments, along with our responses to them, can be found in appendix V. State indicated that it has either recently implemented or is taking steps to implement all of our recommendations. We received technical comments from State; the Departments of Homeland Security, the Treasury, Defense, and Justice; and USAID, which we have incorporated throughout the report, where appropriate. In addition, the Department of Justice stated that it endorses our recommendation that State continue to expand its outreach to agencies and departments with an overseas presence to enhance discussion and information sharing on rightsizing initiatives. Furthermore, the Department of the Treasury stated that it would be helpful if agencies with personnel at posts developing rightsizing action plans had the opportunity for their personnel to participate in the rightsizing reviews and the development of the action plans. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to other interested members of Congress, the Library of Congress, and the Secretary of State. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other GAO contacts and staff acknowledgments are listed in appendix VI. To ascertain the size of the U.S. 
government’s overseas presence, we spoke with officials in the State Department’s (State) Office of Rightsizing the U.S. Government Overseas Presence (Office of Rightsizing) and with the Executive Director of State’s Bureau of Human Resources; we also discussed the limitations that State faces in portraying an accurate number for the overseas presence. In addition, we spoke with an official in the Office of Management and Budget to determine the methodology used to report the size and cost of the overseas presence. To determine the U.S. government’s efforts to rightsize its overseas presence, we spoke with officials at a number of agencies in Washington, D.C., that have a presence at overseas posts. We spoke with officials from the Departments of Agriculture, Commerce, the Treasury, Defense, Homeland Security, and Justice, as well as officials from the General Services Administration and the U.S. Agency for International Development. In addition, we reviewed staffing documents from numerous agencies, reports by State’s Office of Inspector General, and rightsizing documents from State’s Office of Rightsizing. We did not conduct a comprehensive review of each agency’s rightsizing efforts. We spoke with officials in the Office of Rightsizing about the rightsizing process and reviewed rightsizing guidance and related documentation. We reviewed and analyzed nine Mission Performance Plans for fiscal year 2007, which we were able to obtain from State, to determine whether rightsizing considerations were reported in the plans. We also spoke with officials in each State regional bureau—Western Hemisphere Affairs, European and Eurasian Affairs, African Affairs, East Asian and Pacific Affairs, Near Eastern Affairs, and South and Central Asian Affairs—to gauge their involvement with post rightsizing reviews and their interaction with the Office of Rightsizing. 
In addition, we spoke with officials in State’s Office of Global Support Services and Innovation about State’s initiatives on regionalization and shared services. To determine costs saved or avoided as a result of rightsizing exercises, we reviewed cost data from State’s Office of Rightsizing and the Bureau of Overseas Buildings Operations. To assess the reliability of State’s data, we interviewed officials in the Office of Rightsizing and the Bureau of Overseas Buildings Operations to ascertain how the data were captured and analyzed and whether there were any limitations to the data. We determined the data to be sufficiently reliable for the purposes of our review. To determine whether there was a systematic process for reporting information in the first cycle of reviews, we reviewed and analyzed 20 of the approximately 35 rightsizing reviews that were conducted by posts from late 2004 through summer 2005. The Office of Rightsizing provided us with over 20 reviews for our analysis; however, we analyzed only those reviews for which we had received both the post rightsizing review and the corresponding analysis of the review conducted by the Office of Rightsizing. Between February 2006 and March 2006, we administered 20 structured interviews regarding post rightsizing review experiences. The interviews were conducted by telephone, primarily with management counselors or officers at embassies; in one case we spoke with the Deputy Chief of Mission at the post. The interviews involved 20 of the 22 posts that were part of the fall 2005 cycle and that had completed their rightsizing reviews by the time we talked to the post. The posts were located in Asuncion, Baku, Bandar Seri Begawan, Bucharest, Bujumbura, Colombo, Harare, Jakarta, Islamabad, Kiev, Warsaw, Maputo, N’djamena, Pretoria, Reykjavik, Rome, Santo Domingo, Moscow, Taipei, and Tunis. 
We did not conduct interviews with two posts—Monrovia and Ankara—because these posts did not follow through with our request for interviews. The structured interview contained open- and closed-ended questions about guidance, timing, the review process, rightsizing considerations, headquarters’ involvement and feedback, and the impact of the review on the post. We developed the interview questions based on our review of rightsizing documentation and discussions with post officials during our fieldwork in Mexico City and Valletta. We provided an early version of the questions to the Office of Rightsizing and the Office of Global Support Services and Innovation for their review and comment, and we also pretested the interview with three current management officers to ensure that the questions were clear and could be answered. We modified the interview questions on the basis of the pretest results and an internal expert’s technical review. We provided the management officers and the Deputy Chief of Mission with the interview questions in advance to allow them time to gather any data or information necessary for the interview. We initiated follow-up discussions with 13 posts by telephone and subsequently sent posts follow-up questions by e-mail if we were not able to reach them by telephone. The responses to the structured interviews are not intended to be representative of all posts. We did not talk with management officers or look at rightsizing reviews for those posts that were part of the spring 2006 review cycle. We conducted fieldwork at the embassies in Mexico City, Mexico, and Valletta, Malta, and the consulate in Frankfurt, Germany, to gain a better understanding of the rightsizing process. We chose these posts because both Mexico City and Valletta had been asked by the Office of Rightsizing to conduct a rightsizing review as part of the spring 2005 cycle. 
We chose to visit Frankfurt because the consulate had conducted an informal rightsizing exercise in response to a report from State’s Office of Inspector General and because it is a regional hub in Europe. At each post we met with State officials as well as other agency officials involved in the rightsizing process to discuss their approach and outcomes of the review. In addition, we met with the Ambassadors in Mexico City and Valletta, as well as the Consul General in Frankfurt to understand their views and involvement in the rightsizing process. We conducted our work from May 2005 through May 2006, in accordance with generally accepted government auditing standards. GAO, Overseas Presence: Observations on a Rightsizing Framework, GAO-02-659T (Washington, D.C.: May 1, 2002). GAO, Overseas Presence: Framework for Assessing Embassy Staff Levels Can Support Rightsizing Initiatives, GAO-02-780 (Washington, D.C.: July 26, 2002). GAO, Overseas Presence: Rightsizing Framework Can Be Applied at U.S. Diplomatic Posts in Developing Countries, GAO-03-396 (Washington, D.C.: Apr. 7, 2003). GAO, Embassy Construction: Process for Determining Staffing Requirements Needs Improvement, GAO-03-411 (Washington, D.C.: Apr. 7, 2003). GAO, Overseas Presence: Systematic Processes Needed to Rightsize Posts and Guide Embassy Construction, GAO-03-582T (Washington, D.C.: Apr. 7, 2003). GAO, Overseas Presence: Rightsizing Is Key to Considering Relocation of Regional Staff to New Frankfurt Center, GAO-03-1061 (Washington, D.C.: Sept. 2, 2003). GAO, Embassy Management: Actions Are Needed to Increase Efficiency and Improve Delivery of Administrative Support Services, GAO-04-511 (Washington, D.C.: Sept. 7, 2004). GAO, Embassy Construction: Proposed Cost-Sharing Program Could Speed Construction and Reduce Staff Levels, but Some Agencies Have Concerns, GAO-05-32 (Washington, D.C.: Nov. 15, 2004). 
GAO, Overseas Presence: Cost Analyses and Performance Measures Are Needed to Demonstrate the Full Potential of Providing Embassy Support Remotely, GAO-06-479 (Washington, D.C.: May 2, 2006).

This appendix lists the questions pertaining to mission, security, and cost that we developed in 2002 to help support rightsizing initiatives for existing facilities overseas.

Physical/Technical Security of Facilities and Employees
What is the threat and security profile of the embassy?
Has the ability to protect personnel been a factor in determining staffing levels at the embassy?
To what extent are existing office buildings secure?
Is existing space being optimally utilized?
Have all practical options for improving the security of facilities been considered?
Do issues involving facility security put the staff at an unacceptable level of risk or limit mission accomplishment?
What is the capacity level of the host country police, military, and intelligence services?
Do security vulnerabilities suggest the need to reduce or relocate staff?
Do health conditions in the host country pose personal security concerns that limit the number of employees that should be assigned to the post?

What are the staffing levels and mission of each agency?
How do agencies determine embassy staffing levels?
Is there an adequate justification for the number of employees at each agency compared with the agency’s mission?
Is there adequate justification for the number of direct-hire personnel devoted to support and administrative operations?
What are the priorities of the embassy?
Does each agency’s mission reinforce embassy priorities?
To what extent are mission priorities not being sufficiently addressed due to staffing limitations or other impediments?
To what extent are workload requirements validated and prioritized, and is the embassy able to balance them with core functions?
Do the activities of any agencies overlap?
Given embassy priorities and the staffing profile, are increases in the number of existing staff or additional agency representation needed?
To what extent is it necessary for each agency to maintain its current presence in-country, given the scope of its responsibilities and its mission?
Could an agency’s mission be pursued in other ways?
Does an agency have regional responsibilities or is its mission entirely focused on the host country?

What is the embassy’s total annual operating cost?
What are the operating costs for each agency at the embassy?
To what extent are agencies considering the full cost of operations in making staffing decisions?
To what extent are costs commensurate with overall embassy strategic importance, with agency programs, and with specific products and services?
What are the security, mission, and cost implications of relocating certain functions to the United States, regional centers, or other locations, such as commercial space or host-country counterpart agencies?
To what extent could agency program and/or routine administrative functions (procurement, logistics, and financial management functions) be handled from a regional center or other locations?
Do new technologies and transportation links offer greater opportunities for operational support from other locations?
Do the host country and regional environments suggest there are options for doing business differently, that is, are there adequate transportation and communications links and a vibrant private sector?
To what extent is it practical to purchase embassy services from the private sector?
Does the ratio of support staff to program staff at the embassy suggest opportunities for streamlining?
Can functions be reengineered to provide greater efficiencies and reduce requirements for personnel?
Are there best practices of other bilateral embassies or private corporations that could be adapted by the U.S. embassy?
To what extent are there U.S. 
or host country legal, policy, or procedural obstacles that may impact the feasibility of rightsizing options? Table 4 illustrates the 5-year rightsizing schedule, by fiscal year, that the Office of Rightsizing has developed. The schedule also depicts those posts that are proposed to receive a new embassy compound (NEC) and the fiscal year that the facilities are scheduled to be built. The following are our comments on State’s letter dated June 9, 2006. 1. We modified our text to show that, although we have not been able to independently assess the Office of Rightsizing’s estimates, it has presented evidence to show that some major cost avoidance and cost savings have occurred. 2. We recognize that State has a standard methodology by which it performs cost analyses using the International Cooperative Administrative Support Services software. However, when we talked with management officers at posts that had conducted a rightsizing review, we were informed that these posts did not have comparable cost data for each service provider. In addition, we were informed that the posts did not have the necessary tools to make informed decisions about how to conduct analysis to determine the most cost effective service provider. 3. We provided the draft report to State on May 18, 2006. About two weeks later, State instructed posts to develop implementation action plans. We believe that the action that the Office of Rightsizing has taken largely addresses our recommendation. However, until the Office of Rightsizing has received all implementation action plans with the posts’ milestones, due on July 18, 2006, the office will not know what additional action might still be needed to ensure that posts meet the agreed-upon outcomes of their rightsizing reviews. 4. We understand that if a position is eliminated at a post it is not counted as part of Capital Security Cost Sharing. 
Our statement was simply meant to illustrate that eliminated or vacant positions could be reflected in databases used to count overseas staffing numbers.
5. Our statement reflects non-State agency views. We have amended the draft by attributing the statement to non-State agency officials. In addition, our statement reflects information we obtained from a February 2006 State cable to all posts about staffing data and position charges under Capital Security Cost Sharing.
6. We have modified our text to illustrate the varying estimates of the size of the U.S. overseas presence. We have received numerous conflicting estimates of the number of U.S. government officials overseas. One source estimated that there are approximately 66,000 U.S. government personnel under chief of mission authority, while another indicated that there are approximately 69,000. We understand that some of the numbers may come from different estimates and data sources. State's discussion of staffing data illustrates the difficulty of obtaining an accurate count of overseas personnel. We recognize that there are vacant positions and that the total number of positions is higher than the number of filled positions. We also note in our report that State is in the process of eliminating vacant positions. It is important that State continue to update its staffing database to ensure that a more accurate accounting of U.S. government personnel overseas is available.
7. We acknowledge that State has sent several messages since 2004 instructing posts that Post Personnel is the official database for documenting all U.S. government staffing overseas. However, in February 2006, State reported that not all posts are using Post Personnel as their main human resources system. In addition, we were told that the guidance provided to posts did not include accountability mechanisms for ensuring that the staffing information is updated and complete.
8. It is important that all components of each agency receive information from the Office of Rightsizing that pertains to rightsizing review efforts and initiatives. During the course of our work at the Department of Homeland Security, it became clear that certain components within the department had not received information on rightsizing. We understand that the Department of Homeland Security has a central focal point that the Office of Rightsizing works with. The Department of Homeland Security and the Office of Rightsizing share responsibility for ensuring that agency components receive the information they need to understand rightsizing efforts. We have modified the text to indicate that the Office of Rightsizing was asked by the Department of Homeland Security to coordinate through one focal point.
9. We believe that the actions and measures that the Office of Rightsizing is taking, particularly the Interagency Rightsizing Summit, are useful steps toward implementing our recommendation. However, based on our discussions with non-State agencies, we maintain that more outreach is needed on rightsizing review efforts, strategy, and vision.

In addition to the individual named above, John Brummet (Assistant Director), Ann Baker, Joseph Carney, Virginia Chanley, Lyric Clark, Martin De Alteriis, Etana Finkler, Beth Hoffman León, Ernie Jackson, Andrea Miller, and Deborah Owolabi made key contributions to this report.
|
In 2001, the administration identified the rightsizing of embassies and consulates as one of the President's management priorities. Rightsizing initiatives include: aligning staff overseas with foreign policy priorities and security and other constraints; demonstrating results by moving administrative functions from posts to regional or central locations; and eliminating duplicative functions at posts. This report (1) discusses the size and recent trends in the U.S. government overseas presence, (2) assesses the congressionally mandated Office of Rightsizing's progress in managing the U.S. government's overseas rightsizing efforts, and (3) assesses the process and outcomes of the legislatively mandated rightsizing reviews of overseas posts. Almost five years into the President's Management Initiative on rightsizing, the U.S. government does not yet have accurate data on the size of the U.S. overseas presence. At various times, we received estimates ranging from 66,000 to 69,000 American and non-American personnel. In addition, State estimated that there are approximately 78,000 U.S. government positions overseas, as of December 2005. State Department (State) officials said that they are working on a unified database which, if periodically updated by posts, will provide an accurate depiction of the overseas presence. State officials indicated that the database will be completed later this year. Because of the importance of having accurate data on overseas staffing and the length of time it has taken to develop this data, management oversight may be needed to ensure completion of this task. Several agencies reported that they have added staff overseas as a result of new mission requirements, and other agencies reported that they have repositioned their personnel to better meet mission needs and in response to rightsizing efforts. 
State established the congressionally mandated Office of Rightsizing the United States Government Overseas Presence (Office of Rightsizing) in 2004, which, after a slow start, has begun to provide overall direction to the government-wide rightsizing process. Some of the office's activities have included coordinating staffing requests of U.S. government agencies, developing guidance for and analyzing post rightsizing reviews, and formulating a rightsizing review plan. We found that coordination on rightsizing issues between State and other agencies with an overseas presence was initially slow, but has since improved. Nevertheless, non-State agencies have voiced a number of concerns regarding their interaction with the Office of Rightsizing, including their desire to be more fully included in the rightsizing process. Congress requires Chiefs of Mission to conduct rightsizing reviews at every overseas post at least once every 5 years. Between late 2004 and summer 2005, about 35 posts participated in the first cycle of reviews. However, the Office of Rightsizing provided limited guidance to posts on how the reviews should be conducted and did not have a systematic process for reporting the outcomes of the reviews. In fall 2005, officials in the Office of Rightsizing developed more comprehensive guidance, which posts we interviewed found useful. We found that cost was not considered a key element in the post reviews. Nevertheless, the Office of Rightsizing reported over $150 million in cost savings or avoidance to the U.S. government based on its analysis of these reviews. Although we have not been able to independently assess the Office of Rightsizing's estimates, it has presented evidence to show that some major cost avoidance and cost savings have occurred. Management officers identified various challenges to the review process, such as resistance from non-State agencies and a lack of time to conduct the review.
According to post officials and officials in State's regional bureaus, it is unclear how posts will implement the rightsizing review decisions, such as the elimination of duplicative functions.
|
H.R. 3947 would require the GSA administrator to take a leadership role, in consultation with the heads of other federal agencies and the director of the Office of Management and Budget (OMB), in establishing and maintaining a current set of real property asset management principles. We support this provision. Agencies would use these principles as guidance in making decisions about property planning, acquisition, use, maintenance, and disposal. The bill would also require the GSA administrator, in consultation with the heads of other landholding agencies, to establish performance measures to determine the effectiveness of federal real property management. Performance measures could address such areas as operating costs, security, occupancy rates, and tenant satisfaction. The performance measures should enable Congress and the heads of federal agencies to track progress toward property management objectives on a governmentwide basis. This should also allow Congress and the agencies to compare federal agencies' performance against that of private sector and other public sector organizations. In addition, these provisions would emphasize the importance of effectively managing the government's multibillion-dollar portfolio of federal real property assets, help facilitate a uniform approach to asset management, and assist federal managers in monitoring progress and measuring results. Another important provision in H.R. 3947, which we support, is the establishment of a senior real property officer in each landholding agency, which emphasizes the importance of having someone with real property experience oversee each agency's real property assets. This bill includes qualification requirements for the senior real property officer, such as real estate portfolio or facilities management experience.
The senior real property officer would continually monitor real property assets to ensure that they are being used and invested in a way that supports the goals and objectives of the agency's strategic plan. This provision would make federal agencies with real property holdings accountable for the management and oversight of their real property assets. One important feature of having senior real property officers is that they can be held accountable for providing reliable, useful, and timely data on their agencies' real property assets to GSA for inclusion in its worldwide inventory. As you know, using data from over 30 real property-holding agencies, GSA maintains a governmentwide real property database commonly referred to as the worldwide inventory. This database is the only central source of descriptive data on governmentwide real property assets. As we found during our recently completed review of this inventory, which I will discuss later in more detail, decisionmakers, including Congress and OMB, currently do not have access to quality data for strategic management and budgeting purposes. Attempting to strategically manage and budget for the government's vast and diverse portfolio without quality data puts the government's real property operations at risk and can be likened to navigating the oceans of the world without the benefit of oceanographic charts. Although the senior real property officers would be responsible for developing their agencies' real property asset management plans, there also is a need for guidance in establishing standards for the plans so they are developed in a consistent manner. The adequacy of these plans will play a key role in improving real property management and oversight throughout the government. Consequently, this would provide GSA with an opportunity, in its role as the government's real property manager, to develop and provide specific guidance for agencies to use in preparing their real property management plans.
This guidance should describe the types of analyses to be included in the plans to support planned actions to be taken and conclusions reached in the plans. For example, the plan should include a discussion of the benefits to the agency or government that would result from the proposed actions, and it should provide an analysis of the asset performance necessary to deliver the required service outcomes over the duration of the asset strategy-planning period. We believe such guidance would help ensure that the strategic actions that agencies plan to take relative to their properties will best meet the intended service delivery outcomes defined in their strategic plans. We envision that the senior real property officers would work together with three other senior agency officials—the chief financial officer (CFO), the chief information officer (CIO), and the head of human resources—to integrate the strategic planning and management of facilities, financial management, technology, and human capital to ensure that the agencies’ asset management plans are linked to the agencies’ overall missions and strategic plans. Given the significant responsibilities foreseen for senior real property officers, we believe that in addition to the qualification requirements specified in the bill, the officers should also have a recognized professional designation or certification, such as certified facility manager or real property administrator. H.R. 3947 would require the GSA administrator to establish and maintain a single, comprehensive, and descriptive inventory database of all real property interests under the custody and control of each federal agency. Subject to certain limitations, and as deemed appropriate by the administrator, portions of this database would be available to interested stakeholders and the public. We believe that a comprehensive, reliable listing of federal properties, as envisioned by H.R. 
3947, is essential for the government to oversee and manage its large portfolio of federal assets. Lack of good data makes it difficult for the government to select the optimal level of capital spending needed for the acquisition and maintenance of real property. Inadequate data also impede the government's ability to identify real property assets that are no longer needed or cost-effective to retain. As I previously mentioned, GSA currently maintains a worldwide inventory of real property holdings. This week we reported that GSA's worldwide inventory of federal real property contained data that were unreliable and of limited usefulness. Worldwide inventory data for 12 of the 31 reporting agencies, which held an estimated 32 percent of the inventory in terms of building square footage, were not current in the most recent inventory report. In addition, the inventory did not contain key data, such as data related to space utilization, facility condition, historical significance, security, and facility age, that would be useful for budgeting purposes and the strategic management of these assets. Given this, decisionmakers, including Congress and OMB, do not have access to quality data on what real property assets the government owns; their value; whether the assets are being used efficiently; and what costs are involved in preserving, protecting, and investing in them. Without quality data, decisionmakers have difficulty strategically managing and budgeting for such significant real property management issues as deteriorating federal buildings, disposal of underutilized and unneeded properties, and the protection of people and facilities.
Consequently, we recommended, among other things, that the administrator of GSA exercise strong leadership and work with Congress, OMB, the Department of the Treasury (Treasury), and real property-holding agencies to design a cost-effective strategy for developing and implementing a reliable, timely, and useful governmentwide real property database. GSA agreed with the report's recommendations. Because there is a concern that GSA lacks specific statutory authority to compile the inventory, we also asked Congress to consider enacting legislation requiring GSA to maintain an accurate and up-to-date governmentwide inventory of real property assets and requiring real property-holding agencies to submit reliable data on their real property assets to GSA. This would give GSA added leverage in obtaining the data it needs from other federal agencies. GSA recognizes the problems associated with the worldwide inventory and has proposed several legislative initiatives in recent years to help correct the problems. This provision in H.R. 3947, if effectively implemented, can help GSA make the worldwide inventory a valuable resource. However, it is important to recognize that even if this provision is enacted, GSA will face formidable challenges in compiling reliable, timely, and useful data on federal real property. GSA will be challenged to identify and compile this data in a manner that the many real property-holding agencies, Congress, the Treasury, and OMB agree is cost-effective. Another challenge for GSA would be to work with participating agencies to make their real property databases capable of producing the common data that are needed to make the worldwide inventory an effective and valued resource. H.R. 3947 would also provide agencies with enhanced asset management tools and incentives for better property management.
These proposed changes would give agencies the flexibility to establish real property portfolios that most appropriately, effectively, and efficiently meet the agencies' mission requirements. The bill provides four new enhanced asset management tools for effective management of federal property: (1) interagency transfers or exchanges, (2) sales to or exchanges with nonfederal sources, (3) subleases, and (4) outleases and public-private partnerships. In addition, H.R. 3947 provides incentives for agencies to use these enhanced asset management tools and dispose of excess property by allowing them to retain proceeds generated to pay expenses associated with the property and fund other capital needs. Currently, the law for most federal agencies requires that proceeds from the sale of federal land and buildings go either to the general treasury or the Land and Water Conservation Fund. Within the last year we have issued two reports that addressed issues related to one of the enhanced management tools proposed in H.R. 3947: public-private partnerships. In our report on repairs and alterations, we said that GSA faced long-standing obstacles, including limited funding, in reducing its multibillion-dollar inventory of repair and alteration needs. In this report, we asked Congress to consider providing the administrator of GSA the authority to experiment with funding alternatives, such as exploring public-private partnerships when they reflect the best economic value available for the federal government. The other report identified the potential benefits of allowing federal agencies to enter into public-private partnerships. A public-private partnership allows the federal government to lease federal property to a nongovernmental entity to develop, rehabilitate, or renovate the facilities on that property for use by federal agencies and/or private sector tenants.
We hired consultants to develop and analyze hypothetical partnership scenarios for 10 judgmentally selected GSA properties. Appendix I contains a flowchart that shows how a public-private partnership may be structured. This work showed that 8 of these properties were potential candidates for a public-private partnership, and 2 did not appear to be viable candidates. We identified several potential net benefits to the federal government of entering into these public-private partnerships. These potential benefits included improved space, lower operating costs, and the conversion of buildings that are currently a net cost to GSA into net revenue producers. Location in a strong office real estate market with demand for federal and nonfederal office space and untapped value in underperforming assets were two key factors when considering properties for partnership opportunities. However, public-private partnerships will not necessarily be the best option available to address all real property issues. Ultimately, public-private partnerships and all other alternatives, such as federal financing through appropriations or sales or exchanges of property, would need to be carefully evaluated to determine which option offers the best economic value for the government. Public-private partnership arrangements are not new to some federal agencies. Congress has previously provided statutory authority for some specific public-private partnership projects. In addition, Congress has enacted legislation that gives VA and the Department of Defense (DOD) specific statutory authority to enter into such partnerships. We are currently evaluating issues related to DOD's implementation of the military housing privatization initiative. H.R. 3947 would give the administrator of GSA the sole discretion to review and disapprove any transaction by agencies proposing to use enhanced asset management tools.
The bill would require agencies to consult with GSA when developing their business plans for specific properties when they intend to use any enhanced asset management tools specified in the bill. A business plan outlines the scope of the project from an output and cost perspective, analyzes the costs and benefits associated with the project, and demonstrates that the project has a net benefit. In addition, a business plan should include an overview of the structure of the proposed arrangements as well as other elements. Consequently, the business plan is the key step in the decision-making process. Given GSA's role as the government's real property manager, other agencies would naturally look to GSA to develop and provide specific written guidance on how to develop their business plans. We believe that federal asset managers need the proper tools, expertise, and knowledge to effectively manage and oversee federal assets. Given this, the tools provided in H.R. 3947 are steps in the right direction for agencies to begin exploring opportunities to better utilize federal assets. However, it is important to recognize that enhanced asset management tools may result in complex real property transactions. For example, in structuring public-private partnerships for individual properties, it must be remembered that each property is unique and thus will have unique issues that will need to be negotiated and addressed as the partnership is formed. In addition, great care will need to be taken in structuring partnerships to protect the interests of both the federal government and the private sector partner. The senior real property officers will need to have access to individuals with the appropriate knowledge, skills, and expertise when they decide to explore more complex real estate transactions authorized by the bill.
The proposed enhanced asset management tools and other asset management tools currently available for real property management will also need to be carefully evaluated to ensure that they provide the best economic value and outcome for the government. As I discussed before, H.R. 3947 would allow agencies to retain proceeds generated from the transfer or disposition of their property. Under the bill, agencies would be authorized reimbursement for their costs of disposing of their property. The remaining proceeds would be deposited in agencies’ capital asset accounts that would be authorized by the bill and could be used to fund capital asset expenditures, including expenses related to capital acquisitions, improvements, and dispositions. These accounts would remain available until expended. In our April and July 2001 reports, we asked Congress to consider allowing GSA to retain the funds it received from real property transactions. Accordingly, we support the intent of these provisions. However, it is important to have effective congressional oversight over any receipts retained by agencies from real property transactions. In considering whether to allow federal agencies to retain the proceeds from real property transactions, it is important for Congress to ensure that it retains appropriate control and oversight over these funds, including the ability to redistribute the funds to accommodate changing needs if necessary. Congress has done this by using the appropriations process to review and approve agencies’ proposed use of the proceeds from real property transactions. Another approach could be for Congress to require agencies to submit plans on how they intend to use the proceeds in their capital accounts and report on the actual use of the proceeds. H.R. 3947 makes no distinction between facilities and land in permitting agencies to retain asset sales proceeds. 
Since our work has focused on facilities, our conclusions regarding sales proceeds are limited to facility sales. Specific issues related to the retention of land sales proceeds may need to be studied further and separately addressed. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or other members of the Subcommittee may have. For information about this testimony, please contact Bernard L. Ungar, Director, Physical Infrastructure Issues, on (202) 512-8387 or at [email protected]. Individuals making key contributions to this testimony included Ron King, Maria Edelstein, Susan Michal-Smith, David Sausville, Gerald Stankosky, and Lisa Wright-Solomon.
|
The Federal Property Asset Management Reform Act of 2002 will enhance federal real and personal property management and bring the policies and business practices of federal agencies into the 21st century. Available data show that the federal government owns hundreds of thousands of properties worldwide, including military installations, office buildings, laboratories, courthouses, embassies, postal facilities, national parks, forests, and other public lands, estimated to be worth billions of dollars. Most of this government-owned real property is under the custody and control of eight agencies: the Departments of Agriculture, Defense, Energy, the Interior, and Veterans Affairs; the General Services Administration; the Tennessee Valley Authority; and the U.S. Postal Service. Federal property managers face a large deferred maintenance backlog, obsolete and underutilized properties, and changing facility needs due to rapid advances in technology. It is important that real property-holding agencies link their real property strategic plans to their missions and related capital management and performance plans; ensure that senior real property officers have the knowledge, skills, and expertise needed to effectively perform their duties; be accountable for the reliability, usefulness, and timeliness of their data; and adopt an effective process to monitor and evaluate any management tool authorized by the bill. It is equally important that GSA provide written guidance to agencies on the development of their business and asset management plans and that Congress provide appropriate control and oversight of the intended and actual use of the funds retained from real property transactions.
|
In 1978, under the Airline Deregulation Act, the United States deregulated its domestic airline industry. The main purpose of deregulation was to remove government control and open the air transport industry to market forces. Previously, the Civil Aeronautics Board regulated all domestic air transport, controlling fares and setting routes. In this regulated market, airlines competed more through advertising and onboard services than through fares. When the industry was deregulated, "legacy" airlines carried over the cost structures that had been protected by price regulation. Similar to other highly regulated industries, the airline industry was heavily unionized, with a highly trained and stable workforce. By contrast, carriers that started operations after deregulation sought to attract passengers from legacy network carriers and to stimulate new passenger traffic—and did so—by offering lower fares. These airlines generally paid less for labor, on a unit cost basis, which helped them keep their overall operating costs low. In August 2004, we reported on the financial condition of the airline industry. High-end demand for air travel had begun weakening in 2000 because of an economic downturn, and demand dropped significantly following the September 11, 2001, terrorist attacks; the war in Iraq; and the outbreak of SARS. We found that in response to changing market conditions, legacy airlines had reduced costs, but mostly by reducing capacity and not nearly enough to be competitive with low cost airlines. Low cost airlines experienced significant growth and a fall in their unit costs as measured by cost per available seat-mile (CASM), whereas legacy airlines' unit costs did not improve. In addition, we found that neither legacy nor low cost airlines possessed much pricing power, and both suffered declining unit revenue.
As a result of their weak financial performance and mounting losses, legacy airlines saw their financial liquidity and solvency seriously deteriorate even as their debt and pension obligations mounted. Since our 2004 report was issued, losses have continued to mount for airlines even though traffic levels have returned to pre-9/11 levels. One of the primary culprits has been record fuel prices, which have nearly doubled since 2003 (see fig. 1). Low fares have affected revenues for both legacy and low cost airlines. Yields, the amount of revenue airlines collect for every mile a passenger travels, fell for both low cost and legacy airlines from 2000 through 2004 (see fig. 2). However, the decline has been greater for legacy airlines than for low cost airlines. Only during the first half of 2005 has stronger demand allowed airlines to increase fares sufficiently to boost their yields. Legacy airlines, as a group, have been unsuccessful in reducing their costs to become more competitive with low cost airlines. Unit-cost competitiveness is essential to profitability for airlines after years of declining yields. While legacy airlines have been able to reduce their overall costs since 2001, they have done so largely by reducing capacity and without improving their unit costs as compared to low cost airlines. Meanwhile, low cost airlines have been able to maintain low unit costs by continuing to grow and maintaining high productivity. As a result, low cost airlines have been able to sustain a unit-cost advantage over their legacy rivals (see fig. 3). In 2004, low cost airlines maintained a 2.7 cent advantage per available seat mile over legacy airlines. This advantage is attributable to lower overall costs and greater labor and asset productivity. Thus far in 2005, airlines have been able to trim most of their nonfuel-related costs, but high fuel prices and debt interest charges have kept airlines' costs from falling.
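The unit metrics discussed above are simple ratios, and a minimal sketch can make them concrete. All dollar and mileage figures below are invented for illustration and are not drawn from any airline's actual financial data; only the 2.7-cent unit-cost gap mirrors the figure cited in the text:

```python
# Illustrative sketch of airline unit metrics (hypothetical figures,
# not actual airline data).
# CASM  = operating cost / available seat miles
#         (the cost to fly one seat, occupied or not, one mile)
# Yield = passenger revenue / revenue passenger miles
#         (the revenue collected per paying passenger per mile flown)

def casm_cents(operating_cost, available_seat_miles):
    """Cost per available seat mile, in cents."""
    return 100.0 * operating_cost / available_seat_miles

def yield_cents(passenger_revenue, revenue_passenger_miles):
    """Revenue per revenue passenger mile, in cents."""
    return 100.0 * passenger_revenue / revenue_passenger_miles

# Hypothetical legacy carrier: $9.5 billion in cost over 95 billion
# available seat miles; hypothetical low cost carrier: $4.38 billion
# over 60 billion available seat miles.
legacy_casm = casm_cents(9.5e9, 95e9)        # 10.0 cents
low_cost_casm = casm_cents(4.38e9, 60e9)     # 7.3 cents
unit_cost_gap = legacy_casm - low_cost_casm  # the 2.7-cent gap from the text

print(f"legacy CASM:   {legacy_casm:.1f} cents")
print(f"low cost CASM: {low_cost_casm:.1f} cents")
print(f"gap:           {unit_cost_gap:.1f} cents")
```

When yield falls below CASM across a carrier's network, the carrier loses money on every seat mile it flies, which is why the report treats unit-cost competitiveness as essential once yields decline.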
Weak revenues and the inability to realize greater unit-cost savings have combined to produce unprecedented losses for legacy airlines. At the same time, low cost airlines have been able to continue producing modest profits (see fig. 4). Legacy airlines have incurred a cumulative $28 billion in operating losses since 2001. Despite a modest recovery for some airlines during the first half of 2005, analysts predict the industry will lose another $5 billion to $9 billion in 2005. Owing to continued losses, legacy airlines built cash balances not through operations but by borrowing. Legacy airlines have lost cash from operations and compensated for operating losses by taking on additional debt, relying on creditors for more of their capital needs than in the past. In doing so, several legacy airlines have used all, or nearly all, of their assets as collateral, potentially limiting their future access to capital markets. Airlines (and other businesses) that are unable to operate profitably over time may seek recourse under the U.S. Bankruptcy Code. In general, two major provisions of the bankruptcy code govern actions taken by airlines and other businesses: Chapter 7 of the code governs liquidation of the debtor’s estate and is often referred to as a “straight bankruptcy.” A trustee is appointed to sell off available assets to repay creditors. Chapter 11 of the code is designed to accommodate complicated reorganizations of publicly held corporations. Among other things, it allows companies, with court approval, to reject agreements made under collective bargaining and renegotiate contracts with other creditors. With the approval of the bankruptcy courts (which administer the bankruptcy laws), companies may also modify retiree benefits. Airline bankruptcies typically include a large number of stakeholders. The primary stakeholder is the airline itself, known as the debtor-in-possession. 
Federal stakeholders include the bankruptcy judge, who presides over the administration of the case and decides contested aspects, and the U.S. Trustee, whose duties include ensuring the integrity of the process and approving the retention of professionals (e.g., bankruptcy attorneys). During this most recent round of airline bankruptcies, two additional governmental entities have become major stakeholders in airline bankruptcies: the Air Transportation Stabilization Board (ATSB), which was formed after September 11 to administer a $10 billion loan guarantee program for airlines, and PBGC, which insures defined benefit pension plans. Both agencies have taken ownership stakes in bankrupt and nonbankrupt airlines through ATSB’s loan guarantees and PBGC’s taking over defined benefit pension plans terminated in bankruptcy. The entities that provide the financing while an airline is in bankruptcy (known as debtor-in-possession financing) and upon its exit (exit financing) are also major stakeholders, as are airline employees, many of whom are represented by labor unions. Other secured and nonsecured creditors and shareholders are also stakeholders in an airline bankruptcy. The interests of unsecured creditors (including labor) and shareholders are represented in the process by committees appointed by the U.S. Trustee. Among the largest cost elements for both legacy airlines and low cost airlines are those associated with employee compensation and benefits. As part of the retirement benefits offered, legacy airlines have tended to offer “defined benefit plans” and supplemental defined contribution plans, whereas low cost airlines tend to provide only “defined contribution plans.” Defined benefit plans typically provide participants with an annuity at retirement—a series of periodic payments over a specified period of time or for the life of the participant. 
As designed, defined benefit plan annuities are generally based on a participant’s retirement age, number of years of employment, and salary. As of December 31, 2004, nine major airlines sponsored defined benefit plans for their employees: Aloha, Alaska, American, Continental, Delta, Hawaiian, Northwest, US Airways, and United. These airlines generally offered different pension plans for different groups of employees—pilots, machinists, and flight attendants, for example—with varying levels of promised benefits. Defined contribution plans base pension benefits on the contributions to and investment returns on individual accounts. Contributions may consist of pretax or after-tax employee contributions, employer matching contributions that require employee contributions, and other employer contributions that may be made independent of any participant contributions. In a defined contribution plan, the employee bears the investment risk and often controls how the individual account assets are invested. PBGC was established to encourage the continuation and maintenance of voluntary private pension plans and to insure the benefits of workers and retirees in defined benefit plans should plan sponsors fail to pay benefits. However, if a pension plan’s assets are insufficient to pay accrued benefits, the plan can be terminated under certain conditions, and PBGC then assumes responsibility for paying retiree pensions. PBGC may pay only a portion of the benefits originally promised to employees and retirees. For 2005, the maximum statutory limit of annual benefits guaranteed by PBGC is $45,613.68 per participant, for retirement at age 65. The amount paid decreases at earlier retirement ages. Bankruptcy filings are prevalent in the U.S. airline industry because of long-standing economic structural issues that have led to historically weak financial performance for the industry. 
Structurally, the airline industry is characterized by high fixed costs, cyclical demand for its services, intense competition, and vulnerability to external shocks. As a result, airlines have been more prone to failure than many other businesses, and the sector’s financial performance has continually been very weak. Airlines frequently seek bankruptcy protection because of severe liquidity pressures, but while bankruptcy may provide some immediate protection from creditors, airlines in bankruptcy have not always been able to reduce their costs or avoid liquidation. Owing to the long history of airline bankruptcies, the process is well developed, and the code includes provisions applicable just to airline bankruptcies. Even so, the process can be lengthy and contentious—for example, United is in its third year of bankruptcy, and its process to date has included litigation over aircraft repossessions as well as employee pensions. Since the 1978 economic deregulation of the U.S. airline industry, airline bankruptcy filings have become prevalent in the United States, and airlines fail at a higher rate than companies in most other industries. This has been particularly true for small, new-entrant carriers. Since 1978, there have been 162 airline bankruptcy filings in the United States, 22 of them since 2000. Most of these bankruptcies were chapter 11 filings by small, new-entrant airlines that eventually liquidated. Only 24 of the filings were by airlines with over $100 million in assets; however, 12 of these large bankruptcies were filed after 2000 (see table 1). Because of certain structural characteristics, including its susceptibility to external shocks and historically weak financial performance, the airline industry is more prone to failure than many other types of businesses. Airlines have high fixed costs and are subject to highly cyclical demand and intense competition. 
Compounding these other structural problems is the industry’s vulnerability to external shocks—such as terrorist attacks or war—that decrease demand and increase costs. The result is that the airline industry has had the worst financial performance of any major industry. Structural characteristics of the airline industry have resulted in repeated cycles of boom and bust as its high fixed costs and particular sensitivity to seasonal and business cycle changes strain declining revenues. External shocks such as the Iraq War and the SARS epidemic have exacerbated the situation. Operating an airline requires expensive equipment and facilities as well as large numbers of people to operate them. Aircraft are very expensive—for example, the 2005 list price for a Boeing 777 ranges from $171 million to $253 million—and, therefore, airlines use outside financing to acquire a fleet. In the United States, airlines typically use operating leases, loans, or public financing instruments to fund their aircraft. Servicing these leases or debt instruments requires considerable and regular cash payments regardless of how extensively the aircraft are used. Airlines also rely on specialists like pilots and mechanics who cannot be easily replaced, making labor force adjustments to changes in demand more difficult. In addition, the workers of many carriers, particularly those of the legacy carriers, are covered by multiyear collective bargaining agreements. While such agreements may provide important protections to employees, they may limit carriers’ ability to respond quickly to cyclical changes in demand, much less unanticipated shocks like the September 11 attacks or SARS. Together, these characteristics result in long-term high fixed costs for an industry whose fortunes fluctuate with the business cycle. 
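The squeeze created by fixed lease and debt payments can be shown with a simple unit-cost calculation: because the payment is owed regardless of how much the aircraft flies, cost per flying hour rises sharply when utilization falls. The dollar figures below are hypothetical, chosen only to illustrate the mechanism:

```python
def cost_per_block_hour(monthly_lease_payment, variable_cost_per_hour,
                        hours_flown_per_month):
    """Total hourly cost: the fixed lease payment spread over the hours
    actually flown, plus the per-hour variable cost (fuel, crew, etc.)."""
    return monthly_lease_payment / hours_flown_per_month + variable_cost_per_hour

# Hypothetical narrowbody: $400,000/month lease, $2,000/hour variable cost.
# At 300 hours a month the fixed cost is spread thin:
print(round(cost_per_block_hour(400_000, 2_000, 300)))  # 3333 per hour
# Halve utilization (a demand shock) and the fixed share per hour doubles:
print(round(cost_per_block_hour(400_000, 2_000, 150)))  # 4667 per hour
```

This is why demand shocks are so damaging to airlines: revenue falls with traffic, but the largest cost items do not.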
The airline industry is very competitive and has become increasingly so with the emergence of low cost airlines and the relative ease with which new airlines gain access to capital and enter the industry. It is difficult for airlines to reduce their capacity because of the high fixed costs and low variable costs of providing service. Capacity increases by individual airlines are frequently matched by competitors. Low cost airlines’ share of domestic capacity grew from 10.8 percent in 1998 to 17.5 percent in 2004. Low cost airlines have been able to maintain their low costs by continuing to grow. Finally, despite historic losses in the industry, new airlines are still willing to enter the market. As of July 2005, seven carriers were obtaining operating certificates, while at least one other had obtained its operating certificate but was not yet operating. It is uncertain if and when these carriers will actually begin service. These carriers plan to provide domestic and international scheduled and charter service. These new airlines are indicative of the willingness of capital providers to finance aircraft despite the industry’s continued losses. Demand for air travel is closely tied to the business cycle and is subject to external shocks. So while airlines’ most prominent costs—for aircraft and labor—are locked into fixed payments and multiyear contracts, airline revenues fluctuate because demand is cyclical. External demand shocks can have a devastating impact on airline finances. For example, beginning in 2000, an economic downturn precipitated a decrease in high-end demand for air travel, while the terrorist attacks of September 11, the Iraq War, and the outbreak of SARS compounded that trend. These events contributed to the 22 airline bankruptcy filings since 2000. 
The structural issues discussed in the previous section have contributed to the airline industry’s historically poor financial performance and higher-than-average industry failure rate. This performance is illustrated by the industry’s weak revenues and lack of profitability. In particular, legacy airlines in aggregate have experienced operating losses in all quarters but one since September 11, 2001. A return to profitability that some financial analysts expected for legacy airlines in 2004 and 2005 has not materialized, in large part because of historically high oil prices. One way to measure the inherent instability of the airline industry is to compare its operating ratio with that of other industries. The operating ratio is the ratio of a company’s operating expenses to its operating revenues. One study found that from 1983 through 2001, the airline industry had the highest risk in relation to return of any industry sector when measured using this ratio. This study found that the airline industry had an operating ratio of 97 percent, well above the average of 83.5 percent for all other industries. Evidence of the volatility and weak financial performance of the airline industry can also be found by comparing airline failure rates with overall U.S. business failure rates. For 1997, the last year in which Dun & Bradstreet produced these data, the overall U.S. business failure rate was 0.9 percent, while the failure rate for the airline industry was three times greater, at 2.9 percent. Although we do not have overall business failure rates for more recent years, there is no reason to believe that the disparity between the rates has changed significantly since 1997 (see fig. 5). Bankruptcy has played a prominent role in the U.S. 
airline industry since deregulation because many carriers have used the bankruptcy code in an effort to restructure their operations and cut costs—by, for example, terminating defined benefit pension plans and rejecting high-cost aircraft leases. These carriers have met with varying degrees of success. Prominent examples include US Airways, which has entered chapter 11 twice since 2002 and has merged with America West Airlines, which itself went through bankruptcy 11 years before; United Airlines, which is hoping to emerge from bankruptcy in 2006 after more than 3 years in bankruptcy; and TWA, which entered bankruptcy three times before its assets were eventually acquired by American Airlines in 2001. Generally, major airlines have been able to reduce their costs during bankruptcy. Reductions in operating expenses were generally achieved through reductions in wages and in capacity. In eight of the nine largest airline bankruptcies over the last 25 years, operating expenses and capacity were reduced (see table 2). The exception was the first Continental Airlines bankruptcy, when the airline’s capacity doubled but expenses rose by only one-third. Typically, cost savings were achieved disproportionately by cutting wages—in six of the nine cases, reductions in wages were greater than the overall reduction in operating expenses. Most critically, however, unit costs were reduced in only five of the nine cases, and in two cases (TWA 1 and US Airways 1) unit costs went up, and by more than the industry average, perhaps explaining why those airlines filed for bankruptcy again within 2 years. Most airlines file to reorganize their operations and finances under chapter 11 of the bankruptcy code, some sections of which will change under the new bankruptcy law that comes into effect in October 2005. 
Given the number of airline bankruptcies that have occurred over the last 20 years, the process is well developed and understood by those involved, but it can still be quite contentious. Most U.S. airlines that are in financial distress and choose to file for bankruptcy protection file under chapter 11 of the U.S. bankruptcy code. Chapter 11 provides protection from creditors and allows a company to reorganize itself and become profitable again. Management—as the debtor-in-possession—continues to run the airline, but all significant decisions must be approved by the bankruptcy court. In a chapter 7 filing, the airline stops all operations and a trustee is appointed to sell the assets to pay off the debt. According to SEC, most publicly held companies will file under chapter 11 rather than chapter 7 because they can still run their business and control the bankruptcy process. For airlines, 148 of the 162 bankruptcy filings since 1978 were chapter 11 filings. Several sections of the bankruptcy code have played a prominent role in airline bankruptcies. Section 362—the automatic stay provision—gives an airline breathing room from its creditors by stopping all collection efforts and foreclosure actions and permitting the debtor to attempt to develop a repayment plan. Under section 1121, the airline’s management—or the private trustee if one has been appointed—currently has the exclusive right to file a reorganization plan for 120 days following the filing of the bankruptcy petition; this period may be extended for cause. Other parties-in-interest may file a plan if 120 days have elapsed without the debtor’s filing a plan or if 180 days have elapsed and the debtor’s plan has not been accepted by each class of creditors. This period may also be extended for cause. Other sections of the code govern actions an airline might take to restructure its operations and lower its costs in order to emerge from bankruptcy. 
For example, section 1113 governs the rejection of labor contracts and requires that the airline complete certain steps before requesting that the court abrogate contracts. Section 1110 gives an airline 60 days to accept or reject aircraft leases, which allows the airline to continue to operate without fear that its chief assets will be repossessed. Additionally, several subsections of section 365 currently relate to airline leases of airport terminals and gates. For example, an airline that leases more than one terminal or gate may not assume or assign the leases unless it assumes or assigns all of them to the same entity, which limits the ability of an airline to realize the full value of its leases. To emerge from bankruptcy, the airline devises and obtains approval of a reorganization plan from the bankruptcy court and obtains exit financing, which is used to operate the company once it is no longer within the jurisdiction of the bankruptcy court. The airline bankruptcy process has been honed over the past 27 years as carriers, large and small, have built on prior experiences and expertise. We interviewed numerous industry experts (attorneys, consultants, analysts, and current and former airline officials), many of whom have had experience in more than one airline bankruptcy. Additionally, several of these experts confirmed that the case law and documents produced by each bankruptcy case provide a body of expertise available for subsequent filers. They indicated that this documentation serves as precedent that is useful even though each bankruptcy case is unique. The process can also be contentious as the various stakeholders compete for their share of a dwindling pie. In recent airline bankruptcies, labor groups have disputed airlines’ right to cancel collective bargaining agreements and terminate defined benefit pension plans while airlines have challenged creditors. 
For example, United Airlines has been involved in litigation with its flight attendants over its termination of their pension plan and with a group of aircraft lessors over their aircraft repossessions during its current bankruptcy. On October 17, 2005, the first major overhaul of the nation’s bankruptcy laws in 9 years will become effective. Many provisions of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 apply to consumer bankruptcies, but several important provisions apply to corporate bankruptcies. Some of these provisions may induce distressed airlines to seek bankruptcy before the new law takes effect while other provisions may provide more advantages to airlines in bankruptcy. The mid-September Delta and Northwest bankruptcy filings may be an indication that these carriers were seeking to avoid some portions of the new bankruptcy law. First, the 2005 law limits the “exclusivity period” for the debtor to file a reorganization plan to 18 months after the bankruptcy filing. Currently, the debtor has the first 120 days to file a plan, and can obtain numerous extensions. The new limit will not force liquidations but will give other parties an opportunity to file a competing plan somewhat sooner, thereby limiting the debtor’s “exclusive period” of control of the business. One bankruptcy expert we spoke with indicated that this change would not affect the majority of business bankruptcies, since most are concluded within 180 days. However, because airline bankruptcies tend to take longer than those in many other industries, this change may induce airlines considering bankruptcy to file before October 17, 2005. Second, the new law eliminated two subsections of the code—365(c)(4) and 365(d)(5)-(9)—that limited bankrupt airlines’ options when assuming or assigning terminal and gate leases. 
This change in the law will favor airlines that control gates and leases, because they will have the potential to realize greater value from these assets when in bankruptcy. Third, the 2005 act increases the time limits on assuming or rejecting unexpired commercial and real property leases but limits extensions. Under the current code, the debtor has 60 days from the commencement of the case to assume or reject commercial real property leases, and this time is often extended by the bankruptcy court. The 2005 act increases the initial decision period to 120 days but allows for only one extension (of up to 90 days) after that. Therefore, debtors will have a maximum of 210 days from the commencement of the bankruptcy case to make a decision on these leases. The court may grant a subsequent extension only upon prior written consent of the lessors in each instance. In addition, the new law expands the grounds on which a chapter 11 case may be converted to chapter 7 and increases the circumstances under which a chapter 11 trustee may be appointed. The act also encourages fast-track chapter 11 cases by making it easier for debtors to implement prearranged plans. Finally, the new law regulates the circumstances for approval of key employee retention plans and related severance payments by requiring that (1) the debtor establish that the bonus is essential to retain the employee, (2) the employee have a bona fide job offer, and (3) the debtor prove that the employee’s services are essential to the survival of the company. Additionally, these bonuses and severance packages are linked to those that are paid to nonmanagement employees. This provision also might induce pre-October 17, 2005, airline bankruptcy filings. Airline bankruptcies can differ notably from bankruptcies in other industries along a number of dimensions. 
However, it is hard to determine whether the differences are directly attributable to the unique sections of the bankruptcy code specific to airlines or are the result of factors unique to the airline industry. Airline bankruptcies can take a long time to resolve. According to our analysis of the Bankruptcy Research Database, airline bankruptcies ranked fifth in overall duration (averaging 714 days), behind bankruptcies in such industries as water transportation and petroleum refining, and lasted significantly longer than the average for bankruptcies in all of the industries in the database, which was 518 days. (See fig. 6.) Airlines in bankruptcy also appeared to retain assets better than other industries, but at the cost of much greater debt; however, a limited number of observations precludes firm conclusions. According to available data for 19 of the top 50 bankruptcies since 1970, which involved 3 airlines and 16 other companies, the airlines’ assets were 0.8 percent lower on average after bankruptcy, while the other companies’ assets were 47.2 percent lower on average. At the same time, the airlines’ liabilities decreased 32.1 percent while the liabilities of companies in the other industries decreased 56.9 percent. Outcomes also differed for airline and other industry bankruptcies, according to the Bankruptcy Research Database. The airlines were more likely than the other industries in our analysis to liquidate. (See fig. 7.) However, airlines are also more likely than other industries to start bankruptcy in chapter 11, which may account for their greater tendency to liquidate once in chapter 11. For each group, a majority of the companies had reorganization plans confirmed by the court (i.e., the companies had exited or emerged from bankruptcy), though for airlines this majority was smaller because of the larger percentage of liquidations. 
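The asset and liability comparisons above are ordinary percentage changes between pre- and post-bankruptcy balance sheet values. A sketch of the calculation, with hypothetical dollar figures chosen so the results match the airline averages cited (a 0.8 percent drop in assets and a 32.1 percent drop in liabilities):

```python
def pct_change(before, after):
    """Percentage change from the pre-bankruptcy to the post-bankruptcy
    balance sheet value; negative means the value shrank."""
    return (after - before) / before * 100

# Hypothetical carrier (illustrative dollar figures, in millions):
assets_before, assets_after = 10_000, 9_920
liabs_before, liabs_after = 12_000, 8_148

print(f"assets:      {pct_change(assets_before, assets_after):+.1f}%")  # -0.8%
print(f"liabilities: {pct_change(liabs_before, liabs_after):+.1f}%")    # -32.1%
```

Reading the two figures together shows why the report calls this retention of assets "at the cost of much greater debt": relative to other industries, airlines kept nearly all their assets but shed a much smaller share of their liabilities.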
Our analysis of the Bankruptcy Research Database also revealed no discernible difference between airlines’ and other industries’ likelihood of reentering bankruptcy within 5 years. The rates at which airlines and other industries filed again for bankruptcy were just under 15 percent. However, these rates should be accepted with some caution and perhaps viewed as conservative because the database captured only refilings that occurred within 5 years and excluded companies with assets of less than $100 million. As a result, filings by companies not meeting one or the other criterion were not counted. There is no clear evidence that airlines in bankruptcy are harming the industry or their rivals or that bankruptcy is a panacea for airlines seeking an easy path to profitability. Some have asserted that protecting airlines in bankruptcy, rather than forcing liquidation, contributes to overcapacity in the industry. They further contend that bankrupt airlines underprice their rivals, hurting the financial well-being of healthier competitors. We found no evidence to support either contention and some evidence to the contrary. For example, despite many airline liquidations since deregulation in 1978, some of which were quite large, industry capacity has continued to grow unabated thanks to the growth of existing airlines and new entrants, often using the just-liquidated airline’s planes. We also found that capacity rebounded quickly in individual markets that experienced the liquidation or retreat of a significant airline, as other carriers quickly expanded capacity to compensate with little or no increase in overall average fares. Several studies have also found that airlines in bankruptcy have not reduced fares and did not harm rival airlines financially. Bankruptcies are not a panacea for airlines, as some might believe. Bankruptcy entails significant costs, loss of management control, and damaged relations with employees, investors, and suppliers. 
Of the 162 airlines that have filed for bankruptcy, 142 (88 percent) are no longer in operation. Contrary to some assertions, we found no evidence that bankruptcy protection has led to overcapacity and lower fares that have harmed healthy airlines, either in individual markets or in the industry overall. In 1993, a national commission to study airline industry problems cited bankruptcy protection as a cause for the industry’s overcapacity and fare problems. Airline executives have also cited bankruptcy protection as a reason for industry overcapacity and low fares. However, we found no evidence to support these views and some evidence to the contrary. Notably, both in individual markets and industrywide, the liquidation of major airlines has had only a very temporary or negligible effect on capacity, as other airlines have quickly replenished capacity. In part, this short-term effect can be attributed to the fungibility of aircraft and the notion that industry capacity is determined by the entire aviation supply chain and not solely by individual airlines. Finally, separate academic studies have found that airlines in bankruptcy have not lowered their fares or harmed the financial standing of their rivals. Both a national commission and airline executives have asserted, but without specific evidence, that bankruptcy protection allows airlines to avoid liquidation, thus contributing to industry overcapacity and underpricing that harms bankrupt carriers’ rivals. According to a 1993 report by the National Commission to Ensure a Strong Competitive Airline Industry, one of the causes of the industry’s financial problems was bankrupt airlines. Industry executives and some publications have gone further, stating that bankrupt airlines damage the entire industry. For example, a former Chairman of American Airlines asserted that bankrupt airlines contribute to industry overcapacity and are able to underprice rivals by virtue of their bankruptcy protection. 
However, very little evidence has been cited in any of these claims. In 1993, we testified that claims and counterclaims concerning the underpricing of bankrupt airlines had not been substantiated or considered in a larger context. There is little evidence that bankruptcy protection has contributed to industry overcapacity, at least in the long term. If it did, then some evidence that liquidation permanently removes capacity from the market should also exist. All indications are that this has not occurred. For example, industry capacity, as measured by available seat miles (ASM), grew two and one-half times from 1978 through 2004. Growth has slowed or declined just before and during recessions, but not as a result of large airline liquidations (see fig. 8). Capacity has continued to grow despite liquidations for a variety of reasons, including the fungibility of aircraft and the ease of entry, but ultimately capacity in any industry can be traced to the flow of capital into and out of the industry. For the airline industry, in which the chief asset (aircraft) is easily resold (fungible) and heavily leveraged, capital flows have supported the continued expansion of capacity even during industry downturns. Except for government subsidies to airlines or manufacturers, capital would flow to airlines only if the providers of that capital received a return on their investments. Evidence suggests that capital providers have profited and helps explain why airlines in bankruptcy continue to receive substantial capital support from other members of the value chain. Experts have espoused the notion of the value chain in understanding the role of companies in an industry. 
In the airline industry, the value chain includes aircraft and engine manufacturers, such as Boeing, General Electric, and Airbus; lessors, such as GE Commercial Aviation Service and International Lease Finance Corporation; global ticket distribution systems, like Sabre and Worldspan; credit card companies; airports; suppliers; and others. There is considerable evidence that these other members of the value chain have earned a good return on capital while airlines have not (see figs. 9 and 10). Those companies further up the value chain face less competition and are able to impose higher costs on airlines. Accordingly, these companies have a vested interest in ensuring that airlines survive and that capacity not leave the industry. Data from sources of financing to airlines that are in bankruptcy or financial trouble provide some evidence of the vested interests of value chain members in keeping troubled airlines alive. Table 3 lists some of the major injections of capital into airlines since 2004. Our research indicates that the departure or liquidation of a carrier from a market does not necessarily lead to a long-term decline in local traffic (i.e., that which originates at or is destined for the particular airport) for that market. We contracted with InterVISTAS-ga2, an aviation consultant, to examine traffic to and from six cities that experienced the departure or significant withdrawal of service of an airline (see table 4). In most cases, while total capacity and passenger traffic decreased, the reduction was largely attributable to the loss of connecting passenger traffic from the departing carrier. There was little diminution in local passenger traffic for most of these markets because other carriers increased their capacity to replace the departing carrier’s capacity. 
This research provides further evidence that demand drives capacity and that the departure of a carrier due to bankruptcy or a change in market strategy does not lead to a long- term decline in capacity. Appendix II contains additional detailed information on each case study. A major study of airline bankruptcies’ effects on air service also found that bankruptcy generally does not harm individual airline markets. This April 2003 study examined all major chapter 11 bankruptcies from 1984 through 2001 to determine if and how they affected air service. The study found that the effect of bankruptcies on large and small airports was insubstantial and not separable from normal fluctuations in air traffic. However, for medium-sized airports, the study found the bankruptcy of an airline with a significant share of flights reduced service by amounts that were statistically significant. Two major academic studies have found that airlines under bankruptcy protection do not lower their fares or hurt competitor airlines, as some have contended. A 1995 study found that an airline typically reduces its fares somewhat before entering bankruptcy. However, the study found that other airlines do not lower their fares in response and, more important, do not lose passenger traffic to their bankrupt rival and therefore are not harmed by the bankrupt airline. Another study came to a similar conclusion in 2000, this time examining the operating performance of 51 bankrupt firms, including 5 airlines. Rather than examine fares as did the 1995 study, this study examined the operating and financial performance of bankrupt firms and their rivals. The study found that the performance of a bankrupt firm deteriorates before the firm files for bankruptcy and its rivals’ profits also decline during this period. However, once the firm is in bankruptcy, its rivals’ profits recover. With very few exceptions, airlines that entered bankruptcy did not emerge from it. 
Many of the advantages of bankruptcy stem from the legal protection afforded the debtor airline from its creditors, but this protection comes at a high cost in loss of control over airline operations and damaged relations with employees, investors, and suppliers. Bankruptcy involves many costs for airlines that file. The financial costs include the consultant and legal fees of managing a lengthy bankruptcy. For example, United, which filed for bankruptcy in December 2002, had spent nearly $260 million in legal fees as of June 2005. A study of bankruptcy fees found that large companies generally spend an average of 2.2 percent of their assets on legal fees while in bankruptcy. The fees for United are high for a company of its size, and they are rising as the company continues to operate under chapter 11. These fees, thus far, make United’s bankruptcy the seventh most costly bankruptcy of all time. Bankruptcy also wipes out shareholders’ equity, which may mean significant losses for the original owners, and leaves them without a financial interest in the company. Finally, airlines in bankruptcy do not immediately receive all the cash from credit card ticket sales because credit card companies protect themselves against liquidation by withholding a large percentage of receipts until travel is actually taken. For the cash-flow-intensive airline business, this wait is difficult. In addition to financial costs, there are many negative factors to be considered by firms filing for bankruptcy. Notably, airline officials told us, loss of control over the airline’s operations can be significant, because the courts must approve important changes, such as sales of assets or significant changes in fare structures or schedules. Rival airlines are able to learn of strategic changes well before they may occur. There may also be damage to public and customer perceptions of the airline. 
Finally, bankruptcy damages, sometimes permanently, relations with employees if they are made to bear a significant portion of the bankruptcy costs. In other cases, an airline may suffer a “brain drain” when its most talented employees seek employment elsewhere. Very few airlines have emerged from bankruptcy and are still operating. Many others have gone out of business through liquidation or merger. Of the 162 airline bankruptcy filings by 142 different airlines since 1978, 148 were for chapter 11 reorganization and 14 were for chapter 7 liquidation (see table 5). Of the 148 chapter 11 reorganization filings, in only 18 cases does the airline still hold an operating certificate from the Federal Aviation Administration (FAA). Market factors, management-labor decisions, and pension law provisions have played a role in airline pension underfunding of approximately $13.7 billion, with an estimated $10.4 billion in minimum funding requirements due from 2005 through 2008 as a result. These pension obligations contribute to the liquidity problems faced by legacy airlines that still operate pension plans, and may help cause additional airlines to declare bankruptcy. Remaining airline pensions expose PBGC to $23.7 billion in unfunded pension obligations and would result in significant benefit reductions to participants if their pension plans are terminated. PBGC has taken over a combined $24.9 billion in pension obligations from US Airways and United within the last 3 years, at a cost of over $9.7 billion to the agency. While eliminating or easing pension plan obligations may help ease legacy airlines’ immediate liquidity pressures, they do not eliminate the structural cost imbalance between legacy and low cost airlines, or guarantee that the legacy airlines will avoid bankruptcy. 
Pension reform proposals—including extending payment time frames, changing premium rules, and using a yield curve to calculate liabilities—would have differential effects among airlines and implications for PBGC. Airline defined benefit pensions are underfunded by approximately $13.7 billion, according to airline financial reports filed with SEC. This underfunding is down from $21 billion at the end of 2004 as a result of the termination and transfer of US Airways’ remaining pension plans and all of United’s pension plans to PBGC. Under existing law, minimum pension contribution requirements for the remaining legacy airlines that still operate plans are estimated to be at least $10.4 billion from 2005 through 2008. These minimum contribution requirements contribute to airline liquidity problems. Estimates suggest the combined costs of the minimum pension contribution requirements, long-term debt, capital leases, and operating leases will exceed available cash. The magnitude of legacy airlines’ future pension funding requirements is attributable to the size of the pension shortfall that has developed since 2000. As recently as 1999, airline pensions were overfunded by $700 million, according to SEC filings; by the end of 2004, legacy airlines reported a deficit of $21 billion (see fig. 11), despite the termination of the US Airways pilots’ plan in 2003. Since these filings, the total underfunding has declined to approximately $13.7 billion, in part because of the termination of the remaining US Airways plans and all of the United plans. The extent of pension underfunding varies significantly by airline. At the end of 2004, before terminating its pension plans, United reported underfunding of $6.4 billion, an amount equal to over 40 percent of its total operating revenues in 2004. In contrast, Alaska reported pension underfunding of $303 million at the end of 2004, equal to 13.5 percent of its operating revenues. 
Since United terminated its pension plans, Delta and Northwest have the most significant pension funding deficits—over $5 billion and nearly $4 billion, respectively—which represent about 35 percent of each airline’s 2004 operating revenues (see fig. 12). PBGC released estimates after Delta and Northwest declared bankruptcy on September 14, 2005, stating that on a termination basis Delta’s defined benefit plans were underfunded by $10.6 billion, while Northwest’s underfunding totaled $5.7 billion. Under current law, companies whose pension plans fail certain funding benchmarks and are underfunded by more than 10 percent on a current liability basis must make deficit reduction contributions (DRC), in addition to other contributions, to remedy the underfunding. Minimum contribution requirements, including DRCs, are estimated to total a minimum of $10.4 billion from 2005 through 2008. These estimates assume the expiration of the Pension Funding Equity Act (PFEA) at the end of this year. PFEA permitted airlines to defer the majority of their DRCs in 2004 and 2005. If this legislation is allowed to expire at the end of 2005, payments due from legacy airlines will significantly increase in 2006. According to PBGC data, legacy airlines are estimated to owe a minimum of $1.5 billion this year, nearly $2.9 billion in 2006, $3.5 billion in 2007, and $2.6 billion in 2008 (see fig. 13). Declines in pension plan assets from investment losses and low interest rates have been significant factors in current pension underfunding. Airline pension asset values dropped nearly 15 percent from 2001 through 2004 because of the decline in the stock market, while future obligations have steadily increased because of (1) declines in the yields on the fixed-income securities used to calculate the liabilities of plans, and (2) new benefit accruals. 
Management and labor decisions increased pension obligations in profitable years, but much less was contributed to the pension funds than could have been. In addition to these factors, pension funding rules have not prevented plans from becoming significantly underfunded. Even though US Airways and United Airlines were in full compliance with the minimum funding rules for pension plans prior to bankruptcy, their plans, in aggregate, were underfunded by nearly $15 billion at termination. Pension asset values for legacy airlines reached a high in 2000 of $35.8 billion. Investment returns turned negative in 2001 and caused the value of airline pension assets to decline. By 2002, the value of legacy airline pension assets dropped to $26.2 billion—a loss of over $9 billion (26.7 percent). By 2004, pension asset values recovered to $30.4 billion, about 15 percent below the high in 2000 (see fig. 14). If PBGC takes over an underfunded plan after it has been terminated, the plan’s liabilities and assets are transferred to PBGC. If the plan’s assets are insufficient to cover the plan’s liabilities, PBGC, and sometimes plan participants, must assume the loss. While the Employee Retirement Income Security Act provides some standards of conduct for the plan sponsor’s investment practices, the sponsor’s chosen plan fiduciary has discretionary control over the management of plan assets. We did not examine the investment practices of airlines or other companies; however, one union has suggested that airline investment practices may have contributed to plan failure and has requested that PBGC conduct an audit to ensure the integrity of asset investment practices. PBGC, however, does not have the authority to conduct this type of audit; this responsibility falls under the authority of the Department of Labor. The decline in key interest rates compounded the loss in asset value by increasing the value of pension liabilities. 
Interest rates are critical factors in calculating the level of plan assets needed today in order to fulfill promised benefits. When interest rates are lower, projected returns on assets are lower, requiring more money to be invested today to finance promised future benefits. At a 6-percent interest rate, for example, a promise to pay $1 per year for the next 30 years has a present value of $14. If the interest rate is reduced to 1 percent, however, the present value of the promise to pay $1 per year for the next 30 years increases to $26. Bond yields underpinning the interest rates used to calculate pension liabilities on a current liability basis have been trending lower since the early 1980s, causing the value of future liabilities to grow. Until 2004, the interest rate used to calculate liabilities on a current liability basis was based on the 30-year Treasury bond rate. PFEA changed the basis of this interest rate from the 30-year Treasury bond rate to a composite index of high-grade corporate bonds for years 2004 and 2005. As figure 15 shows, the two rates track each other fairly closely, but the 30-year Treasury rate is lower. In addition to market forces, decisions made by management and labor have increased pension liabilities. Although management and labor unions have agreed to a number of changes to collective bargaining agreements that have limited pension and other benefits in recent years, labor agreements have also increased pension liabilities in a number of instances since the late 1990s. In some instances, pension benefits increased beyond what financially weak airlines could reasonably afford. For example, in the spring of 2002, United’s management and mechanics reached a new labor agreement that increased the mechanics’ pension benefit by 45 percent, but the airline declared bankruptcy the following December. In addition, legacy airlines have funded their pension plans far less than they could have, even during the airlines’ profitable years. 
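The interest-rate sensitivity described above follows directly from the standard present value formula for a level annuity. The sketch below, which is illustrative rather than part of the report's analysis, reproduces the report's $1-per-year example at the 6 percent and 1 percent rates quoted:

```python
def annuity_pv(payment, rate, years):
    """Present value of a level annual payment stream at a given discount rate."""
    return payment * (1 - (1 + rate) ** -years) / rate

# The report's example: a promise to pay $1 per year for 30 years.
print(round(annuity_pv(1, 0.06, 30)))  # at a 6% rate, about $14
print(round(annuity_pv(1, 0.01, 30)))  # at a 1% rate, about $26
```

The same mechanics, scaled up to billions of dollars of promised benefits, explain why the decline in bond yields since the early 1980s has steadily increased measured pension liabilities.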
PBGC examined 101 cases of airline pension contributions from 1997 through 2002 and found that while airlines made the maximum deductible contribution in 10 cases, they made no contributions in 49 cases when they could have contributed. When airlines did make tax-deductible contributions, the contributions were often far less than permitted. For example, in 2000, the airlines PBGC examined could have made a total of $4.2 billion in tax-deductible contributions, but they contributed only about $136 million despite recording profits of $4.1 billion (see fig. 16). PBGC has taken over a number of pension plans that have been substantially underfunded even though their sponsors were in full compliance with the minimum funding requirements. Existing laws governing pension funding and premiums have not protected PBGC from accumulating a significant long-term deficit and have not minimized the impact of PBGC’s exposure to the moral hazard arising from insuring pension plans. The minimum funding rules depend on the plan sponsor being in good financial health and continuing operations indefinitely; the rules do not ensure that the plan sponsor will have the means to meet the plan’s benefit obligations if the plan sponsor encounters financial distress. Meanwhile, in the aggregate, premiums paid by plan sponsors under the pension insurance system have not adequately reflected the financial risk to which PBGC is exposed. Accordingly, defined benefit plan sponsors, acting within the rules, have been able to turn significantly underfunded plans over to PBGC, thereby creating PBGC’s current deficit. This section addresses three aspects of the rules—the current liability measure, the use of credit balances in meeting funding requirements, and PBGC’s premium structure. 
The current liability measure, which measures the value of a plan’s accrued liabilities to date for funding purposes, may provide an overly optimistic picture of a plan’s financial status and the sponsor’s ability to fulfill its obligations. Such a picture is possible because the current liability measure tacitly assumes, among other things, that the plan and its sponsor are financially healthy, viable entities. For a plan whose sponsor is in financial trouble, a more conservative measure, the termination liability, is likely to present a more realistic picture of the liabilities the plan has accrued to date. From 1998 through 2002, airline pensions were consistently funded above 90 percent on a current liability basis. By that measure, the plan sponsors were not required to make contributions because the “full funding limitation” exemption applied. In contrast, the funding status of airline pensions on a termination basis during this time was under 90 percent in each year except 2000, with a spread of more than 25 percent between the two measures in 2003. Figure 17 illustrates the difference in aggregate funding status shown by each measure. The result is that pensions often are significantly more underfunded when plans are terminated than the current liability measure indicates. US Airways’ and United Airlines’ recent pension plan terminations illustrate this point. When these airlines terminated their pension plans, the plans’ combined benefit liability was $24.9 billion. Combined assets in the funds totaled $10 billion—a 60 percent shortfall. The ability of sponsors to use funding credits to fulfill minimum contribution requirements has also contributed to pension plan underfunding. 
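The 60 percent shortfall cited above for the US Airways and United terminations is simply the gap between combined benefit liabilities and combined assets, measured on a termination basis. A quick check using the figures from the text:

```python
# Combined figures at termination for US Airways and United, from the report.
liability = 24.9  # combined benefit liability, $ billions
assets = 10.0     # combined plan assets, $ billions

funded_ratio = assets / liability
shortfall_pct = (liability - assets) / liability * 100
print(f"funded ratio on a termination basis: {funded_ratio:.0%}")
print(f"shortfall: {shortfall_pct:.0f} percent")
```

The same plans had appeared far better funded on a current liability basis, which is the point of the contrast drawn above.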
Plan sponsors accumulate funding credits when they contribute more than the minimum contribution requirement in a plan year or when the plan’s actual experience, including investment returns on assets, exceeds expectations; these credits can then be substituted in later years for cash contributions. In this way, funding credits can act as a buffer against potentially volatile funding requirements and allow sponsors flexibility in managing their annual level of pension contributions. If the market value of a plan’s assets declines, however, the value of funding credits may be significantly overstated. This overstatement occurs because credits are not measured at their market value and are credited with interest each year. For example, a sponsor can accrue a $1 million credit by making a $1 million contribution above the minimum contribution requirement. Even if the $1 million in assets loses all value in the following year, the $1 million credit balance remains and may be used as a credit toward the plan’s minimum contribution requirement. In addition, the sponsor would have to report only a portion of that lost $1 million in asset value as a plan charge the following year because of smoothing rules that allow losses to be amortized over multiple years. Over the past 5 years, airlines have used funding credits to fulfill minimum contribution requirements despite significant levels of pension underfunding. For example, starting in 2000, United used funding credits to avoid making cash contributions to its pilots’ plan, even though the true funded status of the plan had deteriorated. The plan was only 50 percent funded at termination. Similarly, US Airways avoided contributing cash to its pilots’ plan by applying funding credits to fulfill its minimum contribution requirements. At termination, this plan was only 33 percent funded. Finally, the premium structure in PBGC’s single-employer pension insurance program does not reflect the agency’s exposure to financial risk. 
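The overstatement mechanism described above can be sketched numerically. In the illustration below, the $1 million extra contribution comes from the report's example, while the 8.5 percent crediting rate and the 50 percent asset loss are hypothetical assumptions chosen only to show the divergence:

```python
# Hypothetical illustration of the funding-credit overstatement.
credit_balance = 1_000_000   # extra contribution above the minimum (from the report)
asset_value = 1_000_000      # market value of the contributed assets
crediting_rate = 0.085       # assumed plan interest rate (illustrative)

# A year passes: the contributed assets lose half their market value...
asset_value *= 0.5
# ...but the credit balance is not marked to market; it accrues interest instead.
credit_balance *= 1 + crediting_rate

print(f"market value of assets: ${asset_value:,.0f}")
print(f"usable credit balance:  ${credit_balance:,.0f}")
```

The credit balance can then be applied against the next year's minimum contribution requirement even though the assets backing it are worth far less, which is how United and US Airways could satisfy the funding rules while their pilots' plans deteriorated.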
Although PBGC premiums may be partially based on plan funding levels, they do not consider other relevant risk factors, such as the economic strength of the sponsor or the plan’s asset investment strategies, benefit structure, or demographic profile. The current premium structure relies heavily on flat-rate premiums, which are unrelated to risk. PBGC also charges plan sponsors a variable-rate premium based on the plan’s level of underfunding; however, underfunded plans are not required to pay this premium if they satisfy the full funding limit or another exemption. In addition, current pension funding and pension accounting rules— especially those that permit assets to be smoothed rather than valued at their market rate—may encourage sponsors to invest in riskier assets and potentially benefit from higher expected long-term rates of return. In determinations of funding requirements, a higher expected rate of return on pension assets means that a plan needs to hold fewer assets to meet its future benefit obligations. Under current accounting rules, the greater the expected rate of return on plan assets, the greater the plan sponsor’s operating earnings and net income. However, higher expected rates of return require riskier investments that lead to greater investment volatility and risk of losses. Estimated minimum pension contribution requirements of $10.4 billion over the next 4 years, combined with other fixed obligations, threaten the liquidity position of the remaining legacy airlines with pension plans. As a result, some airlines have suggested they will be forced to declare bankruptcy and terminate their pension plans if they are not granted some form of pension relief. Pension plan terminations often result in significant benefit cuts to participants and cost PBGC billions. 
When United and US Airways terminated their pension plans and transferred $19.6 billion in pension obligations to PBGC, participants lost a total of $5.3 billion in benefits, and PBGC incurred costs of $9.7 billion to cover the gap between guaranteed benefits and available assets. Remaining airline pension plans expose PBGC to an additional $23.7 billion in unfunded benefit obligations. Although pension plan terminations provide airlines with significant liquidity relief in the near term, these terminations alone will not make legacy airlines cost competitive with low cost airlines, which offer 401(k)-type defined contribution plans. The size of legacy airlines’ future fixed obligations (including pensions, long-term debt, and capital and operating leases) relative to their financial position suggests these airlines will have trouble meeting their various financial obligations, regardless of whether they terminate their pension plans. Legacy airlines’ fixed obligations in each year from 2005 through 2008 significantly exceed the total year-end 2004 cash balances of these same legacy airlines. Legacy airlines carried a combined cash balance of just under $10 billion going into 2005 (see fig. 18) and have used cash to fund their operating losses. These airlines’ fixed obligations are estimated to be over $15 billion in both 2005 and 2006, over $17 billion in 2007, and about $13 billion in 2008. Fixed obligations exceed total year-end 2004 cash by an average of $2.7 billion during this time even when pension obligations are not included. While cash from operations can fund some of these obligations, continued losses and the size of these obligations put these airlines in a sizable liquidity bind. Fixed obligations in 2008 and beyond will likely increase as payments due in 2006 and 2007 may be pushed out and as new obligations are assumed. If these airlines continue to lose money this year, as analysts predict, their position will become even more tenuous. 
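The liquidity squeeze described above can be summarized with the report's own approximate figures. The sketch below compares the combined year-end 2004 cash balance against each year's estimated fixed obligations (the report states the obligations as "over" or "about" these amounts, so the values are rounded):

```python
cash_2004 = 10.0  # combined legacy-airline cash going into 2005, $ billions (approx.)

# Estimated annual fixed obligations, $ billions, rounded from the report.
fixed_obligations = {2005: 15.0, 2006: 15.0, 2007: 17.0, 2008: 13.0}

for year, obligation in fixed_obligations.items():
    gap = obligation - cash_2004
    print(f"{year}: obligations exceed year-end 2004 cash by about ${gap:.0f} billion")
```

Because cash from operations has been negative or weak, each year's gap must be closed by new borrowing, asset sales, or deferral, which is why the report characterizes the airlines' position as a sizable liquidity bind.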
Nor will easing required pension contributions fix the legacy airlines’ underlying structural cost disadvantage. Pension costs, while substantial, are only a small portion of legacy airlines’ overall unit costs. The cost of legacy airlines’ defined benefit plans accounted for approximately 0.4 cent per available seat mile, or about 15 percent of the unit cost difference between legacy and low cost airlines (see fig. 3). The remaining 85 percent of the unit cost differential between legacy and low cost airlines is attributable to factors other than defined benefit pension plans. Furthermore, even if legacy airlines terminated their defined benefit plans, this portion of the unit cost differential would not be fully eliminated because, according to PBGC staff and industry labor officials we interviewed, other plans would replace the defined benefit plans. The cost to PBGC and participants of defined benefit pension plan terminations has grown in recent years as the level of pension underfunding has deepened (see table 6). When Eastern Airlines defaulted on its pension obligations of nearly $2.2 billion in 1991, for example, the net claim against PBGC totaled $701 million. By comparison, US Airways’ and United’s pension plan terminations cost PBGC $9.7 billion in combined claims against the agency. The remaining legacy airlines’ defined benefit plans expose PBGC to billions more in potential losses. At the end of 2004, these legacy airlines reported $23.7 billion in total termination liabilities for their defined benefit plans, with assets to cover 48 percent of these obligations. When US Airways and United terminated their pension plans, active and high-salaried employees generally lost more of their promised benefits than did retirees and low-salaried employees because of statutory limits. For example, PBGC generally does not guarantee benefits above a certain amount, currently $45,614 annually per participant retiring at age 65. 
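The figures above imply a total unit cost gap between legacy and low cost airlines of roughly 2.7 cents per available seat mile (0.4 cent divided by 15 percent). This back-of-the-envelope check, which is an inference from the report's figures rather than a number stated in the report, shows how little of the gap pension terminations alone could close:

```python
pension_cost_per_asm = 0.4   # cents per available seat mile, from the report
pension_share_of_gap = 0.15  # pensions' share of the unit cost differential

total_gap = pension_cost_per_asm / pension_share_of_gap
non_pension_gap = total_gap - pension_cost_per_asm
print(f"implied total unit cost gap:   {total_gap:.2f} cents/ASM")
print(f"non-pension portion of gap:    {non_pension_gap:.2f} cents/ASM")
```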
For participants who retire before age 65, the guaranteed benefit amounts are less; for instance, participants who first receive benefits from PBGC at age 60 are guaranteed benefits of $29,649. Commercial pilots often end up with substantial benefit cuts when their plans are terminated because, according to PBGC, their benefits generally exceed PBGC’s maximum guaranteed amount. In addition, if they elect to begin receiving benefits from PBGC at age 60—the age at which FAA requires pilots to retire from operating commercial service flights—their benefits are cut further. While the loss of a defined benefit plan can be substantial for pilots, they typically have additional and sometimes sizable retirement plans, such as 401(k) plans, that supplement their pension plans. Nonpilot retirees are not as often affected by the maximum payout limits. For example, at US Airways, fewer than 5 percent of the retired mechanics and flight attendants faced benefit cuts when their pension plans were terminated. Retirees generally fare better than active employees because they receive higher priority when PBGC allocates existing assets at plan termination. For example, PBGC estimates that the pension benefits of all United’s active ground employees will be cut, with 71 percent of these employees facing estimated cuts of between 1 percent and 25 percent. Of United’s retired ground employees, an estimated 39 percent will face benefit cuts; of these retired employees, an estimated 93 percent will see reductions of between 1 and 25 percent. Tables 8 and 9 summarize the expected cuts in benefits for different groups of United’s active and retired employees. In addition to reducing pension plan benefits, airlines have made significant cuts to active employees’ health care benefits. For example, American Airlines increased its active pilots’ monthly contributions for family health care coverage by 162 percent and began to require contributions by disabled pilots for health care coverage. 
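The two PBGC guarantee amounts quoted earlier in this section ($45,614 for benefits beginning at age 65 and $29,649 at age 60) imply an age-60 reduction to about 65 percent of the age-65 maximum. The factor below is inferred from the quoted amounts, not a figure stated in the report:

```python
max_at_65 = 45_614  # annual PBGC maximum guarantee at age 65, from the report
max_at_60 = 29_649  # annual guarantee for benefits starting at age 60, from the report

reduction_factor = max_at_60 / max_at_65
print(f"implied age-60 factor: {reduction_factor:.2%}")  # about 65% of the age-65 maximum
```

This reduction, on top of the maximum guarantee itself, is why pilots, who must stop flying commercial service at age 60, bear some of the largest benefit cuts at plan termination.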
Before 2003, United’s ramp service employees did not have to make monthly contributions for family health care coverage; however, these employees now must contribute $173 a month for their coverage. While active employees’ health benefits have been cut, retirees’ health care plans have not changed significantly. Union officials said that reductions in retirees’ health care benefit would not produce the savings sought by the airlines and were not considered foremost during contract negotiations. The decline of PBGC’s financial condition, the expiration of PFEA at the end of the year, and pension plan terminations at US Airways and United have prompted congressional consideration of various reform proposals for defined benefit pensions. Currently, the three most prominent proposals are the administration’s plan; H.R. 2830, “The Pension Protection Act of 2005;” and S. 219, “The National Employee Savings and Trust Equity Guarantee Act of 2005.” All three are broad reform proposals that seek to strengthen the defined benefit pension system in the long term and attempt to resolve fundamental problems with the system, as highlighted in this report and other GAO reports. For example, all three proposals contain, among others, provisions that a) modify the measurement of pension assets and liabilities, b) increase the premiums paid to PBGC, c) restrict lump-sum distribution provisions, and d) adjust disclosure requirements. From the airlines’ perspective, an important difference among the bills concerns the length of time over which they can amortize the large minimum contribution requirements currently due over the next 4 years. The administration’s proposal and H.R. 2830 would use a 7-year payment period. According to a document issued by the Joint Committee on Taxation, S. 219 would extend the amortization payment period to 14 years, but only for airlines that “freeze” their defined benefit plans. 
Table 9 suggests how this provision could significantly reduce the airlines’ minimum contribution requirements in 2006. Amortizing these obligations over 14 years would have an immediate impact on the airlines’ liquidity. The rationale for extending the amortization period is that unless airlines receive funding relief, existing minimum contribution requirements may have such an adverse effect on their liquidity that they will be forced into bankruptcy. The airlines then could terminate their pension plans and transfer billions in obligations to PBGC. To prevent such terminations, according to the Joint Committee on Taxation, S. 219 would decrease the required annual contribution by allowing the airlines to extend their payments over a longer period. Requiring the airlines to “freeze” their existing plans is designed to limit PBGC’s exposure in case the airlines cannot recover financially and terminate the plans before fully funding them over the 14-year period. Although extending the amortization period would provide some liquidity relief to the remaining legacy airlines with defined benefit plans, it would not solve those airlines’ overall financial problems, and the extent to which it would limit PBGC’s exposure to additional pension liabilities is unclear. As shown in figure 18, pension obligations are only part of a much larger set of fixed obligations through 2008. Given these other fixed obligations and persistent high fuel prices, pension relief alone will not solve those airlines’ financial problems, nor can it guarantee that airlines will not declare bankruptcy in the future. Furthermore, there is no assurance that PBGC’s financial exposure will be limited. According to a summary by the Joint Committee on Taxation, S. 219 requires pensions to be frozen for the extended amortization period to apply; however, liabilities could still increase. 
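The liquidity effect of the longer payment period can be illustrated with a level-payment amortization of the $10.4 billion in required contributions. This is a simplified sketch, not the bills' actual funding mechanics, and the 6 percent discount rate is an illustrative assumption rather than a figure from the report or the legislation:

```python
def level_payment(principal, rate, years):
    """Annual payment that amortizes `principal` over `years` at interest rate `rate`."""
    return principal * rate / (1 - (1 + rate) ** -years)

shortfall = 10.4  # $ billions of minimum contributions due, from the report
rate = 0.06       # assumed discount rate (illustrative)

pay_7 = level_payment(shortfall, rate, 7)    # administration proposal and H.R. 2830
pay_14 = level_payment(shortfall, rate, 14)  # S. 219, for airlines that freeze their plans
print(f"7-year schedule:  about ${pay_7:.1f} billion per year")
print(f"14-year schedule: about ${pay_14:.1f} billion per year")
```

Under these assumptions the 14-year schedule cuts the annual cash requirement by roughly 40 percent, which is the near-term liquidity relief the airlines are seeking, while stretching PBGC's exposure over a longer horizon.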
For example, liabilities may increase with salary increases for existing participants because pension benefits are based on participants’ salaries. Even if liabilities are frozen, a plan’s assets could decrease, leaving PBGC with fewer assets to cover obligations. In the short term, extending the amortization period might prevent airline pension plan terminations, allow employees to collect more benefits than they might otherwise collect, and allow PBGC to avoid taking over plans that are significantly underfunded. In the long term, however, special treatment of airlines could potentially expose PBGC to even greater costs. After 27 years, deregulation continues to affect the structure of the airline industry. Dramatic changes in the level and nature of demand for air travel, combined with an equally dramatic evolution in how airlines meet that demand, have forced a drastic restructuring of the industry. Airlines have experienced greatly diminished pricing power since 2000. Profitability, therefore, depends on which airlines can most effectively compete on cost. This development has created inroads for low cost airlines and forced wrenching change on legacy airlines that long competed using a high-cost business model. The historically high number of airline bankruptcies and liquidations is a reflection of the industry’s inherent instability. However, these events should not be misinterpreted as a cause of the industry’s instability. There is no clear evidence that bankruptcy has contributed to the industry’s economic ills, including overcapacity and underpricing, and there is some evidence to the contrary. Equally telling is how few of the airlines that have filed for bankruptcy protection are still doing business. Clearly, bankruptcy has not afforded these companies a special advantage. Bankruptcy has become a well-traveled path by which some legacy airlines are seeking to shed some of their costs and become more competitive. 
However, the termination of pension plan obligations by US Airways and United Airlines has had substantial and widespread effects on PBGC and on thousands of airline employees, retirees, and other beneficiaries. The recent filings by Delta Air Lines and Northwest Airlines only exacerbate these concerns. Liquidity problems, including $10.4 billion in near-term pension contributions, may force additional legacy airlines to follow suit. Some airlines are seeking legislation to allow more time to fund their pensions. If their plans are frozen so that their liabilities do not continue to grow, allowing an extended payback period may reduce the likelihood that these airlines will file for bankruptcy and terminate their pension plans in the coming year. However, unless these airlines can reform their overall cost structures and become more competitive with low cost competition, this change will be only a temporary reprieve. We have previously reported that Congress should consider broad pension reform that is comprehensive in scope and balanced in effect. Revising plan funding rules is an essential component of comprehensive pension reform. For example, we recently testified that Congress should consider the incentives that pension rules and reform may have on other financial decisions within affected industries. Under current conditions, the presence of PBGC insurance may create certain “moral hazard” incentives—struggling plan sponsors may place other financial priorities above “funding up” their pension plans because they know PBGC will pay guaranteed benefits. Furthermore, because PBGC generally takes over underfunded plans of bankrupt companies, PBGC insurance may create an additional incentive for troubled firms to seek bankruptcy protection, which in turn may affect the competitive balance within the industry. We provided a draft of this report to DOT and PBGC for their review and comment. 
DOT and PBGC officials provided some technical and clarifying comments that we incorporated as appropriate. DOT declined to provide written comments, and PBGC’s written comments appear in appendix III. We also provided selected portions of a draft of this report to the Air Transport Association to verify the presentation of factual material. We incorporated their technical clarifications as appropriate. We are providing copies of this report to the Secretary of Transportation, the Executive Director of PBGC, and other interested parties and will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at 202-512-2834, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors are listed in appendix IV. To examine the role of bankruptcy in the airline industry, we drew on information from a variety of sources. We interviewed airline officials, representatives of airline trade associations, representatives of law firms with significant experience in representing different parties involved in airline bankruptcies, credit and equity analysts, academic experts, and private consultants. We reviewed relevant research obtained from these and other sources. We interviewed government experts from the Department of Transportation (DOT) and its agencies—the Federal Aviation Administration (FAA) and the Bureau of Transportation Statistics (BTS). To determine the financial state of the airlines and the extent to which airlines were able to reduce costs during bankruptcy, we analyzed DOT Form 41 data. We obtained these data from BACK Aviation Solutions, a private contractor that GAO has contracted with to provide DOT Form 41 and other aviation data. 
To assess the reliability of these data, we reviewed the quality control procedures applied to the data by DOT and BACK Aviation Solutions and subsequently determined that the data were sufficiently reliable for our purposes. To examine the prevalence and length of airline bankruptcies and make comparisons with other industries, we obtained data from two databases: New Generation Research’s bankruptcydata.com and Professor Lynn M. LoPucki’s Bankruptcy Research Database. To assess the reliability of these data, we reviewed the quality control procedures applied to each data source and subsequently determined that the data were sufficiently reliable for our purposes. To assess whether bankruptcies are harming the airline industry, we reviewed relevant research, interviewed experts, and analyzed historical data on bankruptcies. We interviewed airline officials, representatives of airline trade associations and law firms with significant experience in representing different parties involved in airline bankruptcies, airline industry credit and equity analysts, academic experts, and private consultants. We also reviewed relevant research obtained from these and other sources. In addition, we interviewed government experts from DOT, FAA, and BTS. We also contracted with InterVISTAS-ga2, a private consulting firm, to analyze changes in air service and fares at six hub cities where an airline exited or significantly reduced its service. The cities were Colorado Springs, Colorado; Columbus, Ohio; Greensboro, North Carolina; Kansas City, Missouri; Nashville, Tennessee; and St. Louis, Missouri. InterVISTAS-ga2’s analysis included an examination of changes in capacity (as measured by available seat miles, a common measure of the available capacity in a market) and in passenger traffic (from 4 quarters before to 8 quarters after the airline left a given market or significantly reduced its operations there). 
InterVISTAS-ga2 used DOT airline data for this analysis; we reviewed the quality control procedures InterVISTAS-ga2 and DOT applied to these data to assess their reliability and determined that they were sufficiently reliable for our purposes. To assess the effect of airline pension underfunding on employees, airlines, and the Pension Benefit Guaranty Corporation (PBGC), we relied on a variety of sources. We drew on an extensive body of work that we have completed on private pension issues. We also interviewed airline officials, representatives of airline trade associations and airline labor unions, airline industry credit and equity analysts, academic experts, and officials from PBGC, DOT, FAA, and BTS. We reviewed relevant research obtained from these and other sources. To examine the current and historical financial status of airline pension plans, we reviewed data from PBGC (from Forms 5500 and 4010) and Securities and Exchange Commission (SEC) filings, including funding contributions, funding status, and estimated future funding contribution requirements. To examine the effect of pension funding requirements on the financial status and cost competitiveness of airlines, we analyzed DOT Form 41 data obtained from BACK Aviation Solutions. To assess the reliability of these data, we reviewed the quality control procedures applied to the data by DOT and BACK Aviation Solutions and subsequently determined that the data were sufficiently reliable for our purposes. We performed our work from September 2004 through September 2005 in accordance with generally accepted government auditing standards. For more in-depth information on what has occurred at hubs when carriers have significantly reduced their presence, we contracted with InterVISTAS-ga2, an aviation consulting firm, to collect and analyze data on changes in capacity, as measured in available seat miles (ASM), and traffic, including both local (origin and destination) and total traffic.
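The core computation in this analysis (comparing the 4-quarter traffic average from before a withdrawal event with the 4-quarter averages 1 and 2 years after it, dating the event to the quarter with the sharpest downturn) can be sketched in a few lines. This is a simplified illustration with made-up quarterly figures, not InterVISTAS-ga2's actual code or data:

```python
# Hypothetical quarterly traffic for one hub: 4 quarters before and
# 8 quarters after a service withdrawal (12 quarters in all).
traffic = [100, 104, 102, 103, 78, 70, 68, 66, 67, 69, 70, 72]

# Date the withdrawal to the quarter with the greatest downturn.
drops = [traffic[i] - traffic[i - 1] for i in range(1, len(traffic))]
event = drops.index(min(drops)) + 1  # quarter index of the sharpest decline

def avg(xs):
    return sum(xs) / len(xs)

# 4-quarter average before the event vs. the 4-quarter averages
# ending 1 year and 2 years after it.
before = avg(traffic[event - 4:event])
after_1yr = avg(traffic[event:event + 4])
after_2yr = avg(traffic[event + 4:event + 8])

pct_change = lambda new, old: 100.0 * (new - old) / old
print(f"change 1 year after the event:  {pct_change(after_1yr, before):+.1f}%")
print(f"change 2 years after the event: {pct_change(after_2yr, before):+.1f}%")
```

With these invented figures the sketch reports declines of roughly 31 and 32 percent relative to the pre-event average; the case studies that follow report the analogous measured figures for each hub.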
During preliminary analysis and consultations, we screened out cases older than 10 years and eliminated others for which sufficient data were not available (thereby excluding, for example, the actions taken by US Airways at Pittsburgh in the latter half of 2004, because not enough time had passed to review these actions’ possible effects on the market). Consequently, we selected the following six cases for examination:

Colorado Springs, Colorado—Western Pacific moved its operations to Denver (1997).
Columbus, Ohio—America West eliminated its hub (2003).
Greensboro, North Carolina—Continental Lite service was dismantled (1995).
Kansas City, Missouri—Vanguard Airlines ceased service (2002).
Nashville, Tennessee—American Airlines eliminated its hub (1995).
St. Louis, Missouri—TWA was acquired by American Airlines (2001).

To eliminate the effects of seasonality, changes were measured from 4 quarters before to 8 quarters after an event, for a total of 12 quarters of data. We asked InterVISTAS-ga2 to provide us with benchmark industry data for the same periods. To determine changes in capacity and traffic, InterVISTAS-ga2 used data reported by airlines to DOT. InterVISTAS-ga2 calculated 4-quarter averages for each data element and determined percentage changes in these averages 1 and 2 years after the event. Because dehubbing, or withdrawing from a market, might occur over a period of time, there was no single “bright line” when the withdrawal occurred for most of these cases, so InterVISTAS-ga2 determined that the effective quarter of the withdrawal was generally the quarter with the greatest downturn in traffic. To determine whether a destination received service from a hub, we obtained and reviewed the number of departures reported to DOT for the first 4 quarters and the last 4 quarters of the period under review for each hub city and for each carrier.
If a destination received at least 80 departures in a quarter from any one carrier (roughly the equivalent of daily service, allowing for less service on weekends), we counted it as having received service. To determine whether small community destinations suffered losses of service when these hub cities were deemphasized, we assigned hub sizes to community airports on the basis of the Federal Aviation Administration’s (FAA) hub designation list for the corresponding calendar year. We defined small community airports as small and nonhub airports that are not located in major metropolitan areas. Colorado Springs served as the hub for Western Pacific Airlines, a low fare airline that flew medium-haul routes from April 1995 to June 1997. By June 1995, the airline was flying an average of 14 departures daily. Western Pacific chose Colorado Springs because it believed the airport could be an effective alternative to Denver International. In June 1997, Western Pacific, which was then operating 32 departures daily from Colorado Springs, left Colorado Springs to establish a hub at Denver. However, the airline filed for chapter 11 bankruptcy protection on October 5, 1997, and shut down in February 1998. Western Pacific’s departure from Colorado Springs in June 1997 resulted in significantly lower capacity and traffic. When Western Pacific left, a significant amount of capacity was taken from the market, resulting in decreased total traffic. (See fig. 19.) Local traffic also decreased significantly, by 43.6 percent. No small communities had received nonstop service out of Colorado Springs during this period, so none were directly affected by Western Pacific’s move to Denver. (See fig. 20.) America West began service at Columbus, Ohio, in December 1991—6 months after its June 1991 chapter 11 bankruptcy filing—with 5 daily departures. During February 2003, America West announced its plans to eliminate the Columbus hub operations. 
At that time, America West mainline was operating 9 daily departures out of Columbus. The airline reported the hub had lost $25 million annually and indicated that the elimination of the hub was part of America West’s response to difficult economic conditions. By February 2004, America West mainline was operating 4 daily departures from Columbus. The elimination of America West’s hub operations at Columbus, Ohio, had little effect, since the carrier’s mainline had captured less than 15 percent of total traffic before it withdrew. Therefore, decreases in capacity and increases in total traffic were negligible. Total traffic increased slightly overall because Southwest was increasing its capacity. (See fig. 21.) However, this increase did not offset the 4.2 percent decline in local traffic. No small communities were served nonstop out of Columbus by America West mainline. (See fig. 22.) Greensboro was one of the focus cities for Continental’s point-to-point, short-haul, no-frills, low-fare “Continental Lite” (CALite) service initiated in the eastern United States in October 1993. Continental quickly ramped up service from 3 departures per day to a high of 74 per day by September 1994. However, after operational problems and financial losses, Continental decided to dismantle the CALite service in 1995. In June 1995, the airline was offering 52 daily departures from Greensboro. By June 1998, Continental had reduced that number to 6. Dismantling the CALite service resulted in less overall capacity and traffic at Greensboro. Greensboro’s overall capacity decreased despite capacity increases by other airlines. Total traffic decreased nearly 30 percent with the reduction of the CALite service. (See fig. 23.) Local traffic decreased 10.7 percent. Continental served 21 markets nonstop before it dismantled the Greensboro hub; four of these were small community markets.
After the airline decreased its capacity at Greensboro, it continued nonstop service to its three hubs but cancelled nonstop service to the small communities. (See fig. 24.) Vanguard Airlines began operating in 1994 as a low fare carrier with 2 departures daily and eventually operated a hub in Kansas City, Missouri, serving 13 percent of the city’s passengers. On July 30, 2002, the airline ceased operations and filed for chapter 11 bankruptcy protection after being denied a federal loan guarantee by the Air Transportation Stabilization Board. When the company stopped operating, it had been flying 33 departures daily out of Kansas City. When Vanguard abruptly exited the Kansas City market, overall capacity and thus traffic declined somewhat. Vanguard had a 13 percent market share to Southwest’s 36 percent; Southwest had cut its own capacity out of Kansas City during the same period, while other carriers overall had increased their capacity slightly. (See fig. 25.) Local traffic decreased 6.8 percent. Vanguard served only one small community at the time it exited Kansas City, and during the period of our review no other carriers served that community from Kansas City, so one small community lost air service to Kansas City as a result of Vanguard’s demise. (See fig. 26.) Nashville was one of six American Airlines hubs. The airline opened the hub in April 1986, and at its peak in January 1992, it operated 135 daily departures out of Nashville. In December 1994, just before it started dismantling the Nashville hub, it reduced daily departures to 80. By December 1996, American had further reduced its service at Nashville to 22 daily departures. When American dismantled its Nashville hub, overall capacity and total traffic declined. Other airlines increased their capacity and their traffic substantially when American decreased its service.
However, because American had been so dominant at Nashville, these increases did not fully offset its cuts, and overall traffic declined slightly. (See fig. 27.) Local traffic, however, increased 28 percent. Southwest increased its share of Nashville’s traffic from 13 percent the year before American pulled out to 33 percent 2 years later. When American Airlines dehubbed at Nashville, few small communities were among those receiving service. As a result of the carrier’s actions, fewer total destinations—and just one small community—received nonstop air service from that city. American and American Eagle had served 32 of the 44 total nonstop destinations out of Nashville, and 2 years later, American served 7 of 34 total destinations. In the year before American’s dehubbing at Nashville, eight small communities were served out of Nashville, five of which were served by American and American Eagle. Two years later, American and American Eagle had eliminated their small community service from Nashville; another carrier maintained service to one small community. (See fig. 28.) When Trans World Airlines (TWA) filed for bankruptcy protection for the third time on January 10, 2001, the airline had been operating a domestic hub out of St. Louis and offering 324 departures daily. By the end of that year, TWA—which had reduced its daily departures to 281—had been acquired by American Airlines. American departures out of St. Louis in 2001 decreased from 17 daily in January to 4 daily in December. In January 2002, American departures increased to 286 daily with the acquisition of TWA. With American’s takeover of TWA, capacity rose slightly in St. Louis while total traffic decreased. The decrease in total traffic occurred in spite of American’s dramatic increase in traffic as it took over TWA. (See fig. 29.) Local traffic, meanwhile, declined 6.1 percent overall. While TWA served a total of 27 small communities before the acquisition, 11 of these were also served by American Airlines.
Of the 16 markets that TWA served alone, American maintained service to 13 after the acquisition. Overall, however, more small communities received nonstop service from St. Louis after American acquired TWA. (See fig. 30.) In addition to those named above, Joseph Applebaum, Paul Aussendorf, Barbara Bovbjerg, Anne Dilger, David Eisenstadt, Charles J. Ford, David Hooper, Charles A. Jeszeck, Ron La Due Lake, Steven Martin, Scott McNulty, George Scott, Richard Swayze, Roger J. Thomas, and Pamela Vines made key contributions to this report.

Private Pensions: The Pension Benefit Guaranty Corporation and Long-Term Budgetary Challenges. GAO-05-772T. Washington, D.C.: June 9, 2005.
Private Pensions: Government Actions Could Improve the Timeliness and Content of Form 5500 Pension Information. GAO-05-294. Washington, D.C.: June 3, 2005.
Highlights of a GAO Forum: The Future of the Defined Benefit System and the Pension Benefit Guaranty Corporation. GAO-05-578SP. Washington, D.C.: June 1, 2005.
Private Pensions: Recent Experiences of Large Defined Benefit Plans Illustrate Weaknesses in Funding Rules. GAO-05-294. Washington, D.C.: May 31, 2005.
Commercial Aviation: Legacy Airlines Must Further Reduce Costs to Restore Profitability. GAO-04-836. Washington, D.C.: August 11, 2004.
Private Pensions: Publicly Available Reports Provide Useful but Limited Information on Plans’ Financial Condition. GAO-04-395. Washington, D.C.: March 31, 2004.
Private Pensions: Multiemployer Plans Face Short- and Long-Term Challenges. GAO-04-423. Washington, D.C.: March 26, 2004.
Private Pensions: Timely and Accurate Information Is Needed to Identify and Track Frozen Defined Benefit Plans. GAO-04-200R. Washington, D.C.: December 17, 2003.
Pension Benefit Guaranty Corporation: Single-Employer Pension Insurance Program Faces Significant Long-Term Risks. GAO-04-90. Washington, D.C.: October 29, 2003.
Since 2001 the U.S. airline industry has lost over $30 billion. Delta, Northwest, United, and US Airways have filed for bankruptcy, the latter two terminating and transferring their pension plans to the Pension Benefit Guaranty Corporation (PBGC). The net claim on PBGC from these terminations was $9.7 billion; plan participants lost $5.3 billion in benefits (in constant 2005 dollars). Considerable debate has ensued over airlines' use of bankruptcy protection as a means to continue operations. Many in the industry have maintained that airlines' use of this approach is harmful to the industry. This debate has come into even sharper focus with pension defaults. Critics argue that by not having to meet their pension obligations, airlines in bankruptcy have an advantage that may encourage other companies to take the same approach. At the request of the Congress, we have continued to assess the financial condition of the airline industry and focused on the problems of bankruptcy and pension terminations. This report details: (1) the role of bankruptcy in the airline industry, (2) whether bankruptcies are harming the industry, and (3) the effect of airline pension underfunding on employees, airlines, and the PBGC. DOT and PBGC agreed with this report's conclusions. GAO is making no recommendations. Bankruptcy is endemic to the airline industry, owing to long-standing structural challenges and weak financial performance. Structurally, the industry is characterized by high fixed costs, cyclical demand for its services, and intense competition. Consequently, since deregulation in 1978, there have been 162 airline bankruptcy filings, 22 of them in the last five years. Airlines have used bankruptcy in response to liquidity pressures and as a means to restructure their costs. Our analysis of major airline bankruptcies shows mixed results: most, but not all, airlines were able to significantly reduce costs.
However, bankruptcy is not a panacea for airlines. Few have emerged from bankruptcy and are still operating. There is no clear evidence that airlines in bankruptcy keep capacity in the system that otherwise would have been eliminated, or harm the industry by lowering fares below what other airlines charge. While the liquidation of an airline may reduce capacity in the near term, capacity returns relatively quickly. In individual markets where a dominant carrier significantly reduces operations, other carriers expand capacity to compensate. Several studies have found that airlines in bankruptcy have not reduced fares and that rival airlines were not harmed financially. The defined benefit pension plans of the remaining airlines with active plans are underfunded by $13.7 billion, raising the potential of more sizable losses to PBGC and plan participants. These airlines face an estimated $10.4 billion in minimum pension contribution requirements over the next 4 years, significantly more than some of them may be able to afford given their continued operating losses and other fixed obligations. While spreading these contributions over more years would relieve some of these airlines' liquidity pressures, it does not ensure that they will avoid bankruptcy because it does not fully address other fundamental structural problems, such as high fixed costs.
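The liquidity relief from spreading contributions over more years can be illustrated with standard level-payment amortization arithmetic. This is a sketch only: the 6 percent rate and the payback horizons below are hypothetical, and actual minimum contributions are set by ERISA funding rules rather than a simple annuity formula.

```python
def level_payment(principal, annual_rate, years):
    """Annual level payment that amortizes `principal` over `years`
    at `annual_rate` (the standard annuity formula)."""
    if annual_rate == 0:
        return principal / years
    return principal * annual_rate / (1 - (1 + annual_rate) ** -years)

# Treat the airlines' roughly $10.4 billion in near-term required
# contributions as a single shortfall paid down over different horizons.
shortfall = 10.4e9
for years in (4, 14, 25):
    payment = level_payment(shortfall, 0.06, years)
    print(f"{years:>2}-year payback: ${payment / 1e9:.2f} billion per year")
```

Stretching the payback from 4 years to 14 years cuts the annual payment by almost two-thirds, which is the relief being sought; the underlying obligation is not reduced, only deferred.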
CERCLA, often referred to as the “Superfund” law, gave the federal government the authority to respond to actual and threatened releases of hazardous substances, pollutants, and contaminants that may endanger public health and the environment. EPA established the Superfund program to carry out these responsibilities. Data as of September 2011—the most current data available—show that there were 13,856 sites in EPA’s CERCLIS active site inventory, which may require attention under EPA’s Superfund program. Management of these sites, including the special accounts associated with them, has historically been the responsibility of the EPA region in which a site is located. EPA has 10 regional offices, each one responsible for the execution of EPA programs within several states and, in some regions, territories. Figure 1 shows the states included in each of the 10 regions. This section discusses (1) EPA’s process for cleaning up Superfund sites, (2) EPA’s enforcement process for site cleanup, (3) the Trust Fund established under CERCLA, (4) EPA’s use of special accounts for Superfund cleanup, and (5) the EPA IG’s recommendations for better management of these special accounts. EPA’s Superfund cleanup process can be lengthy, sometimes taking decades to clean up contamination to the standards selected for a site. The cleanup process involves a series of steps during which specific activities take place or decisions are made. The first step occurs when the Superfund program is notified of a potential site through various mechanisms, including receipt of citizens’ petitions, and referrals or notifications from states, tribes, and other federal agencies. Following notification, a site undergoes a minimal screening process, called a pre-CERCLIS screening, to determine whether a site assessment process is appropriate. Sites deemed appropriate are added to the CERCLIS active site inventory.
During the site assessment process, EPA and states collect data to identify, evaluate, and rank hazardous waste sites based on Hazard Ranking System criteria. Using these criteria, EPA and/or its state and tribal partners conduct a preliminary assessment and, if warranted, a site inspection or other more in-depth assessment to determine whether the site warrants short- or long-term cleanup attention. Sites that EPA determines are among the nation’s most seriously contaminated hazardous waste sites are placed on the National Priorities List (NPL) for attention under the federal Superfund program. Cleanup work under CERCLA generally involves two categories of actions: short-term removal actions that address immediate threats to human health and the environment, and long-term remedial actions that aim to permanently or significantly reduce contamination. Only sites on the NPL are eligible for Trust Fund-financed remedial actions, but sites not listed on the NPL may be remediated with private funds, in some instances with EPA oversight. EPA conducts removal actions at both NPL and non-NPL sites. EPA or a PRP will begin the remedial process by conducting a two-part study of the site: (1) a remedial investigation to characterize site conditions and assess the risks to human health and the environment, among other actions, and (2) a feasibility study to evaluate various options to address the problems identified through the remedial investigation. The culmination of these studies is a record of decision, which identifies the selected remedy for addressing the site’s contamination and a cost estimate for implementing the remedy. EPA or the PRP may develop preliminary estimates of construction costs and, as the site moves from the study phase into the remedial action phase, a more accurate cost estimate may be developed.
The method of implementation for the selected remedy is then developed during remedial design and implemented during the remedial action phase, when actual cleanup of the site occurs. When all construction of the cleanup remedy at a site is finished, all immediate threats have been addressed, and all long-term threats are under control, EPA generally considers the site to be “construction complete.” Sites where additional work is required after construction is completed then enter into the postconstruction phase, which includes actions such as operation and maintenance and conducting 5-year reviews. When EPA in consultation with the state determines that no further site response is appropriate, then EPA may delete the site from the NPL. Figure 2 illustrates the typical Superfund process for cleaning up a site. Thus, EPA may incur a variety of costs in implementing the Superfund program at particular sites. EPA may spend funds to investigate and clean up sites, including short-term removals at any site, and long-term remedial actions at NPL sites. EPA may also incur costs for oversight associated with a site cleanup where a private party is conducting and funding the cleanup. EPA may enter into agreements with PRPs for those parties to conduct cleanups, compel site cleanups by PRPs, or conduct cleanups itself and seek reimbursement for its costs from those parties. EPA’s enforcement of environmental cleanup at Superfund sites begins with the identification of the PRPs, usually early in the cleanup process; continues throughout site cleanup; and often does not conclude until after the site is declared construction complete. EPA identifies PRPs by, among other actions, reviewing documentation related to the site; conducting interviews with government officials or other knowledgeable parties; performing historical research on the site; sampling soil or groundwater at the site; and requesting additional information from relevant parties. 
In addition to identifying PRPs, EPA attempts to obtain information on the type and amount of hazardous substances shipped to a site by each party and any financial constraints faced by the identified parties. EPA may begin a cleanup process before it has identified PRPs. However, once it identifies PRPs, it typically seeks to reach a settlement with them on their cleanup responsibilities and/or their payment for cleanup costs that EPA incurs. These negotiations generally may take place at any time throughout the site cleanup process. We have previously found that in reaching these settlements, EPA’s and the PRPs’ decisions are influenced by site-specific characteristics and other key considerations, such as the expected cost of site cleanup, the strength of EPA’s evidence of PRP liability, and the number and type of other PRPs. CERCLA established the Trust Fund to support Superfund program activities. EPA generally can use appropriated monies from the Trust Fund for short-term cleanups and for long-term cleanups of NPL sites. For example, EPA may elect to use such funds at sites for which the parties responsible for site contamination cannot be found or are unwilling or unable to clean up a site, to initiate work pending settlement, or in an emergency. Historically, the Trust Fund received revenue from four major sources: taxes on crude oil and certain chemicals, as well as an environmental tax assessed on corporations based on their taxable income; transfers via appropriations from the general fund of the Treasury; fines, penalties, and recoveries from PRPs; and interest earned on the balance of the Trust Fund. In 1995, the authority for the taxes expired and, as of November 2011, had not been reinstated. As of 2011, the Trust Fund’s primary source of revenue is the transfer from the general fund of the Treasury.
At the end of fiscal year 2010, the Trust Fund had total assets and liabilities of $3.74 billion, with nearly 55 percent of that total in special accounts. Section 122(b)(3) of CERCLA allows EPA to retain and use funds received pursuant to an agreement with a PRP for purposes of carrying out an agreement. EPA retains those funds in subaccounts of the Trust Fund called “special accounts.” As part of the settlement, those funds placed in a special account may be used for that specific site or may be transferred by EPA to the general portion of the Trust Fund. EPA’s goal in establishing special accounts is to preserve the use of annual congressionally appropriated funds for cleanup at sites without a viable PRP. EPA regions are encouraged to create and use special accounts as an incentive to secure PRP cleanups and to fund EPA’s cleanup when it has lead responsibility. EPA officials said that they believe that PRPs are more willing to settle when assured that their settlement money will generally be used at the site where they hold liability, rather than at another site. According to EPA guidance, regions should strive to use model (standardized) settlement language to establish special accounts. The model language is intended to allow EPA flexibility in deciding for what specific response actions special account funds can be used and therefore when to use these funds. It allows EPA to use the funds for a response action at the site associated with the account, and EPA guidance states that special account funds are site-specific and are generally not available for EPA to use at other sites. The model language also retains EPA’s authority to transfer funds from a special account to the general portion of the Trust Fund for future appropriation by Congress. EPA guidance notes that the language of the actual agreement governs EPA use of a particular special account’s funds. 
Generally, funds may be deposited in a special account regardless of whether the settling party is performing the work. According to EPA, the agency typically receives funds as a result of agreements entered when the PRPs are unable or unwilling to perform the response action, as is the case in a bankruptcy or an “ability to pay” settlement for parties facing financial difficulties. EPA may also determine that the hazardous substance contributed by a particular PRP was minimal in amount and toxicity compared with other substances at a site and therefore allow that party a de minimis settlement. In addition, PRPs who are conducting some response actions may make payments to EPA to address past or future response actions. EPA’s costs of overseeing the PRPs’ implementation of the work are usually included in future response costs. These payments may be made for the estimated amount of oversight or on a periodic basis. However, under its guidance, EPA is only to establish a special account where future cleanup work remains at a site. According to EPA officials, they prefer to establish one centralized special account per site because this generally allows them to more easily manage funds for a site, but certain situations may require more than one account for a site. For example, multiple special accounts for one site may be established for amounts that EPA will provide or disburse to PRPs who agree to perform the response work (serving as a settlement incentive for the PRPs to perform the work), or for each separate operable unit or different response action at a site. Once settlement proceeds are deposited in a special account, EPA regional staff enter plans for the use of those funds into CERCLIS.
According to EPA guidance, regional staff, such as the sites’ regional remedial project managers, are to evaluate the planned uses of special account funds on an ongoing basis, as warranted by site activity, to ensure that these resources are used efficiently and effectively and make corresponding changes to their planned use as appropriate. The regional staff are to consider both the short- and long-term plans for the site; thus they often plan several fiscal years in advance. According to the guidance, estimates of EPA’s future response costs at a site should be based on the best information available at a given point in time and the best professional judgment of regional staff. Various EPA groups, including regional counsel, regional program management, regional finance, and headquarters staff are all involved in this planning process. In general, according to EPA guidance, special account funds should be used prior to annual congressional appropriations. This guidance establishes priorities for the use of special account funds, referred to as the General Hierarchy of Special Account Use. According to this hierarchy, funds in special accounts should be:

used to facilitate settlement with PRPs for response actions;
used to fund EPA’s costs for response actions;
reclassified to reimburse previous EPA site expenditures made from annual congressional appropriations (reclassification is available when an EPA region reasonably estimates that the special account contains more funds than are needed to address all known and potential future work at the site); funds made available from reclassification may be used by EPA at another Superfund site for the same category of expenditure as the costs being reimbursed; and
transferred to the general portion of the Trust Fund, when reclassification has already been considered and is not appropriate, and the special account balance exceeds the estimated known and potential future cleanup costs at that site.
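The excess-funds steps at the bottom of the hierarchy amount to a simple ordering, sketched below with hypothetical dollar amounts. The function and figures are illustrative only; actual determinations rest on settlement language, EPA guidance, and the professional judgment of regional staff, not a formula.

```python
def disposition(balance, future_costs, prior_appropriated_costs):
    """Illustrative sketch of the excess-funds steps in the General
    Hierarchy: reserve funds for known and potential future site work
    first; reclassify excess up to the site's previous expenditures made
    from annual appropriations; transfer any remainder to the general
    portion of the Trust Fund for future appropriation."""
    reserve = min(balance, future_costs)        # kept for future site work
    excess = balance - reserve                  # funds beyond estimated needs
    reclassify = min(excess, prior_appropriated_costs)
    transfer = excess - reclassify              # needs future appropriation to reuse
    return {"reserve": reserve, "reclassify": reclassify, "transfer": transfer}

# Hypothetical account: $12 million balance, $9 million of estimated
# future work, $2 million of prior appropriated expenditures at the site.
print(disposition(12e6, 9e6, 2e6))
```

Under these made-up figures, $9 million stays reserved for the site, $2 million could be reclassified for use at another site, and $1 million could be transferred to the general portion of the Trust Fund.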
In contrast to reclassified funds, transferred funds require a future congressional appropriation to make the funds available for use by EPA. Typically, EPA closes special accounts when (1) all site work has been completed; (2) no funds are left in the account, and no future deposits are anticipated; or (3) EPA does not anticipate incurring any additional costs at those sites. When an account is closed, any remaining funds in the special account are transferred to the general portion of the Trust Fund to increase the balance available for future appropriation for cleanups. Generally, funds that are not used for future response work at a site are reclassified or transferred to the general portion of the Trust Fund, rather than returned to PRPs, unless the settlement specifically provides that they should be returned to PRPs. In a 2009 report, EPA’s Office of Inspector General (IG) found that no single office in EPA headquarters or the regions was responsible for managing, overseeing, and coordinating special accounts work. In addition, the IG found that EPA headquarters did not have a structured approach for following up on regional plans to use special account funds to ensure that they were being managed correctly and that EPA lacked detailed guidance and policy on the proper use, management, and monitoring of special accounts funds. As a result, among other things, the IG recommended that EPA designate a central management official for special accounts with responsibility for developing an action plan to ensure that management accountability and related issues regarding special accounts were addressed.
The IG stated that this action plan should include, among other things, (1) a process for ensuring completed CERCLIS reports with accurate special accounts data to manage the program and improve performance; (2) an annual planning process—including a determination that regional special account funds will be used consistent with the General Hierarchy—to aid in monitoring special accounts; (3) development of headquarters and regional controls that include follow-up to make sure planned or requested uses (e.g., reclassifications and transfers to the Trust Fund of special account funds) were conducted; and (4) establishment of guidance and policy that addresses the proper application and amount of special account funds that should be reserved for future use. In addition, the IG recommended that EPA regularly analyze the “oldest accounts” for opportunities to better use special account funds. From fiscal year 1990 through October 2010, EPA collected from PRPs over $3.7 billion that it placed in 1,023 special accounts. Nearly half of these funds—$1.8 billion—are still available to be obligated for future Superfund cleanup; the remaining funds—$1.9 billion—have already been obligated, but not all of these obligated funds have been disbursed. In addition, during this period, EPA regions reclassified about $131 million from 96 special accounts to pay for previous EPA site expenditures, transferred about $14 million from 39 accounts to the general portion of the Trust Fund, and closed 76 accounts. As of October 2010, of the $3.7 billion that it placed in 1,023 special accounts, EPA held nearly $1.8 billion in unobligated funds in 947 open accounts—accounts that have funds available for use in future cleanup responses at specific sites—at 769 Superfund sites; 503 of these sites are currently on the NPL. The number of special accounts increased significantly from 2001 through 2010: 854 of the 1,023 accounts, or 83 percent, were established during this period.
Table 1 shows these accounts by region, with the number of open accounts, sites, NPL designation, and unobligated funds. The majority of available special account funds are concentrated in a small number of special accounts. As of October 2010, 33 open accounts, or 3 percent, had a total of $1 billion available, or 61 percent of the total amount available in special accounts. Table 2 below shows the number of open accounts that have an available balance of less than $500,000, from $500,000 to $10 million, and greater than $10 million. As of October 2010, EPA had plans to obligate 99.8 percent of the $1.8 billion available in special accounts, according to our analysis of EPA CERCLIS data. EPA tracks plans for unobligated special account funds in CERCLIS by three categories: (1) planned obligations; (2) additional reserved uses (estimated costs) not captured as planned obligations; and (3) amounts for work parties (e.g., generally PRPs who have agreed to conduct response work under a settlement agreement) that are included in settlements but have not yet been distributed to them. Specifically: Planned obligations are costs anticipated by EPA to be incurred in association with specific site response actions. Planned obligations are grouped into five categories: removal and removal support, pipeline operations, remedial action, enforcement, and federal facilities. Additional reserved uses are regional staff’s estimated costs for possible or long-term future actions. This category includes 14 different types of potential uses, such as 5-year reviews. Reclassifications and transfers to the Trust Fund are also included in this category, but EPA prefers to break out this information separately for the purpose of evaluating the data in management reviews. Amounts for work parties refer to funds promised in settlements to parties performing the cleanup work at the site; these are amounts that were used as a settlement incentive in negotiations with potential work parties.
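Table 2's bands partition open accounts by available balance. As a minimal sketch of that bucketing (the band edges are taken from the table; the function name is ours, not EPA's):

```python
def balance_band(available_dollars: float) -> str:
    """Assign an account's available balance to one of the three
    Table 2 bands (band edges per the report)."""
    if available_dollars < 500_000:
        return "less than $500,000"
    if available_dollars <= 10_000_000:
        return "$500,000 to $10 million"
    return "greater than $10 million"
```

Under this partition, the 33 accounts in the top band hold 61 percent of all available special account funds, which is why the report singles them out.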
Figure 3 shows, as of October 2010, how EPA planned to use unobligated funds in special accounts. Planned obligations were made for 64.9 percent—$1.16 billion—of the $1.8 billion in unobligated funds at the beginning of fiscal year 2011, according to our analysis of EPA data. Funds designated for remedial action and pipeline operations made up the largest portions of the $1.16 billion. Approximately $671 million, or 58 percent of these funds, were planned for remedial action, and $418 million, or 36 percent of these funds, were planned for pipeline operations. According to EPA headquarters officials, in some instances, funds for anticipated long-duration Superfund cleanup actions may be planned as much as 40 or 50 years in advance, when there are enough funds in an account to plan that far in advance. Regional data indicate that, of the available special account resources that have planned obligations, more than half are planned to be obligated from fiscal years 2011 through 2013 and, according to EPA officials, 95 percent will be obligated by fiscal year 2022. However, EPA regional officials told us that the special account planned obligations are estimates rather than commitments. According to headquarters and regional officials, planned funds may not be used for their original planned use or may not be used in the originally designated fiscal year for a number of reasons, such as unforeseen issues that arise with Superfund cleanups, especially during the remedial investigation and remedial actions; EPA regional staff responding to national and regional emergencies; or site schedule changes. When special account funds planned for a prior fiscal year are not obligated as planned, EPA regional staff must either delete those plans or move them forward into a subsequent fiscal year.
Table 3 shows CERCLIS information on the number of special accounts and associated obligations planned by type of EPA cleanup activity, enforcement, and federal facilities for fiscal years 2011 through 2070. Additional reserved uses—$585 million, or 32.6 percent—made up the second largest portion of the national planned uses for all special account funds in fiscal year 2011. According to EPA officials, they have not yet entered these estimates as specific planned obligations in CERCLIS because the use of these funds is more challenging to predict or, in some cases, there are limitations on how obligations can be entered into CERCLIS. For example, only one 5-year review can be entered as a planned obligation in CERCLIS as a site financial transaction. The “outyear five year review” field may be used to enter estimates for subsequent 5-year reviews. Other funds under this category represent such items as a potential EPA work takeover from a PRP where the PRP does not have adequate or liquid financial assurance, and anticipated costs prior to determination of the final remedy for cleanup. This category also includes special account funds that are planned for reclassification or transfer to the Trust Fund, which we discuss in greater detail later. Amounts for work parties were approximately $42.2 million—or about 2.4 percent of the $1.8 billion in unobligated special account funds—as promised in settlements, as of the beginning of fiscal year 2011, according to EPA regions’ planning documents. These funds will be disbursed to work parties as they submit claims for reimbursement to EPA, in accordance with milestones established in the settlement documents. The unassigned remaining balance was approximately 0.2 percent of special account funds, or approximately $3 million, as of October 2010.
According to the national special accounts coordinator, EPA headquarters generally permits a small amount of the total balance of unobligated special account funds to remain unplanned. However, according to an EPA official, headquarters staff question regions when the unassigned remaining balance of a special account is generally more than 10 percent of its available balance, more than 10 percent of the total available funds for the region, or more than $100,000 per account. As of October 2010, of the total accumulated receipts of $3.7 billion in 1,023 special accounts, about $1.9 billion had been obligated for site-specific response work, according to our analysis of EPA data. Of this total, EPA had disbursed approximately $1.6 billion for Superfund cleanup expenses; the remaining $247 million in obligations had not yet been disbursed (i.e., unliquidated obligations). Furthermore, EPA regions have not disbursed any funds from 240 of the 947 open special accounts, with a total of $228 million available in these accounts. Twenty-five of these 240 accounts include unliquidated obligations. According to officials from regions that had large numbers of special accounts with no disbursements, there are numerous reasons for not having disbursed any funds from these accounts for site cleanup. For example, one regional official stated that funds are often deposited in special accounts early in the Superfund site cleanup process, sometimes years before cleanup at the site actually begins. Also, regions often retain funds in special accounts for contingency purposes if cleanup plans change (e.g., potential EPA work takeover at a later date). Another regional official stated that numerous special accounts in the region had recently received sizeable deposits from a large bankruptcy settlement, and the funds have been planned for but not obligated as of the time of our review.
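The headquarters review thresholds just described amount to a three-part check. A minimal sketch, assuming the thresholds EPA officials described above (the function name and inputs are ours, not EPA's):

```python
def questions_unassigned_balance(unassigned: float,
                                 account_available: float,
                                 region_available: float) -> bool:
    """True if headquarters staff would question a special account's
    unassigned remaining balance: more than 10 percent of the account's
    available balance, more than 10 percent of the region's total
    available funds, or more than $100,000 per account (all dollars)."""
    return (unassigned > 0.10 * account_available
            or unassigned > 0.10 * region_available
            or unassigned > 100_000)
```

Because the three conditions are joined by "or," even a small account is flagged once its unassigned balance exceeds $100,000, and even a modest dollar amount is flagged if it is a large share of the account's balance.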
Historically, EPA has conducted few reclassifications or transfers of funds to the general portion of the Trust Fund. Recently, however, EPA regions have begun to reclassify more funds from open special accounts. While all EPA regions reclassified about $131 million from 96 special accounts during the 20-year period from fiscal years 1990 through 2010, about $111 million, or about 85 percent, of this amount was reclassified during the last 3 years of this period. In addition, since fiscal year 1990, EPA has transferred about $14 million from 39 special accounts to the Trust Fund. According to EPA guidance, for reclassification and transfer, regions should provide planning estimates for the current fiscal year, as well as the two subsequent fiscal years. According to EPA data, for fiscal years 2011 through 2013, about 4.1 percent—$74 million—of the total $1.8 billion in unobligated special account funds were designated for reclassification or transfer to the Trust Fund. Of the $74 million, EPA regions plan to reclassify about $61 million, or about 82 percent, from special accounts and to transfer the remaining $13 million. EPA’s plans to reclassify or transfer funds may change as a result of changing site conditions throughout the Superfund cleanup process. For example, in fiscal year 2010, EPA planned to reclassify $43 million and transfer about $4 million; however, at the end of the fiscal year, EPA actually had reclassified only $26 million and transferred about $3 million. Tables 4 and 5 show, by region, the number and value of special accounts planned for reclassification and transfer for fiscal years 2011 to 2013, as of October 2010. EPA guidance states that reclassifications and transfers to the Trust Fund should not take place until it can be reasonably estimated that the special account contains more funds than are needed for remaining response actions at the site. 
Therefore, according to EPA officials, it is generally easier for the regions to determine if special account funds can be reclassified or transferred once response actions at a site are substantially complete. EPA officials said that they consider “Construction Complete,” “NPL Delete,” and “Post Construction” to be the three phases in the Superfund cleanup process when a site most likely can be considered substantially complete and, therefore, the funds in a special account may be considered for reclassification or transfer to the Trust Fund. As of October 2010, the cleanup activities at 297 of the 769 Superfund sites with open accounts were considered substantially complete, according to our analysis of EPA data, and therefore more likely to have funds eligible to be reclassified or transferred than sites in the earlier stages of the cleanup process. Table 6 presents the number of Superfund sites with special accounts in each phase or milestone of the cleanup process, from the remedial investigation/feasibility study to deletion from the NPL, as of October 2010. EPA requires special accounts to be closed when (1) all site work has been completed, (2) no funds are left in the account and no future deposits are anticipated, or (3) EPA does not anticipate incurring any additional costs at those sites. As of October 2010, EPA had closed 76 special accounts since the beginning of the program; these accounts were open an average of 7 years. According to our analysis of EPA data, EPA closed 9 special accounts from fiscal years 2000 through 2005 and 67 accounts from fiscal years 2006 through 2010, including 33 in fiscal year 2010. According to EPA officials, the steady increase in closures has occurred because the regions are improving their management of special accounts and closing accounts where no funds remain or funds are no longer needed for future work.
In response to the IG’s findings and recommendations, and EPA officials’ own recognition that the agency needed to provide better oversight of the special accounts process, EPA has implemented the following processes and policies in the last few years to better monitor and manage special accounts: (1) processes to better plan the use of special account funds, (2) increased oversight of special accounts by designating a national special accounts coordinator and a Special Accounts Senior Management Committee, and (3) strategies and guidance on how to plan for using and monitoring special accounts. To facilitate regional management and headquarters planning and review of special accounts, in 2008, EPA established a process to better track planned uses for special account funds. That is, it established a section in CERCLIS—referred to as the Special Account Management Screen (or planning screen)—that enables EPA regions to see and enter special account planning data into specific data fields and create reports so that both EPA headquarters and regional officials can monitor the special account balances against planned obligations for ongoing and future site-specific response activities. According to EPA officials, reports based on data entered into fields on this screen have allowed both EPA headquarters and regional staff to review the data to assure that, among other things, the agency maximizes opportunities to use, reclassify, or transfer these resources to the general portion of the Trust Fund over time. In the first few years of using this new planning screen, regional staff noticed that some funds did not easily fit into the specified categories in the screen, and they had to place the funds under other catchall fields, according to EPA headquarters officials. As a result, in December 2010, EPA added four new fields under the reserved use section and combined two categories into one.
These changes are intended to allow EPA headquarters staff to better track the regions’ funding plans and to comply with guidance issued in September 2010 clarifying how special account funds are to be planned and used. For example, EPA headquarters created a new field for funds reserved for a potential EPA takeover of the work if one or more PRPs who are performing the site cleanup work become insolvent; in such cases, EPA might have to fund necessary work with special account funds. According to officials we spoke with in one EPA region, these changes have helped reduce the number of questions and concerns from headquarters during its reviews of regions’ plans to allocate funds. Before the IG’s 2009 report, EPA had recognized that it needed to better oversee the regions’ management of special accounts, according to an EPA headquarters official. In 2008, an EPA staff person in the Office of Superfund Remediation and Technology Innovation—with assistance from staff from other offices such as the Office of Site Remediation Enforcement and Office of the Chief Financial Officer—was permanently assigned to coordinate the management of special accounts with the regions. This headquarters staff person—the national special accounts coordinator—conducts annual and midyear reviews and holds discussions with regional staff to evaluate regions’ plans to allocate special account funds, among other things. Specifically, every August, in preparation for the annual review of these funds, the coordinator analyzes planning data on all open special accounts using monitoring reports developed from CERCLIS data to ensure that regions are entering quality data into CERCLIS, complying with special account guidance, and effectively managing special accounts. The coordinator told us that she focuses on particular details of a special account in the annual review that may indicate a potential for special account management problems. 
In particular, the coordinator examines accounts that (1) have disbursed no funds, (2) have a large amount of funds remaining although construction is complete or the site has been deleted from the NPL, or (3) are 10 years old or more. In addition, the coordinator stated that she looks closely at certain types of special accounts on a regular basis, such as the following: Accounts with balances over $10 million. The coordinator examines these accounts—which make up 61 percent of available special account balances—to see the types of actions that have occurred. For example, the coordinator told us that she checks whether planned funding for prior fiscal years was actually obligated and disbursed as planned. Specifically, she looks for any indications that a region might be continually shifting the same planned obligated funds from one fiscal year to the next and, if so, investigates the reasons for this shift. Accounts that had planned reclassifications, transfers to the Trust Fund, or planned closures. The EPA coordinator told us that she regularly examines whether these actions have occurred. For example, according to EPA data, in fiscal year 2010, regions planned to reclassify $43.1 million from 75 accounts at the beginning of the year; however, the coordinator found that regions had actually reclassified $26.2 million from 41 accounts by the end of that fiscal year. For those 34 accounts where reclassification was not completed, 20 accounts had their planned reclassifications moved to future fiscal years, 10 accounts were identified as needing funds for further work at the site, and 4 had their planned reclassified funds transferred instead to the Trust Fund. In addition, according to the coordinator, beginning in fiscal year 2011, EPA focused on reviewing those special accounts with available balances less than $10,000 to help ensure that funds are used as quickly as possible so that the accounts can be closed.
According to the coordinator, this review allows the regions to focus their workload efforts on managing the larger special accounts rather than the many accounts with relatively few funds. According to the coordinator, during this annual review, she may pose questions regarding the regions’ planning estimates and suggest certain actions to ensure better management of specific special accounts. For example, the coordinator might suggest that the regions (1) use the funds in a special account as an incentive for future settlements with PRPs, (2) reclassify or transfer to the Trust Fund unneeded funds and close the account when a site cleanup is completed or near completion, (3) correct and update the account (such as entering funds in the proper planning category), (4) use special account funds before using appropriated funds, or (5) move planned obligations that were not obligated in a prior fiscal year into a future fiscal year. For example, during the special account fiscal year 2011 annual review conducted in 2010, the coordinator asked questions regarding the regions’ planning estimates on 285 special accounts. Based on our analysis of these planning data, we identified 65 questions or suggestions related to whether there was potential to reclassify or transfer to the Trust Fund some or all account funds or close an existing account. The remaining questions or suggestions identified a variety of subjects, including whether the planned funding was put in the wrong category on the special accounts planning screen in CERCLIS, and whether special account funds could be used as an incentive for the PRP to do the work. During the annual review, regional staff agree to make changes or adequately explain the reasons why the coordinator’s suggestion should not be taken at that time.
According to EPA regional officials we spoke with, all of the coordinator’s questions during the fiscal year 2011 annual review were addressed before the next midyear planning sessions conducted in the spring of 2011. However, according to a regional official and the coordinator, a question from a previous planning session may be asked again in subsequent planning sessions if it was not entirely or sufficiently addressed. In some cases, the coordinator stated she wanted to obtain more detailed information from the region to ensure that funds were planned for use in the most effective and efficient way possible. In other cases, while funds were planned in accordance with guidance, the regional officials and the coordinator had a difference of opinion on the best planned use of funds. To examine this process in more detail, we sampled 20 accounts from EPA headquarters’ 2010 annual planning review of 285 accounts about which the coordinator had questions. For all of these accounts, the regions addressed all the coordinator’s questions or took the action requested or suggested by the coordinator. For example, the coordinator questioned $989,000 in special account funds that were placed in the “Other” field without a detailed explanation of when the determination for use of these funds would be made. As a result of discussions with EPA headquarters and a change to the planning screen, these funds were moved to a new field created in December 2010—“Protectiveness Contingencies.” According to EPA documentation, this field should be used when current site information indicates there is reasonable potential that a remedy will not be protective in the future. For the account in question, EPA regional officials determined that available special account funds for the site would still be needed to protect nearby residences from the effects of contamination by a hazardous chemical—trichloroethylene (TCE).
The remedy chosen for this TCE contamination in the groundwater—an underground drainage system—had not resulted in lower contamination levels, and EPA had found vapor intrusion in the crawl spaces of houses located at the site. EPA determined that it would need the special account funds to assess whether new mitigation systems need to be installed to prevent inhalation of TCE from the crawl spaces. According to the coordinator, once the planning discussions take place, the regions are expected to make any corrections to planned special account funds in CERCLIS. The coordinator then reviews the data again and uses this information to establish a baseline for fiscal year planning. In addition, the coordinator provides an annual work planning review report to EPA management in December. In March of the following year, the coordinator conducts midyear reviews to follow up on regional issues and to monitor planned actions previously identified. EPA headquarters holds bimonthly national conference calls with regional officials to discuss any special accounts issues that have arisen and to discuss possible changes to the special accounts process to make it more efficient. Several regional officials we spoke with stated that the conference calls and other events, such as the annual Superfund Special Accounts National Meeting and Cost Recovery Training Conference, provide staff with the information they need to effectively manage special accounts. In 2009, in response to the IG’s recommendation that a central management official in headquarters for special accounts be established to ensure management accountability, EPA established a Special Accounts Senior Management Committee. Unlike the coordinator, who has daily responsibility for special accounts, the committee has broader responsibilities. It meets semiannually to provide overall management oversight and monitor the status of special accounts.
The IG had recommended that a single office in headquarters be responsible for the management of special accounts, but EPA officials told us that the agency did not think this was a workable arrangement because the management of special accounts requires the involvement of, and coordination among, several EPA offices, including the regional offices. The committee consists of directors from EPA headquarters offices involved in the special accounts process, including the Office of Superfund Remediation and Technology Innovation, the Office of Site Remediation Enforcement, and the Office of the Chief Financial Officer, as well as directors of relevant Superfund divisions from the regions. Regional representation is rotated among regions every 2 years. A committee charter lists the responsibilities of each office in managing special accounts. According to several EPA regional officials we spoke with who have responsibilities for special accounts, the level of coordination and transparency in managing special accounts between headquarters and the regions has improved over the last few years. For example, one regional official stated that the high level of coordination is evident from headquarters’ review of regional planning data and related meetings to discuss potential issues with specific special accounts. Another regional official stated that headquarters has been very responsive, sharing information and obtaining policy viewpoints from the region, and implementing ways to streamline and improve the process. EPA has issued new strategy and guidance documents to help manage special accounts in response to the IG’s recommendations and EPA headquarters’ own recognition that the agency needed to provide a more nationally consistent approach to managing and monitoring special accounts. Specifically, EPA established the Superfund Special Accounts Management Strategy in 2009. 
This strategy sets forth the agency’s plan to improve the use, management, and monitoring of special accounts to help support Superfund site cleanups. According to EPA documentation, this strategy serves as a road map for EPA regional and headquarters personnel who are responsible for overseeing and managing special accounts. The Special Accounts Senior Management Committee is responsible for implementing this strategy. The strategy focuses on four main areas: (1) coordination and transparency, such as intraagency coordination between the EPA offices that are responsible for managing special accounts; (2) special account use and planning efforts, such as effective regional planning and use of the CERCLIS special account planning screen; (3) monitoring special accounts, such as annual regional work planning and midyear reviews; and (4) regional support, guidance, and training. EPA has also issued guidance on (1) the planning, use, and monitoring of special account funds and (2) the reclassification of special account funds, transfer of funds to the Trust Fund, and closing of special accounts. Planning, use, and monitoring of special account funds. EPA’s special account guidance, issued in 2010, updated and expanded previous EPA guidance that was originally published in 2001 and 2002. This newer guidance provides specific information on the proper use and planning of special account funds throughout the cleanup process. For example, EPA generally expects that planning for the use of special account funds occurs within 3 months after establishing a special account, and planning should be updated on a regular basis during the year. According to EPA regional officials we spoke with, they try to plan for the use of special account funds as soon as possible. However, according to some regional officials, various circumstances can affect whether planning can occur within 3 months.
For example, they noted, large special accounts usually are associated with large, complex cleanup sites, and therefore it is likely to take longer to plan how funds will be used. At the same time, officials said, these accounts are often a priority because of the hazards involved. The workload required to plan and manage accounts with large balances may result in less time available to plan for accounts with smaller balances. EPA also issued a detailed Monitoring Plan for Special Account Planning Data in 2009, which EPA updated in November 2010. This plan describes the process that EPA headquarters and the regions should follow to monitor special account planning data, including the scheduled times when midyear and end-of-fiscal-year final planning data should be reviewed and discussed with the regions, made final, and reported to the Special Accounts Senior Management Committee. Reclassifying special account funds, transferring funds to the Trust Fund, and closing special accounts. EPA headquarters issued detailed guidance on the reclassification of special account funds in 2009, including when EPA regions should consider doing a reclassification and a step-by-step description of the reclassification process the regions should follow. At the same time, it also issued a model memorandum for transferring funds from a special account to the Trust Fund and closing out a special account. In this guidance and memorandum, EPA states that regions must notify headquarters when they plan to reclassify funds, transfer funds to the Trust Fund, or close out an account. The regions must submit a draft memorandum to headquarters staff in the Office of Site Remediation Enforcement and the Office of Superfund Remediation and Technology Innovation to discuss any potential issues with the action prior to proceeding with the action.
However, regional officials we spoke with stated that the process for reclassifying funds has been complex and that the requirements for conducting a reclassification were resource intensive and time consuming. For example, regions had to submit a memorandum with detailed information on the special account—no matter how small the amount of funds to be reclassified. As a result, in April 2011, EPA headquarters issued revised model notifications to streamline and accelerate the review process for reclassifications of funds, transfers of funds to the Trust Fund, and closures of special accounts. According to EPA officials, the most significant change eliminates the requirement for a formal memorandum for those transactions that involve $200,000 or less. For those transactions, EPA regions now need only send an e-mail to headquarters staff informing them of the intended action and provide the appropriate assurances in accordance with guidance (e.g., if an account is to be closed, the region does not anticipate any future deposits). For transactions involving more than $200,000, regions still need to send formal notification memoranda to headquarters. However, these actions can now be included in the same memorandum, rather than separate memorandums, if (1) a transfer of funds to the Trust Fund or a closeout occurs at the same time as or immediately following a reclassification or (2) a closeout occurs at the same time as or immediately following a transfer of funds to the Trust Fund.

Several regional officials we spoke with stated that, overall, the increased guidance has been helpful and that EPA headquarters has done a thorough job of establishing needed special accounts policy and guidance documents, addressing all major aspects and issues relating to the management of special accounts.
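The revised notification rule amounts to a simple threshold check. A minimal sketch follows; the function name and structure are illustrative, not part of EPA's guidance:

```python
# Hypothetical sketch of the April 2011 notification rule described above.
# Under the revised model notifications, transactions of $200,000 or less
# need only an e-mail to headquarters (with the appropriate assurances);
# larger transactions still require a formal memorandum. All names here
# are illustrative, not EPA terminology.

EMAIL_THRESHOLD = 200_000  # dollars

def required_notification(amount):
    """Return the form of headquarters notification a region would send."""
    if amount <= EMAIL_THRESHOLD:
        return "e-mail"
    return "formal memorandum"
```

Combined actions (e.g., a reclassification immediately followed by a transfer to the Trust Fund) can share a single memorandum, which this sketch does not model.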
In addition, officials from EPA regions 3, 5, and 9 stated that their regions have added management tools specific to their regions to enhance those available from headquarters, including a special accounts regional database, an intranet site page, and region-specific guidance, respectively. An EPA headquarters official stated that headquarters has placed increased emphasis in the last few years on better managing the regions’ special account process, particularly the “back end” of the process involving special account reclassifications, transfers to the Trust Fund, and account closures. However, EPA headquarters officials told us that the agency has recognized it may need to be more involved in the “front end” of the special accounts process, when regions are deciding whether to establish a special account. EPA headquarters is evaluating whether it is efficient for the regions to open special accounts for small amounts, or for sites that may be further along in the cleanup process, because regions need to spend time and staff resources monitoring these new accounts. An EPA official stated that EPA’s Superfund Special Accounts Senior Management Committee has approved a review of the policy for establishing special accounts. The committee plans to begin this study in fiscal year 2012 and, if needed, develop further guidance on the opening of special accounts. Furthermore, according to several regional officials, the revised notifications and model memorandum have made the reclassification and transfer processes easier by minimizing the number of staff who need to prepare and approve reclassifications and transfers and by eliminating extensive preparation and headquarters review of notifications. The revised processes allow special account funds to be reclassified and transferred faster.
In addition, in June 2011, EPA issued two fact sheets to regional special accounts staff that provide supplemental information on special account reclassifications, as well as the specific steps required to close out a special account.

We provided a draft of this report to EPA for review and comment. EPA provided technical comments that we incorporated into the report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of EPA, and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

This appendix provides information on the scope of the work and the methodology used to (1) describe the status—including balances, locations, and recent and planned uses—of Superfund special accounts and (2) examine the extent to which the Environmental Protection Agency’s (EPA) headquarters and regions are implementing processes and policies to improve the monitoring and management of Superfund special accounts. To describe the status of the 1,023 special accounts, we obtained and analyzed data from EPA’s Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS) database.
Specifically, for information on special account balances, locations, and recent and planned uses (including funds that had been obligated and disbursed, as well as funds reclassified, transferred to the Superfund Trust Fund, or closed), we analyzed spreadsheets obtained from officials with EPA’s Office of Superfund Remediation and Technology Innovation. For those funds planned to be obligated and reserved for future use, as well as reclassified and transferred, we analyzed spreadsheets of planned funds derived from EPA’s CERCLIS special accounts planning screen, as of October 2010. We also obtained data from and interviewed officials at EPA headquarters and the Office of the Chief Financial Officer regarding the makeup and current status of funds in the Trust Fund. To assess the reliability of the data from EPA’s CERCLIS database used in this report, we analyzed related documentation, examined the data to identify any obvious errors or inconsistencies, and interviewed knowledgeable agency officials about the data to determine whether there were any known problems with the data and to learn more about their procedures for maintaining the data. We determined the data to be sufficiently reliable for the purposes of this report. To examine the extent to which EPA’s headquarters and regions are implementing policies and procedures to improve the monitoring and management of Superfund special accounts, we analyzed documents and interviewed officials from EPA headquarters offices, including the Office of Superfund Remediation and Technology Innovation, the Office of Site Remediation Enforcement, and the Office of the Chief Financial Officer. Specifically, we obtained and reviewed strategies and guidance issued by EPA headquarters to the regions on the special accounts process, as well as available EPA regional documentation on the use of management tools unique to EPA’s regions.
We also analyzed documents and interviewed officials from EPA’s Office of Inspector General (IG) regarding its 2006 and 2009 reports on EPA’s management of special accounts. We also conducted interviews with officials in each of EPA’s 10 regional offices and collected supporting documentation to determine how the regions managed and monitored their special accounts and coordinated with EPA headquarters; if needed, we conducted follow-up interviews to obtain additional data as a result of our analysis. During these interviews, we also obtained information on how EPA regions addressed EPA headquarters questions or recommendations for actions on their special accounts that arose during EPA’s fiscal year 2011 annual review of the regions’ planning data. Specifically, we discussed and obtained data on 20 special accounts (2 special accounts from each region) taken from a sample of 285 accounts about which EPA had questions or recommendations for actions. These accounts were chosen as a random nonprobability sample and therefore cannot be generalized to all special accounts.

We conducted this performance audit from October 2010 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual named above, Vincent P. Price, Assistant Director; Greg Carroll; Laina Poon; and Amy Ward-Meier made key contributions to this report. Elizabeth Beardsley, Cindy Gilbert, Julia Kennon, Ruben Montes de Oca, and Carol Herrnstadt Shulman also made important contributions.
|
Under the Superfund program, EPA has the authority to enter into agreements with potentially responsible parties for them to conduct a cleanup at hazardous waste sites or compel potentially responsible parties to do so. EPA can also conduct cleanups itself and then seek reimbursement. EPA is authorized to retain and use funds received from settlements with these parties in interest-earning, site-specific special accounts within the Trust Fund. These accounts provide resources in addition to annual appropriations to clean up sites. The number of accounts grew slowly until 1995, when EPA encouraged their greater use. After 1995, their number and dollar value increased. EPA headquarters is responsible for overseeing its regions’ management of special accounts. In two reports issued in 2006 and 2009, the EPA IG made recommendations to EPA to better manage these accounts. As requested, this report examines the (1) status—that is, balances, locations, and recent and planned uses—of Superfund special accounts and (2) extent to which EPA’s headquarters and regions have implemented processes and policies to improve the monitoring and management of these accounts. GAO analyzed EPA Superfund program data, guidance, and strategies, and interviewed EPA officials. GAO is not making recommendations in this report. GAO provided a draft of this report to EPA for review and comment. EPA provided technical comments that were incorporated into the report, as appropriate.

From fiscal year 1990 through October 2010, the Environmental Protection Agency’s (EPA) 10 regions collected from potentially responsible parties almost $4 billion in funds that were placed in special accounts. Nearly half of these funds are still available to be obligated for future Superfund cleanup; the remaining funds have already been obligated, but not all of these obligated funds have been disbursed.
According to GAO’s analysis of EPA data, EPA has plans to obligate almost all of the available funds in special accounts over the next 10 years. However, EPA regional officials told GAO that special account funds that are planned to be obligated are estimates rather than commitments, and the planned use of funds often changes as site circumstances warrant. As of October 2010, of the $1.9 billion that EPA had obligated for Superfund cleanup expenses, $1.6 billion had been disbursed. According to GAO’s review of EPA documents and interviews with agency officials, EPA has taken steps, including implementing strategies and guidance, in the last few years to better monitor and manage special accounts. EPA took these steps in response to the EPA Inspector General’s (IG) findings and recommendations, as well as EPA officials’ own recognition that the agency needed to provide better oversight of the special accounts process. These steps include the following: processes to better plan for the use of special account funds, by adding a screen in the agency’s Superfund database that enables EPA regions to enter special account planning data into specific data fields and create reports, so that officials can monitor special account balances against planned obligations for ongoing and future site-specific response activities; increased oversight of special accounts, including designating a national special accounts coordinator who, among other things, conducts annual and midyear reviews and holds discussions with regional staff to evaluate their plans to allocate special account funds, and establishing a Special Accounts Senior Management Committee that meets semiannually to provide overall management oversight and monitor the status of special accounts; and strategies and guidance on how to plan for using special accounts, including an agencywide strategic plan, overall guidance for the regions on the proper use and planning of special account funds throughout the cleanup process, detailed guidance on the reclassification process, and a model memorandum for transferring funds from a special account to the Hazardous Substance Superfund Trust Fund (Trust Fund) and closing out a special account.
|
The military’s disability evaluation process begins with the identification of a medical condition that could render the servicemember unfit for duty. On the basis of medical examinations, a medical evaluation board (MEB) documents any conditions that may limit a servicemember’s ability to serve in the military. The servicemember’s case is then evaluated by a physical evaluation board (PEB) to make a determination of fitness or unfitness for duty. If the servicemember is found to be unfit due to medical conditions incurred in the line of duty, the PEB assigns the servicemember a combined percentage rating for those unfit conditions using VA’s rating system as a guideline, and the servicemember is discharged from duty. This disability rating, along with years of service and other factors, determines subsequent disability and health care benefits from DOD. Appendix II provides additional background information about the MEB and PEB processes. As servicemembers in the Army navigate DOD’s disability evaluation process, they interface with staff who play a key role in supporting them through the process. MEB physicians play a fundamental role because they are responsible for documenting the medical conditions of servicemembers for the disability evaluation case file. In addition, board physicians may require that servicemembers obtain additional medical evidence from specialty physicians, such as a psychiatrist. Throughout the MEB and PEB processes, a board liaison serves a key role by explaining the process to servicemembers, and ensuring that the servicemembers’ case files are complete before they are forwarded for evaluation by the PEB. The board liaison informs servicemembers of board results and of deadlines at key decision points in the process. The military also provides legal counsel to servicemembers in the disability evaluation process. 
The Army, for example, has a policy to provide legal counsel anytime upon request and to assign legal representation at formal PEB hearings, although servicemembers may retain their own representative at their own expense. In addition to receiving benefits from DOD, veterans with service- connected disabilities may receive compensation from VA for lost earnings capacity. Although a servicemember may file a VA claim while still in the military, he or she can only obtain disability compensation from VA as a veteran. VA will evaluate all claimed conditions, whether or not they were evaluated previously by the military service’s evaluation process. If VA finds that a veteran has one or more service-connected disabilities with a combined rating of at least 10 percent, the agency will pay monthly compensation. The veteran can claim additional benefits over time, for example, if a service-connected disability worsens or surfaces at a later point in time. In response to the deficiencies reported by the media, GAO, and the Army Inspector General about the care its injured and ill servicemembers received, the Army took several actions, including, most notably, initiating the development of the AMAP in March 2007. The plan, designed to help the Army become more patient-focused, includes tasks for automating portions of the disability evaluation process and maximizing coordination of efforts with VA. As part of the AMAP, the Army also developed a new organizational structure—Warrior Transition Units—to provide a more focused continuum of care and services to both active-duty and reservist servicemembers. Within each unit, the servicemember is assigned a primary care manager, a nurse case manager, and a squad leader to manage the servicemember’s medical treatment and help ensure that the needs of the servicemember and his or her family are met. 
In May 2007, DOD established the Senior Oversight Committee to bring high-level attention to addressing the systemic problems associated with the care and treatment of returning servicemembers. The committee is cochaired by the Deputy Secretaries of Defense and Veterans Affairs and also includes the military service secretaries and other high-ranking officials within DOD and VA. To conduct its work, the committee established workgroups to address specific issues, including the disability evaluation system. Originally intended to expire in May 2008, the committee was extended to January 2009. Under the direction of the Senior Oversight Committee, DOD and VA are piloting a joint disability evaluation system to improve the timeliness and resource use of DOD’s and VA’s separate disability evaluation systems. Begun in November 2007, the pilot involves cases at three Washington, D.C.-area military treatment facilities, including Walter Reed Army Medical Center. Key features of the pilot include (see fig. 1): a single physical examination conducted to VA standards; disability ratings prepared by VA, for use by both DOD and VA in determining disability benefits; and additional outreach and nonclinical case management provided by VA staff at the DOD pilot locations to explain VA results and processes to servicemembers.

The Army has taken a number of steps to help servicemembers navigate the disability evaluation process through additional supports and streamlining efforts, such as expanding support to servicemembers by hiring more board liaisons and legal personnel. In addition, the Army has established a staffing ratio for board physicians who document servicemembers’ medical conditions. Nevertheless, the Army continues to struggle with meeting internal goals for the staffing and timeliness of processing disability evaluation cases.
In addition, the Army’s increased staffing, outreach efforts, and other supports may be insufficient to ensure that servicemembers understand the process and are aware of their legal rights. The Army faces challenges in demonstrating an impact on servicemember satisfaction, in part because the Army has not yet implemented a satisfaction survey that adequately targets and queries servicemembers who are undergoing disability evaluation. As part of the AMAP, the Army established staffing goals for staff who are key to helping servicemembers navigate the disability evaluation process. Specifically, the Army established caseload targets for board liaisons and board physicians, and articulated the need to provide servicemembers with access to legal counsel at the beginning of the process. For board liaisons—who explain the disability process to servicemembers and are responsible for ensuring that their disability case files are complete—the Army established for the first time a caseload target of 30 servicemembers per liaison in June 2007. At the same time, for board physicians—who evaluate and document servicemembers’ medical conditions for the disability evaluation case file—the Army established a caseload target of 200 servicemembers per physician. Although a caseload target was not set for legal counsel, the Army proposed dedicating 57 additional legal staff at 19 of its 35 treatment facilities to help servicemembers gain access to legal counsel prior to the formal board hearings when counsel is normally assigned. The Army has expanded hiring efforts for board liaisons, but it faces challenges in keeping up with the increased demand for the liaisons’ services. From August 2007 to June 2008, the number of board liaisons grew from 160 to 221—a 38 percent increase Army-wide—and the average caseload per liaison declined from 46 to 29 servicemembers. 
However, as of June 2008, the Army had not met its internal staffing goal of 30 servicemembers per liaison at 14 of its 35 treatment facilities, and about 70 percent of servicemembers in the disability evaluation process were located at facilities with shortages (see fig. 2). Liaisons we spoke with at one of the locations with the highest average caseloads had difficulty in making appointments with servicemembers, which challenged their ability to provide timely and comprehensive support. While the Army plans to hire additional board liaisons, it has encountered difficulty in attracting qualified liaisons at some locations due in part to their remote location. The Army’s ability to meet internal staffing goals is also affected by increases in demand. According to Army data, the total number of servicemembers completing the MEB increased about 19 percent from year-end 2006 to year-end 2007. Regarding MEB physicians, the Army has mostly met its goal for the average number of servicemembers at each treatment facility, but challenges with physician staffing remain. As of June 2008, the Army met its goal of 200 servicemembers per board physician at 28 of 35 treatment facilities. However, 47 percent of servicemembers undergoing disability evaluation are located at the 7 facilities that did not meet the goal. In addition, according to Army officials, physicians are having difficulty in managing their caseloads, even at locations where they have met or are close to the Army’s goal of 200 servicemembers per physician. Several physicians and Army officials told us that the Army could provide better service to servicemembers if more physicians were available to conduct medical evaluations. To help improve case processing, in July 2008 the Army changed the target staffing ratio for board physicians from 200 servicemembers to 120 servicemembers per physician. 
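The staffing-goal comparison described above reduces to a ratio check against the caseload target. A minimal sketch follows, using hypothetical facility figures rather than actual Army data:

```python
# Minimal sketch of the staffing-ratio check discussed above. A facility
# misses the goal when its servicemember-to-physician ratio exceeds the
# target (e.g., the revised target of 120 servicemembers per physician).
# Facility names and counts below are hypothetical, not Army figures.

def facilities_over_target(facilities, target):
    """facilities: dict mapping facility name -> (servicemembers, physicians)."""
    over = []
    for name, (members, physicians) in facilities.items():
        if members / physicians > target:
            over.append(name)
    return sorted(over)
```

The same check applies to board liaisons with a target of 30 servicemembers per liaison.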
Some Army physicians told us that the ratio of servicemembers per physician allows little buffer when there is a surge in caseloads at a treatment facility, and that delays in case processing result from these imbalances. A mobile unit—comprising a board physician, a board liaison, and other staff—has been deployed since 2004 in the Army’s southeast region. According to an Army official who works with the mobile unit, its deployment has helped reduce backlogs where it has been deployed, but such units are not used throughout the Army. In addition to gaps in board liaisons and board physicians, staffing of legal personnel who provide counsel to injured and ill servicemembers throughout the disability evaluation process is currently insufficient. According to the Army, servicemembers should receive legal assistance upon request during both the MEB and PEB processes. While servicemembers may seek legal assistance at any time, the Army’s policy is to assign legal staff to servicemembers when their case goes before a formal PEB. As of June 2008, there were 28 total staff—20 attorneys and 8 paralegals, located at 5 of 35 Army treatment facilities—dedicated to providing assistance to servicemembers undergoing disability evaluation (see fig. 3). In April 2008, the Army recognized that the current staffing was insufficient and approved the hiring of 36 permanent legal personnel—1 attorney and 1 paralegal at each of 18 locations. Although these additional staff—which the Army is in the process of hiring—will help, their number falls short of the originally proposed 57 staff. According to an Army official involved in legal staffing, the 36 additional staff will still be insufficient to achieve the Army’s goal of providing comprehensive legal support early in the evaluation process. Moreover, some of the legal personnel already in place serve on a temporary basis. 
Therefore, their replacements will need to learn about military disability evaluation regulations and processes, which involves a substantial learning curve and could pose a challenge to service delivery and the quality of legal counsel. Army officials also told us that an evaluation is being conducted to determine whether additional attorneys should be hired, and that they expect the evaluation to be completed by year-end 2008.

Although the Army generally meets DOD’s timeliness goal for the PEBs to process cases, it has had less success in meeting timeliness goals for the MEBs. In 2007, the Army satisfied the DOD standard that 80 percent of PEB cases should be processed within 40 days. On average in 2007, PEB cases were processed in 28 days. For the MEBs, the Army has a goal of completing 80 percent of cases within 90 days and must meet a DOD standard that the final administrative and counseling part of the MEB process be completed within 30 days for at least 80 percent of cases. From January to March 2008, 24 of 35 medical facilities did not meet the Army’s 90-day goal for the timely processing of MEB cases. In addition, the percentage of cases Army-wide that have met the goal in a recent 12-month period has trended downward; from April through June 2007, 68 percent of cases met the goal, compared with 55 percent from January through March 2008. Similarly, from January to March 2008, 29 of 35 medical facilities did not meet DOD’s 30-day goal for transferring cases to the PEB. According to Army officials, several factors have challenged the Army’s ability to complete medical board cases in a timely way. In addition to the increase in the number of cases and the shortage of medical board physicians, timely case processing is also challenged by the increasing complexity of cases being evaluated and shortages of the specialist physicians who help perform medical evaluations.
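Each of these timeliness standards is a percentage check: the share of cases completed within the goal must reach 80 percent. A minimal sketch, with illustrative processing times rather than actual Army data:

```python
# Sketch of the timeliness computation described above (e.g., DOD's standard
# that 80 percent of PEB cases be processed within 40 days, or the Army's
# goal of completing 80 percent of MEB cases within 90 days). The processing
# times used in any example are illustrative, not actual case data.

def meets_goal(days_per_case, goal_days, required_share=0.80):
    """Return (share_on_time, goal_met) for a list of case processing times."""
    on_time = sum(1 for d in days_per_case if d <= goal_days)
    share = on_time / len(days_per_case)
    return share, share >= required_share
```

For instance, a facility where 4 of 5 PEB cases finished within 40 days would post an 80 percent on-time share and just meet the DOD standard.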
For example, the incidence of complex conditions, such as PTSD, that the Army must evaluate has more than doubled, from 4.3 percent in 2005 to 9.5 percent in 2007. According to Army officials, shortages of specialist physicians, such as psychiatrists who can perform required evaluations, have contributed to delays in case processing. According to an Army official in charge of mental health staff planning, the Army has plans to hire additional psychiatrists—which is consistent with recommendations made by a DOD task force on mental health—but it faces challenges in reaching its goals quickly, in part, due to the difficulty of attracting psychiatrists to work for the Army. The Army faces particular challenges in meeting timeliness goals for completing reservists’ MEBs and PEBs. In 2007, reservists comprised about 20 percent of servicemembers undergoing disability evaluation in the Army. The average time to complete the MEB and PEB processes in 2007 was 149 days for reservists, compared with 107 days for active-duty servicemembers (see fig. 4). According to Army officials, disability case processing for reservists is treated the same as that for active component servicemembers, but reservist cases may take longer due, in part, to the challenge of obtaining complete personnel and medical documents. For example, reservists may have more difficulty in obtaining a required commander letter—a key document that describes the servicemember’s duties and how his or her medical conditions affect performance of those duties—than active-duty servicemembers because reservists’ command structure is more dynamic and the appropriate commander may be difficult to track down. In addition, many reservists receive care from non-Army physicians as opposed to receiving care at a military treatment facility. According to Army officials, medical documentation provided by non-Army physicians is more likely to contain insufficient information, resulting in delayed case processing. 
One indicator of the inadequacy of documentation prepared by non-Army physicians is the number of cases received by the PEB that get returned to the MEB for additional information. In 2007, about 30 percent of reservist cases were returned because of incomplete information compared with about 15 percent for active-duty servicemember cases. As of June 2008, the Army had not taken steps to identify potential actions that might mitigate this disparity. The Army has taken steps to streamline processes to help servicemembers better navigate the disability evaluation system. For example, in March 2008, the Army reduced the number of documents that could be used to complete the PEB from 38 to 19. Also, the template for the commander’s letter became more detailed, which obviated the need for submitting some forms, including servicemembers’ recent physical fitness examination results. In addition, the Army is developing a computer system to automate the MEB process by replacing paper case files with electronic files, thereby reducing case processing time and improving case tracking. The computer system is being piloted at one facility, and if the pilot is successful, the intent is to replicate it throughout the Army by January 2009. According to the AMAP, this automation project was to be completed by January 2008, but the project was delayed and just began in April 2008. According to Army officials, the late start was due, in part, to delays in finding a cost-effective technology solution, receiving the necessary Army approvals, and satisfying contracting procedures. While the Army has taken steps to address the shortages of legal personnel dedicated to the disability evaluation process, the Army’s outreach efforts may be insufficient to ensure that servicemembers are aware of their rights and the availability of legal counsel earlier in the process. 
Army policy is to advise servicemembers of the availability of legal counsel at the initial briefing when a servicemember begins the disability evaluation process and to assign servicemembers to attorneys once their case goes before a formal PEB. However, the Army does not have a policy that legal staff attend the initial briefing, in part because 30 of 35 locations currently do not have on-site legal staff dedicated to the disability process. To address this gap, Army legal personnel who work specifically on disability evaluation cases have begun conducting additional outreach to servicemembers earlier in the process, including traveling in some cases to facilities that lack such personnel. However, due to limited resources, many facilities do not receive this outreach, while others receive it infrequently. Since the Army hired additional staff in June 2007, 10 of the 30 facilities without on-site legal staff dedicated to disability evaluation counseling had received outreach during the MEB process as of June 2008. Even at the 3 sites we visited that had dedicated legal staff, many servicemembers undergoing disability evaluation with whom we spoke were not aware of the availability of the legal staff or the need for legal counseling. According to an Army official involved in legal staffing, if attorneys counseled servicemembers early in the medical board process, servicemembers could have a better understanding of what steps to take to protect their rights. In addition, according to this same official, early outreach could help the disability evaluation process proceed faster if servicemembers receive counsel on how to prepare in advance for the many steps in the process. In addition to the staffing and outreach initiatives to bolster legal support to servicemembers, the Army has made other supports available to help educate servicemembers about the overall process, but these supports are not without limitations.
For example, although the Army standardized the initial briefing that we previously mentioned, several locations do not use the standardized version when briefing servicemembers. Of the 9 facilities we visited or contacted by telephone, staff at 3 used a different version when briefing servicemembers and did not note the availability of legal counsel for servicemembers during the briefing. In addition, the Army created a Web site for each servicemember to track his or her progress through the MEB and PEB processes, and created a link to information about legal support. However, according to Army officials and some servicemembers we spoke with, many servicemembers do not access the Web site. Of the servicemembers we spoke with who had accessed the Web site, many found it limited in answering their questions and at times out of date. Finally, the Army developed and issued a handbook on the disability evaluation process to help educate servicemembers about the process. Although the handbook can be a helpful tool in describing a complex process, many servicemembers we spoke with did not recall receiving or reading the handbook, possibly due to the nature of their conditions and medications, while some found the disability process confusing despite having reviewed the handbook. Two Army locations we visited provided an additional support that may help reduce servicemembers' confusion with the process, but this support was not offered at other locations we visited. Servicemembers we spoke with at each facility we visited said they found the medical language in the written summary of their medical conditions confusing. At the two locations we previously mentioned, servicemembers were afforded the opportunity to have the written summary explained to them by the physician in order to increase transparency and improve servicemember understanding and acceptance of the disability process. 
According to Army officials at these facilities, the servicemembers who receive the explanation find the medical board assessment less confusing. Officials at one of these facilities also noted that servicemembers who do not understand the written summary of their medical conditions are more likely to become dissatisfied with the disability evaluation process, and that the process can be delayed by late identification of additional medical conditions. Despite its potential benefits, the Army has not adopted this practice at all locations, in part due to staffing and resource constraints. While anecdotal evidence of servicemember confusion with the process is not evidence of widespread or worsening problems, the Army has struggled to assess servicemembers' satisfaction with the disability evaluation process to help demonstrate the impact of its efforts over time. To gauge servicemembers' satisfaction with the process, in June 2007, the Army added questions to a survey that targets servicemembers at various stages in the Army's Warrior Transition Units. In part because of the survey's timing and target respondents, the Army experienced low response rates for the added questions and, therefore, was unable to evaluate the impact of changes to the disability evaluation process. As part of the AMAP, in April 2007, the Army set a goal to improve the survey by September 2007, but delays in developing survey questions postponed deployment until July 2008. The new survey has two sections relevant to the disability evaluation process—one for the MEB portion and another for the PEB portion of the process. However, the surveys will continue to target servicemembers in Warrior Transition Units. Because many servicemembers undergoing disability evaluation are not in such units, survey responses will not necessarily represent the population undergoing disability evaluation. 
In addition, the Army may be challenged to identify weaknesses in some supports due to the limited nature of some survey questions. For example, according to an Army official involved in legal staffing, the new surveys do not ask servicemembers questions regarding the effectiveness of legal outreach and support during the MEB phase of the process. Without a feedback mechanism, such as a valid survey, the Army will be challenged to evaluate the effectiveness of planned increases in legal support and current outreach to servicemembers. DOD and VA have made progress in developing and piloting a streamlined disability evaluation process, but they have much work to do in key areas. Gaps include a lack of clearly identified criteria for determining whether the pilot has been successful and should be implemented on a large scale. Also, although DOD and VA have begun surveying servicemembers in the pilot, they have not yet completed development of surveys to collect customer satisfaction data from nonpilot servicemembers for comparison, or from DOD and VA staff conducting the pilot. Furthermore, DOD and VA have yet to resolve several challenges to expanding the joint process on a large scale if the pilot is deemed successful. These challenges include ensuring that DOD and VA have addressed staffing needs, determining logistical arrangements associated with operating the pilot at additional facilities, and sustaining top agency management focus on the pilot. With the pilot under way since November 2007, DOD and VA have focused on collecting detailed data on pilot performance. As we noted in our February 2008 testimony, DOD and VA moved quickly and collaboratively to design and implement the pilot, and have been working toward a Senior Oversight Committee review of the pilot's progress. DOD and VA officials expect this review to lead to a decision on whether to expand the pilot to a few facilities beyond the current 3 facilities. 
According to DOD and VA officials, adding a small number of facilities to the pilot would allow them to collect additional information on pilot performance and to test pilot procedures in different locations with varying servicemember populations and disability evaluation resources. To this end, DOD and VA are in the process of collecting data to compare potential sites for an initial pilot expansion on the basis of several factors. After this initial expansion, the agencies anticipate a decision regarding the worthiness of the pilot process and whether it should become their standard disability evaluation process. DOD is required to provide the Congress with a final assessment of the pilot 3 months after its scheduled November 2008 end. DOD and VA have established methods for measuring certain key aspects of the pilot, such as timeliness of decisions and appeal rates, and have developed a comprehensive mechanism to track these and other measures. The mechanism enables pilot planners to assess their work relative to numerous goals that fall under the following six initiatives: (1) improve disability evaluation policy and procedures, (2) improve servicemember and stakeholder satisfaction with the process, (3) establish an awareness and training program for evaluation system stakeholders, (4) expand the pilot process, (5) meet pilot and nonpilot milestones, and (6) ensure funding to support development of an integrated system. For example, under the first initiative, pilot planners intend to compare various case processing timeliness measures against standards. These metrics include the percentage of MEB cases completed within 80 days, and the percentage of VA benefits letters issued within 30 days of a servicemember's separation from the military. By applying agreed-upon weights to these and other measures, pilot planners will assess whether they have met, partially met, or not met each objective, and signal the overall status of the workgroup's efforts. 
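The weighted met/partially met/not met rollup described above can be illustrated with a minimal Python sketch. This is a hypothetical illustration only: the report does not specify the scorecard's actual metrics, weights, or thresholds, so every value below (the 80 percent "partially met" band, the metric weights, and the objective cutoffs) is an assumption for demonstration, not the agencies' actual methodology.

```python
# Hypothetical sketch of a weighted scorecard rollup. Metric names,
# weights, and thresholds are illustrative assumptions only.

def metric_score(actual, target):
    """Score one metric: 1.0 if the target is met, 0.5 if partially met, else 0.0."""
    if actual >= target:
        return 1.0
    if actual >= 0.8 * target:  # assumed "partially met" band
        return 0.5
    return 0.0

def objective_status(metrics):
    """Roll weighted metric scores up to an overall objective status.

    Each entry is (weight, actual share, target share).
    """
    total_weight = sum(w for w, _, _ in metrics)
    score = sum(w * metric_score(a, t) for w, a, t in metrics) / total_weight
    if score >= 0.9:
        return "met"
    if score >= 0.5:
        return "partially met"
    return "not met"

# Example using two timeliness metrics like those named in the report
# (share of MEB cases completed within 80 days; share of VA benefits
# letters issued within 30 days of separation), with assumed weights.
metrics = [
    (0.6, 0.85, 0.80),
    (0.4, 0.70, 0.90),
]
print(objective_status(metrics))
```

Under these assumed weights and cutoffs, strong performance on one heavily weighted metric is not enough on its own to mark an objective "met," which mirrors the report's point that the agencies must still decide how much improvement the pilot has to demonstrate before the joint process is judged a success.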
While DOD and VA have developed this mechanism to help measure pilot performance, they have yet to finalize criteria for applying those metrics to determine whether the pilot is worthy of eventual full-scale implementation. Pilot planners have indicated that the timeliness of decisions will be a factor in evaluating the effectiveness of the pilot, but there are several potential measures of timeliness. Furthermore, while they are collecting timeliness data from each service to draw comparisons to the pilot process, it is unknown whether comparisons will be made in aggregate; by service; or by subgroups, such as Army reservists. Finally, DOD and VA have not yet decided how much improvement must be demonstrated by the pilot as indicated by any such comparisons. One set of metrics to be used for evaluating the pilot is servicemember and stakeholder satisfaction, and DOD and VA are in the process of developing and administering several surveys to measure their satisfaction; however, much work remains. Pilot planners intend to survey the following four groups of people: (1) all pilot participants; (2) a sample of servicemembers in the disability evaluation process outside of the pilot; (3) a family member of each pilot participant; and (4) select stakeholders involved in the pilot process, such as board liaisons and VA nonclinical case managers. Surveys of the first group—pilot participants—will be administered after three phases in the process: the MEB; the PEB; and transition, including discharge from the service. For example, the MEB phase survey asks pilot participants about their satisfaction with the assistance provided by DOD and VA liaisons, the thoroughness of their physical examination, and the fairness of the board's decision. Although pilot planners have begun to survey pilot participants, it is unclear when they will be able to incorporate survey results of this group and other groups into their decision making. 
Survey data from the pilot’s first year is expected to be available for analysis in December 2008. However, surveys of pilot participants only began in May 2008 and, according to pilot planners, it is unlikely that DOD and VA will have sufficient responses in December, particularly from servicemembers who have gone through the later pilot phases, to assess satisfaction with the pilot process. Pilot planners estimate that about 100 servicemembers will have completed the PEB under the pilot by November 2008, but they are unsure if this number will be sufficient for evaluation purposes. Relatedly, pilot planners intend to compare the survey results for pilot participants against survey results for an appropriate group of servicemembers who have undergone military disability evaluation outside of the pilot. Such survey data would help DOD and VA assess whether the pilot is improving servicemembers’ satisfaction with their experiences with disability evaluations. However, this survey has not yet been deployed, and it is unclear when DOD and VA will have sufficient responses from servicemembers outside of the pilot to help measure any improvements in servicemember satisfaction under the pilot process. Pilot planners face further challenges associated with analyzing the survey results. According to DOD officials, servicemembers outside of the pilot who are to be surveyed will be selected to reflect a proportional representation across certain characteristics, such as branch of military service. However, as of the time of our review, these officials had not yet decided how they will select a comparison group with similar demographic and disability profiles as pilot participants. Furthermore, pilot planners intend to survey a family member of each pilot participant, but they do not have a clear plan for assessing the results. For example, they do not plan to survey a similar group of nonpilot family members, so the usefulness of family member survey results may be limited. 
Regarding these surveys, at the time of our review, pilot planners had not sufficiently coordinated their design or development with other surveys of wounded, ill, and injured servicemembers and their families. Although DOD officials noted that they took steps to coordinate with other survey efforts under the Senior Oversight Committee, coordination has not occurred with service-specific survey efforts. For example, Army officials told us that coordination has not occurred with their initiative to survey servicemembers in the Army's disability evaluation process. Without adequate coordination, these separate efforts could lead to inefficiencies in collecting data from servicemembers and could cause survey fatigue and potentially jeopardize response rates if people are asked to participate in several surveys. Finally, DOD and VA will be challenged to ensure the quality and consistency of DOD fitness decisions and VA rating decisions prior to determining the worthiness of the pilot concept. VA plans to review all of its decisions on pilot cases as part of its Systematic Technical Accuracy Review. Such reviews have not yet begun because, according to VA, it has received few cases requiring disability ratings. VA expects to begin conducting such reviews in October 2008 when it anticipates having a sufficient number of cases for statistical analysis. Under the pilot design, the task of performing quality reviews of PEB fitness decisions was given to DOD's Disability Advisory Council. As of July 2008, the process of sampling decisions for review, the criteria for assessing decisions, and the mechanism for providing feedback to the PEBs had not been determined. In terms of consistency, the agencies did not yet have plans to ensure the consistency of fitness decisions within each service or, ultimately, of VA rating decisions across VA benefits offices. 
As the pilot progresses, DOD and VA are collecting information that could be used to identify resource needs and implementation challenges if they decide to implement the pilot process on a large scale. For example, DOD and VA are tracking pilot operational issues, for use in refining pilot procedures and addressing operational problems, as well as identifying challenges associated with implementation at additional facilities. DOD and VA have conducted pilot review sessions with stakeholders to discuss implementation challenges. In addition, VA has been keeping a log of pilot implementation issues and the status of their resolution. For example, VA staff at pilot sites reported difficulties in ensuring that servicemembers report to scheduled physical examinations. An update was issued to the pilot’s guidance requiring that servicemembers be present at their assigned pilot medical facility for a long enough period to ensure their presence for examinations and MEB processing. Also, VA staff have identified problems with obtaining complete service medical records in some cases, leading to another update to the pilot guidance. Other key implementation challenges identified by DOD and VA officials would be to adjust logistical arrangements to accommodate facility differences and to potentially include other servicemembers in the pilot process. For example, different facilities may require different procedures for performing the single physical examination. While the 3 original pilot locations have physical examinations performed at a nearby VA medical center, some military medical facilities are not near a VA medical center and, therefore, lack comparable access. At such facilities, examinations may need to be conducted by VA contract physicians, DOD physicians, or private physicians under DOD’s health insurance program. DOD’s pilot guidance allows for such arrangements, provided that examinations are conducted according to VA criteria. 
According to DOD and VA officials, one reason for an initial pilot expansion beyond the original 3 facilities is to test alternative arrangements where VA examiners are not as readily available. In addition, the pilot process does not currently include servicemembers who are being reexamined after being placed on temporary disability retirement by a PEB. According to pilot planners, inclusion of this group of servicemembers in the pilot would require adjustments to pilot guidance and procedures. Another significant implementation issue that has yet to be resolved is estimating the additional resources, particularly DOD and VA case management staff, required to ensure that the process flows smoothly at additional facilities. VA officials stated that they are tracking VA resource needs at the pilot facilities, particularly VA nonclinical case managers, and have estimated VA resource needs for potential large-scale pilot expansion. In addition to seeking additional VA nonclinical case managers, VA is considering assigning VA service representatives to pilot sites. According to VA officials, VA is considering this assignment because VA nonclinical case managers perform some claims processing functions in the pilot, such as creating claim folders and scheduling physical examinations, that are normally the responsibilities of VA service representatives. Pilot planners also told us they have formed a working group to estimate financial resources needed for large-scale implementation of a joint disability evaluation process, including administrative costs and any increases in DOD and VA disability benefit payments. According to VA officials, they are in the process of developing these estimates to help prepare VA's 2010 budget. 
In addition to anticipating challenges and resource requirements, successful implementation of the pilot process on a larger scale would require sustained management focus to help ensure that needed resources are identified, implementation challenges are overcome, and focus on achieving intended results is maintained. Currently, that focus is provided by the Senior Oversight Committee, which was scheduled to last 1 year through May 2008, but was extended to January 2009. Anticipating the committee's dissolution, DOD and VA have been planning to move its functions, including operation of the disability evaluation pilot, to the DOD-VA Joint Executive Council. According to DOD and VA officials, they are working to incorporate the pilot into the Joint Executive Council's strategic plan—which is currently silent on the pilot and on how the functions of the Senior Oversight Committee would be transferred to the council. According to DOD and VA officials, the next strategic plan, scheduled for approval in October 2008, is expected to include the disability evaluation pilot. In the meantime, concerns have been raised about whether the Joint Executive Council will be able to provide as much management attention as the Senior Oversight Committee currently provides. According to DOD and VA officials, the Senior Oversight Committee differs from the Joint Executive Council because the former has full-time staff detailed from DOD and VA. Furthermore, decisions have not been made regarding whether staff currently working under the Senior Oversight Committee will continue their roles and responsibilities of overseeing the pilot under the Joint Executive Council, and for how long. 
Without knowledgeable staff and continued management focus, especially during the critical junctures leading to, and potentially including, phased-in large-scale implementation, the pilot may lack sufficient oversight and cross-agency coordination, raising risks to the sound evaluation of the pilot and to the successful implementation of potential widespread changes to the disability evaluation process. For those servicemembers whose military service was cut short due to illness or injury, DOD's disability evaluation is an important issue because it affects their employment and, in many cases, whether they will receive DOD benefits such as retirement pay and health care coverage. Despite several initiatives, many servicemembers remain confused by the military's process for making these important determinations and are unaware of the potential benefit of consulting with an attorney during the process. Once servicemembers become veterans, VA's disability evaluation also affects the cash compensation and other disability benefits that they may receive. Going through two complex disability evaluation processes can be difficult and frustrating for servicemembers and veterans. Delayed decisions and confusing policies have eroded the credibility of the system. The Army is struggling to develop effective strategies to address growing and shifting demand for disability evaluations and to meet timeliness goals—overall, but especially for reservists. Even if the Army is able to match the supply of medical board staff to the changing demand for its services, without a strategy to address the particular challenges of documenting reservists' cases, the Army will not be able to evaluate their conditions in a timely way. In addition, without a concerted approach to ensuring transparency throughout the process, especially regarding the medical basis of the disability decision and the availability of legal support, servicemembers will remain confused by and dissatisfied with the process. 
Surveying servicemembers who have gone through the Army's disability evaluation process will help the Army track whether its hiring efforts and other initiatives are benefiting servicemembers and addressing their confusion. However, even if the Army is able to overcome challenges and demonstrate improvements in the evaluation process, its efforts will not address the systemic problem of having two consecutive evaluation processes that can lead to different outcomes. To address identified systemic problems, DOD and VA are collaborating on a disability evaluation pilot that has potential for reducing the time it takes to receive a decision from both agencies, improving the consistency of evaluations for individual conditions, and simplifying the overall process for servicemembers and veterans. Expanding the pilot to a few more locations may be prudent as a way of testing the process under different conditions, such as at locations lacking easy access to a VA medical facility for physical examinations. However, a much larger expansion would entail some risks; planners should be transparent about, and prepared for, such risks. Without finalizing criteria and related analysis plans well before assessing whether the pilot is successful and merits larger expansion, DOD and VA may ultimately make significant implementation decisions without sufficient data on whether the pilot is producing the desired results. Criteria could include comparative metrics that help the agencies measure the pilot's performance against the current process, including whether decision timeliness and servicemember and veteran satisfaction have improved. Even if the pilot is proven successful, DOD and VA's ability to implement significant changes on a large scale is unknown. Adjusting pilot resource needs and logistical arrangements could prove challenging as a revised process is rolled out to more DOD and VA facilities. 
Without sufficient planning for and focused management attention on widespread implementation of a joint process that would dramatically change business processes across many locations at both agencies, DOD and VA could jeopardize the systems’ successful transformation, and potentially exacerbate confusion and frustration among servicemembers in the process. We recommend that the Secretary of Defense direct the Secretary of the Army to take the following actions: To help reduce delays in MEB case processing due to shortages of board physicians and caseload surges at particular treatment facilities, the Army should consider developing more mobile units of medical board staff, including physicians who could be flexibly deployed to treatment facilities where servicemembers are experiencing case processing delays. To address the disparity in timeliness of MEB and PEB case processing for reservists compared with active-duty servicemembers, the Army should explore approaches to improving reservists’ case development, such as ensuring adequate documentation of their military duties and medical conditions. To further reduce servicemember confusion about and distrust of the disability evaluation process, the Army should explore more widespread implementation of promising practices for: ensuring that servicemembers understand their rights to and are aware of the availability of legal counsel during the disability evaluation process, such as having legal counsel present at in-briefings where feasible; and improving each servicemember’s understanding and acceptance of the written summary of medical conditions that underlies his or her disability evaluation, such as affording servicemembers an opportunity to review the summary with the physician who prepared it before the summary is finalized. 
To help the Army assess the effectiveness of its outreach and supports available to servicemembers undergoing disability evaluations, it should administer existing surveys to a representative sample of servicemembers undergoing the MEB and PEB processes, and consider developing additional questions to better assess outreach and support provided by Army legal staff throughout the process. We also recommend that the Secretary of Defense and the Secretary of Veterans Affairs take the following actions: To ensure that the evaluation of the DOD-VA pilot process is sound, and that any decisions on large-scale implementation of it are well-founded, DOD and VA should develop complete plans to evaluate the pilot's success and guide potential large-scale expansion decisions. Such plans should include criteria for determining how much improvement should be achieved under the pilot on various performance measures—such as decision timeliness and servicemember satisfaction—to merit implementing the joint process throughout DOD and VA. To ensure that pilot evaluation and any large-scale implementation of the joint disability process are done successfully, DOD and VA should sustain collaborative executive focus on the pilot and retain knowledgeable staff by, for example, continuing the agencies' joint Senior Oversight Committee or transferring the responsibilities to an equally staffed structure with the same level of executive commitment. We provided a draft of this report to DOD and VA for review and comments. The agencies provided written comments, which are reproduced in appendixes III and IV. DOD and VA generally agreed with our recommendations. With respect to the Army's disability evaluation process, DOD agreed with all but one of the recommendations, with which it partially agreed. 
DOD also commented on relevant steps that the Army is taking on each recommendation, as follows: In response to our recommendation that the Army consider developing more mobile units of medical board staff that could be flexibly deployed where servicemembers are experiencing case processing delays, DOD agreed and stated that it planned to conduct a study on the effectiveness of a mobile MEB team by January 1, 2009. In response to our recommendation that the Army explore approaches to improving reservists' case development to address the disparity in the timeliness of MEB and PEB case processing for reservist servicemembers versus active-duty servicemembers, DOD agreed and stated that the Army is attempting to automate the MEB process for all of its servicemembers, but indicated that reservists typically have unique challenges in obtaining necessary information. As we noted in our report, DOD may need a broad strategy to address these challenges for reservists and, therefore, should explore approaches to improving reservists' case development. In response to our recommendation that the Army explore more widespread implementation of promising practices to ensure that servicemembers understand their rights to and are aware of the availability of legal counsel during the disability evaluation process, DOD partially agreed. The agency noted that having legal counsel present at in-briefings could diminish their capacity to provide actual counsel to other servicemembers who are further along in the process, and that the in-briefing forum does not lend itself to a confidential exchange of information between servicemembers and legal counsel. DOD noted alternative methods for raising servicemembers' awareness of their legal rights and available services, including screening a relevant video that the Army is in the process of developing. Alternative methods could successfully address our recommendation if they are widely implemented. 
In response to our recommendation that the Army explore more widespread implementation of promising practices to improve servicemembers’ understanding and acceptance of the written summary of their medical conditions, DOD agreed. The agency mentioned multiple emerging best practices—such as having the servicemember present when the physician dictates the summary to enable timely discussion—that, if widely implemented, could help ensure that all servicemembers benefit from them. In response to our recommendation that the Army administer existing satisfaction surveys to a representative sample of servicemembers undergoing MEBs and PEBs and consider developing additional questions to better assess legal support, DOD agreed and indicated that the Army was in the process of launching a modified survey. However, DOD’s comments indicated that the new survey will not be representative of servicemembers undergoing disability evaluation by the Army because the survey will exclude servicemembers who are undergoing disability evaluation, but are not assigned to a Warrior Transition Unit. Because many servicemembers—particularly reservists—are not assigned to a Warrior Transition Unit, excluding them from the survey will generate information that is not representative of all servicemembers undergoing disability evaluation by the Army and, therefore, may yield skewed data. With respect to DOD and VA’s efforts to pilot a joint disability evaluation system, the agencies agreed with our recommendations and provided additional comments, as follows: In response to our recommendation that DOD and VA develop criteria to inform decision making on potential expansion of the pilot process, DOD and VA agreed. They stated that their balanced scorecard—the mechanism that they are using to track pilot performance—will help them accomplish this objective. 
Although a mechanism like the balanced scorecard is useful for tracking certain key measures, at the time of our review, the balanced scorecard did not identify minimum levels of performance improvement that should be achieved for certain metrics before the pilot is considered successful and merits widespread implementation. In response to our recommendation that DOD and VA sustain collaborative executive focus on the pilot and retain knowledgeable staff, DOD and VA agreed. VA officials have reported that, with DOD, they have developed a legislative proposal for a new coordinating organization, the Senior Executive Oversight Committee, that would replace both the Senior Oversight Committee and Joint Executive Council. To the extent that oversight of the pilot transitions to a new organization, DOD and VA will need to guard against the potential loss of continuity in pilot monitoring activities, such as planning, resource allocation, and evaluation. As part of their sustained executive focus, DOD and VA leadership should, to the extent possible, retain staff who are knowledgeable about the history and management of the pilot to provide continuity to pilot management and oversight. Without such continuity and sustained focus, sound implementation and assessment of the pilot may be jeopardized. We are sending copies of this report to relevant congressional committees, the Secretary of Veterans Affairs, the Secretary of Defense, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 or [email protected] if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. 
The objectives of our review were to examine (1) recent actions taken by the Department of the Army to help ill and injured servicemembers navigate its disability evaluation process and (2) the status, plans, and challenges of the Department of Defense (DOD) and the Department of Veterans Affairs’ (VA) efforts to pilot and implement a joint disability evaluation system. To address the first objective, we analyzed relevant documents, including Army forms, Army policy memorandums, relevant DOD directives, and a related Army Inspector General report. We reviewed staffing and case processing data related to disability evaluation initiatives established in the Army Medical Action Plan (AMAP). We did not verify the accuracy of these data. However, we interviewed agency officials knowledgeable about the data, and we determined that they were sufficiently reliable for the purposes of this report. Out of the Army’s 35 treatment facilities, we visited 4—Walter Reed Army Medical Center (Washington, D.C.), Brooke Army Medical Center (Fort Sam Houston, Texas), Carl R. Darnall Army Medical Center (Fort Hood, Texas), and Madigan Army Medical Center (Fort Lewis, Washington)—that are near the 3 sites where the Army conducts physical evaluation boards (PEB) to talk with Army officials about efforts to improve the disability evaluation process for servicemembers, and to obtain views from servicemembers about how these efforts are affecting them. To help assess legal outreach and other supports to servicemembers, we also spoke with officials from 5 treatment facilities that are not located near any of the Army’s PEB sites. These 5 facilities were selected on the basis of varying size (small, medium, and large) and representation from the different geographic areas of the Army’s medical organization. These facilities were Bassett Army Community Hospital (Fort Wainwright, Alaska), Dwight D. 
Eisenhower Army Medical Center (Fort Gordon, Georgia), Keller Army Community Hospital (West Point, New York), Munson Army Health Center (Fort Leavenworth, Kansas), and Tripler Army Medical Center (Honolulu, Hawaii). In addition, we spoke with officials from the Army’s Community Based Health Care Organization (CBHCO) system and visited a CBHCO location in Massachusetts to learn about issues that concern reservists entering the disability evaluation process from the CBHCO system. To address DOD and VA efforts to pilot a joint disability evaluation system, we reviewed these agencies’ pilot guidance documents, and visited the 3 original pilot facilities—Walter Reed Army Medical Center (Washington, D.C.), National Naval Medical Center (Bethesda, Maryland), and Malcolm Grow Air Force Medical Center at Andrews Air Force Base, Maryland. We spoke with DOD and VA officials to learn about the status, plans, and challenges related to evaluating the disability evaluation pilot and to potentially implementing a joint system. Our interviews with DOD officials included officials of the Office of the Under Secretary of Defense (Personnel and Readiness) and its pilot contractor, Booz Allen Hamilton; officials of the services’ Surgeon General offices and officials responsible for their disability evaluation processes; and officials of the original pilot facilities, including medical evaluation board (MEB) and PEB members, and board liaisons. We also discussed pilot surveys with officials of the Defense Manpower Data Center, which is developing and administering the surveys to help evaluate the pilot. In VA, we spoke with officials of the Compensation and Pension Service, Veterans Benefits Administration, which is responsible for VA’s pilot activities. 
To analyze pilot implementation issues, we reviewed records from DOD and VA pilot stakeholder meetings—including pilot review, expansion planning, and stakeholder training sessions—and reviewed the Wounded, Ill, and Injured Senior Oversight Committee records related to the pilot. Furthermore, we reviewed weekly reports that included the number of cases by phase of the process in the pilot. We conducted this review from July 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The military disability evaluation process involves two phases: the MEB and the PEB. There are a number of steps in the process and several factors that play a role in the decisions that are made at each step (see fig. 5 and the text that follows the figure). There are four possible outcomes in the disability evaluation process: a servicemember can be (1) found fit for duty; (2) separated from the service without benefits (servicemembers whose disabilities were incurred while not on duty or as a result of intentional misconduct are discharged without disability benefits); (3) separated from the service with lump sum disability severance pay; or (4) retired from the service with permanent monthly disability benefits or placed on the temporary disability retired list (TDRL). The disability evaluation process begins at a military treatment location, when a physician identifies a condition that may interfere with a servicemember’s ability to perform his or her duties. 
On the basis of a physical examination, and specialty consultations if necessary, the physician prepares a narrative summary detailing the servicemember’s injury or conditions. This evaluation is used to determine if the servicemember meets the military’s retention standards, according to each service’s regulations. This process is referred to as “the MEB.” Servicemembers who meet retention standards are returned to duty, and those who do not are referred to the PEB. The PEB is responsible for determining whether servicemembers have lost the ability to perform their assigned military duties due to injury or illness, which is referred to as being “unfit for duty.” If the member is found unfit, the PEB must then determine whether the condition was incurred or permanently aggravated as a result of military service. While the composition of the PEB varies by service, it typically comprises one or more physicians and one or more line officers. Each of the services conducts this process for its servicemembers. The Army has three PEBs located at Fort Sam Houston, Texas; Walter Reed Army Medical Center in Washington, D.C.; and Fort Lewis, Washington. The Navy has one PEB located at the Washington Navy Yard in Washington, D.C. The Air Force has one PEB located in San Antonio, Texas. The first step in the PEB process is the informal PEB—an administrative review of the case file without the presence of the servicemember. The PEB makes the following findings and recommendations regarding possible entitlement for disability benefits: Fitness for duty: The PEB determines whether the servicemember “is unable to reasonably perform the duties of his or her office, grade, rank, or rating,” taking into consideration the requirements of a member’s current specialty. Fitness determinations are made on each medical condition presented. Only those medical conditions that result in the finding of “unfit for continued military service” will potentially be compensated by DOD. 
Servicemembers found fit must return to duty. Compensability: The PEB determines if the servicemember’s injuries or conditions are compensable, considering whether they existed prior to service (referred to as “having a preexisting condition”) and whether they were incurred or permanently aggravated in the line of duty. Servicemembers found unfit with noncompensable conditions are separated without disability benefits. Disability rating: When the PEB finds a servicemember unfit and his or her disabilities are compensable, it applies the medical criteria defined in the Veterans Administration Schedule for Rating Disabilities to assign a disability rating to each compensable condition. The PEB then determines (or calculates) the servicemember’s overall degree of service-connected disability. Disability ratings range from 0 (least severe) to 100 percent (most severe) in increments of 10 percent. Depending on the overall disability rating and number of years of active-duty or equivalent service, the servicemember found unfit with compensable conditions is entitled to either monthly disability retirement benefits or lump sum disability severance pay. In disability retirement cases, the PEB considers the stability of the servicemember’s condition. Unstable conditions are those for which the severity might change, resulting in higher or lower disability ratings. Servicemembers with unstable conditions are placed on TDRL for periodic PEB reevaluation at least every 18 months. While on TDRL, members receive monthly retirement benefits. When members on TDRL are determined fit for duty, they may choose to return to duty or leave the military at that time. Members who continue to be determined unfit for duty after 5 years on TDRL are separated from the military with monthly retirement benefits, discharged with severance pay, or discharged without benefits, depending on their condition and years of service. 
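The “overall degree of service-connected disability” mentioned above is not a simple sum of the individual ratings; under the rating schedule, each additional rating applies only to the servicemember’s remaining capacity, and the final combined degree is rounded to the nearest 10 percent. The following Python sketch is an illustrative simplification of that combining rule (the function name and rounding shortcut are ours; the official determination uses the schedule’s combined ratings table):

```python
def combined_rating(ratings):
    """Combine individual disability ratings (0-100, in steps of 10).

    Illustrative simplification of the successive-combination rule:
    ratings are combined from most to least severe, and each new rating
    applies only to the remaining (unrated) portion of the member's
    capacity. Not the official table; for explanation only.
    """
    combined = 0.0
    for r in sorted(ratings, reverse=True):
        # Each rating reduces only the capacity left after prior ratings.
        combined = combined + r * (100 - combined) / 100
    # The final combined degree is rounded to the nearest 10 percent,
    # with values ending in 5 rounding up.
    return int((combined + 5) // 10) * 10

# Example: ratings of 50 and 30 combine to 65, which rounds to 70.
print(combined_rating([50, 30]))  # -> 70
```

This illustrates why, for example, two 50-percent ratings do not yield a 100-percent combined rating: the second 50 applies only to the remaining half of capacity.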
Servicemembers have the opportunity to review the informal PEB’s findings and may request a formal hearing with the PEB; however, only those found unfit for duty are guaranteed a formal hearing. The formal PEB conducts a de novo review of referred cases and renders its own decisions based on the evidence. At the formal PEB hearing, servicemembers can appear before the board, put forth evidence, introduce and question witnesses, and have legal counsel help prepare their cases and represent them. If servicemembers disagree with the formal PEB’s findings and recommendations, they can, under certain conditions, appeal to the reviewing authority of the PEB. Once the servicemember either agrees with the PEB’s findings and recommendations or exhausts all available appeals, the reviewing authority issues a final disability determination concerning fitness for duty, disability rating, and entitlement to benefits. Michele Grgich (Assistant Director), Joel Green (Analyst-in-Charge), Bryan Rogowski, Barbara Steel-Lowney, and Greg Whitney made significant contributions to this report. Walter Vance and Cindy Gilbert provided assistance with research methodology and data analysis. Bonnie Anderson, Rebecca Beale, Elizabeth Curda, and Anna Kelley provided subject matter expertise. Susannah Compton helped draft the report, and Mimi Nguyen provided assistance with graphics. Roger Thomas provided legal counsel.
In February 2007, a series of articles in The Washington Post about conditions at Walter Reed Army Medical Center highlighted problems in the military's disability evaluation system. Subsequently, the Department of the Army, Department of Defense (DOD), and Department of Veterans Affairs (VA) undertook initiatives to address concerns with the disability evaluation process. In 2007, the Army took steps to streamline its process, and DOD and VA began piloting a joint evaluation system to address systemic concerns about timeliness and the potential inefficiency of having separate disability evaluation systems. GAO was asked to examine (1) recent actions by the Army to help servicemembers navigate its disability evaluation process and (2) the status, plans, and challenges of DOD and VA's efforts to pilot and implement a joint disability evaluation system. GAO interviewed Army, DOD, and VA officials; visited Army treatment facilities; and reviewed data from these sources. The Army has taken a number of steps to help servicemembers navigate the disability evaluation process through additional support mechanisms and streamlining efforts, but faces challenges in meeting internal goals and demonstrating impact. Most significantly, the Army has begun hiring more staff to facilitate the process for servicemembers, such as legal personnel, and setting staffing goals for key positions, such as for board liaisons and physicians. However, the Army has not met its internal staffing goals for board liaisons and physicians, and continues to face shortages in legal personnel. The Army has also struggled to meet timeliness goals for case processing and has even experienced negative trends over the last year, despite streamlining initiatives. Furthermore, the Army faces particular challenges in meeting timeliness goals for completing reservists' evaluations, due in part to the challenge of obtaining complete personnel and medical documents from nonmilitary sources. 
Besides staffing initiatives, the Army has also taken steps to help servicemembers better understand and navigate the process. However, we found that these efforts varied by location, and that many servicemembers we spoke with were unaware of the availability of expert legal counsel. To increase transparency of the disability process, one location we visited afforded servicemembers the opportunity to have the written summary of their medical conditions explained to them, but not all Army locations have adopted this practice. In general, the Army faces challenges in demonstrating that its efforts to date have had an overall positive impact on servicemembers' satisfaction, because it has not implemented a survey that adequately targets and queries servicemembers who are undergoing disability evaluations. Under direction from the agencies' joint Senior Oversight Committee, DOD and VA moved quickly to design and pilot a joint disability evaluation process, but gaps remain in their plans to evaluate the pilot and potentially implement a joint process on a larger scale. DOD and VA have established a comprehensive mechanism for measuring key aspects of the pilot. However, they have not yet decided on criteria for determining whether the joint process is worthy of widespread implementation. In addition, although DOD and VA are in the process of developing surveys to measure servicemember and stakeholder satisfaction, sufficient comparative data on servicemember satisfaction may not be available when the pilot is scheduled to end. DOD and VA are also in the process of tracking challenges that have arisen in implementing the pilot, but they have not yet resolved several challenges associated with expanding the joint process if the pilot is deemed successful. Such challenges include determining who will perform the single physical examination when a VA medical center is not nearby. 
Beyond these concerns, DOD and VA may ultimately need to prepare for challenges that come with implementing large-scale system changes--such as those envisioned by the pilot. These challenges include sustaining management attention to ensure that the changes are implemented well and are producing the intended results. However, the Senior Oversight Committee's planned January 2009 end raises questions about whether management attention will be maintained over the long term.
According to the Service, approximately 855,000 customers generate business mail, and over 4,600 Service employees handle that mail at 1,900 Business Mail Entry Units and 850 Detached Mail Units nationwide. In fiscal year 1998, business mail was the largest contributor to the Service’s total mail revenue and total mail-piece volume. Service-provided operational information shows that in fiscal year 1998, business mail accounted for 49 percent of the Service’s $58 billion in total mail revenue and 66 percent of the nearly 200 billion mail pieces handled by the Service. Because of the importance of business mail to its overall operations, the Service has developed a business mail plan to guide it in meeting the business mail challenges of the future. The plan is based on four strategies: (1) make the business mailing process as easy as possible for customers by eliminating unnecessary rules, having highly trained staff, and making maximum use of technology in the verification, acceptance, and payment processes; (2) lower the operational costs of verifying a customer’s eligibility for discounted rates by raising the customer’s preparation skills to a level that can be tested and certified; (3) retain and grow revenue through more work-sharing incentives and removal of pointless rules; and (4) obtain timely and accurate customer information during routine mail processing in order to use that information to better serve customers’ needs. The Service currently has about 30 different initiatives planned or in process that it believes will be helpful in achieving these four strategies. The seriousness of the control weaknesses we identified in our earlier work and the potential impact those weaknesses could have on Service revenue led us to make several recommendations to the Service in our 1996 report, which we believed were needed to improve business mail acceptance controls and minimize revenue losses. 
We recommended that the Service (1) use a risk-based approach for selecting mailings to receive presort verifications; (2) ensure that required presort verifications and supervisory reviews of those verifications are performed and documented; (3) provide supervisors and staff with updated procedures, training, and tools; (4) develop and use valid information for evaluating the adequacy of business mail acceptance controls, procedures, staffing, and training; and (5) develop methodologies for measuring systemwide revenue losses. To determine the actions taken by the Service in response to our 1996 recommendations, we discussed with Service officials changes made to the Service’s business mail acceptance controls and obtained and reviewed program procedures and guidelines for the new process. We also obtained information on future plans for operation of the business mail program and programwide performance information for fiscal year 1998—the latest available fiscal year data at the time of our review. To determine whether the changes to the business mail acceptance controls were working, we obtained programwide information related to staff training and contractor surveys of business mail operations that measure various business mail performance indicators. Further, we observed business mail acceptance control procedures that were being carried out by Service employees at eight business mail facilities. We judgmentally selected the business mail facilities shown in table 1 to provide geographic dispersion for our work. The results of our work cannot be projected to the Service’s other business mail processing facilities. Although we did not fully verify the accuracy of the program information the Service provided us, we verified some information by reviewing certain presort verification records and records of supervisory reviews of acceptance procedures at the locations we visited. 
According to Service officials, the types of facilities we visited accounted for most of the business mail revenue received by the Service in fiscal year 1998. Also, according to Service officials, these types of facilities are where the acceptance control process makes extensive use of computers and where most business mail is processed. We did not observe any business mailings at mail facilities that used a manual acceptance control process. At the eight business mail facilities we visited, we observed Service employees performing acceptance process tasks for 40 different mailings submitted by customers. These included (1) mailings with 10,000 or fewer mail pieces, (2) mailings with more than 10,000 mail pieces, (3) mailings that received presort verifications, and (4) mailings that received Automated Barcode Evaluator testing. We selected mailings according to the number of mail pieces because the Service considers mailings with more than 10,000 mail pieces to be high-risk mailings and those with 10,000 or fewer mail pieces low-risk mailings. We selected mailings that were to receive presort verifications and Automated Barcode Evaluator testing to observe mailings that were subjected to these specialized acceptance controls. The 40 mailings we observed totaled 761,399 mail pieces and $172,361 in postage. We compared acceptance control steps taken by business mail employees on these mailings to the steps specified in Service program guidelines. We obtained explanations from the employees for their acceptance decisions and compared their responses to requirements specified in program guidelines. Our observations were not intended to determine whether business mail clerks applied appropriate Service postage rates to various types of business mail. We reviewed a number of Inspection Service audit reports for the last several years relating to Service business mail operations. 
We conducted our review between March and October 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Postmaster General. The Service’s comments are discussed near the end of this report. In 1996, we reported that the Service’s random method of selecting business mailings for presort verifications may not result in the best use of the Service’s resources and said that the Service could better target its verification efforts based on risk by considering factors such as mailer histories and the postage value of mailings. We recommended that the Service make risk a prominent factor in deciding which mailings should receive a presort verification. The Service agreed, and with the help of a consultant, determined that mailings of more than 10,000 mail pieces posed the greatest risk to the Service of revenue loss. The Service structured its new acceptance controls so that a “one-pass” evaluation is made of mailings posing the least risk of revenue loss and a “two-pass” evaluation is made of mailings with the higher risk of revenue loss. This risk-based approach by the Service addresses the objective we intended by our 1996 recommendation in that fewer but more high-risk mailings are targeted for verification. Under the one-pass concept, low-risk mailings of 10,000 or fewer pieces would typically receive an evaluation that includes (1) verification that the customer has Service approval to make business mailings; (2) verification that the customer has funds on deposit in his or her Service postage account sufficient to cover the cost of the mailing; (3) visual checks of the accuracy of technical requirements, such as routing labels and zip codes; (4) verification of weight, mail-piece count, and postage accuracy; and (5) verification of barcodes if required by Service criteria. If a one-pass mailing passes this evaluation, it is accepted and entered into the mail stream. 
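The one-pass evaluation described above is essentially a sequence of gating checks, each of which can block acceptance. A minimal Python sketch of that logic (all field and function names are hypothetical illustrations, not the Service's actual system):

```python
# Illustrative sketch of the one-pass acceptance checks described above.
# All names are hypothetical; this is not the Service's actual software.

def one_pass_checks(mailing):
    """Return a list of problems found; an empty list means the
    mailing passes the evaluation and enters the mail stream."""
    problems = []
    if not mailing["has_mailing_approval"]:
        problems.append("no Service approval to make business mailings")
    if mailing["deposit_balance"] < mailing["postage_due"]:
        problems.append("insufficient funds on deposit for postage")
    if not mailing["labels_and_zips_ok"]:
        problems.append("routing labels or zip codes incorrect")
    if not mailing["weight_count_postage_ok"]:
        problems.append("weight, piece count, or postage inaccurate")
    if mailing["barcode_check_required"] and not mailing["barcodes_ok"]:
        problems.append("barcodes fail evaluation")
    return problems

result = one_pass_checks({
    "has_mailing_approval": True,
    "deposit_balance": 500.0,
    "postage_due": 172.36,
    "labels_and_zips_ok": True,
    "weight_count_postage_ok": True,
    "barcode_check_required": False,
    "barcodes_ok": True,
})
print(result)  # -> []  (no problems; mailing accepted)
```

As the report notes, any problems found must be fixed by the customer before processing continues, so a nonempty list here would halt acceptance rather than merely log a warning.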
However, the customer is required to fix any problems found during the evaluation before processing will continue. Also, if a customer leaves the business mail facility before the mailing has been physically inspected for compliance with technical requirements, such as the adequacy of routing labels and zip codes, any detected problems in these areas will cause the mailing to receive a presort verification. If the one-pass mailing passes the presort verification, it is to be accepted and entered into the mail stream. If the mailing fails the verification, the customer is to correct the problems or pay a higher postage rate before the mail is entered into the mail stream. Under the two-pass concept, high-risk mailings of more than 10,000 mail pieces receive the same initial evaluation as one-pass mailings. However, a presort verification will also be performed if the computer selects the mailing for such testing. The frequency of selection for presort verification testing depends upon whether the customer has consistently submitted well-prepared mailings. If the two-pass mailing passes the presort verification, it is to be accepted and entered into the mail stream. If the mailing fails the verification, the customer is to correct the problems or pay a higher postage rate before the mail is entered into the mail stream. Data obtained from the Service showed the following level of one-pass and two-pass mailing activity in fiscal year 1998. In 1996, we reported that 40 percent of the required presort verifications of business mailings that we reviewed were not performed and that many rejected mailings were resubmitted and accepted into the mail stream without proper corrections or postage. We recommended that the Service establish procedures to ensure that all required presort verifications and all reexaminations of failed mailings are performed and documented. 
In our most recent review, we found that the Service changed its requirements for presort verifications and incorporated the changes into a computer-controlled process that has integrated presort verification and documentation requirements. The computer-driven process for selecting high-risk mailings for presort verification, combined with limiting the authority to override the computer’s selection of a mailing for presort verification to supervisors, provides the Service greater assurance that required presort verifications are performed. However, the Service still has no systematic way of knowing whether all required verifications are performed. Business mail clerks are to begin processing a mailing by entering customer information into a computer. If the mailing is high-risk, the computer will determine whether a presort verification is required based on the customer’s prior mailing history, which is electronically accessed by the computer. As part of this process, the customer’s historical success at passing presort verifications is electronically compared to Service-developed criteria that determine the required frequency for presort verifications. If a presort verification is required, a code for the results of the verification process must be input to the computer to complete processing and to update the customer’s mailing history. If a customer has consistently submitted well-prepared high-risk mailings, the computer is to randomly select 1 out of every 30 mailings submitted by that customer for a presort verification. However, if for any reason the presort verification cannot be performed, a supervisor must approve the decision to override the presort verification requirement. We did not observe any supervisory overrides during our visits, but Service managers said that supervisory overrides do occur. 
According to these managers, a common reason for the overrides is that there is not enough time to do the presort verification and still meet tight mail processing schedules. Service managers also indicated that they monitor the reasons for supervisory overrides. In addition, if an override is approved, a required computer code is to be input that automatically schedules the customer’s next mailing for a presort verification. If a customer has a history of not submitting well-prepared mailings, the computer is to select each of the customer’s mailings for a presort verification. Service guidelines require that presort verification failures be documented with a specific code input to the computer. This code is designed to trigger the requirement for another presort verification when the failed mailing is resubmitted for acceptance processing. In addition, after the resubmitted mail is accepted, a computer code is to trigger a presort verification of the customer’s next mailing. At the eight locations that we visited, the required presort verifications were performed during our observations of the process at each location. However, we noted that in several Inspection Service audit reports, prepared in 1998 and 1999, there were indications that the required presort verifications were performed at some locations, but they were not always performed at others. These latter locations were primarily smaller facilities that handled relatively small volumes of business mail and were staffed by one or two employees with no supervisor assigned. Nevertheless, the Service has no systematic way of ensuring that all required verifications are being performed Servicewide. In 1996, we reported that business mail supervisors did less than 50 percent of the required reviews of presort verifications. 
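Taken together, the selection rules described above (verify 1 of every 30 mailings for customers with good histories, verify every mailing for customers with poor histories, force a verification after a failure or an approved override, and restrict overrides to supervisors) amount to a small decision procedure. The following Python sketch illustrates that logic only; all names are our own invention, not the Service's software:

```python
import random

def needs_presort_verification(history):
    """Decide whether a high-risk mailing (more than 10,000 pieces)
    requires a presort verification, per the rules described above.
    `history` is a hypothetical record of the customer's past mailings."""
    # A prior failure or an approved override forces the next verification.
    if history["last_mailing_failed"] or history["last_verification_overridden"]:
        return True
    # Customers with poor histories are verified on every mailing.
    if not history["consistently_well_prepared"]:
        return True
    # Well-prepared customers: 1 out of every 30 mailings, chosen at random.
    return random.randrange(30) == 0

def record_override(history, approved_by_supervisor):
    """Only a supervisor may override a required verification; an
    approved override schedules the customer's next mailing for one."""
    if not approved_by_supervisor:
        raise PermissionError("only a supervisor may override a verification")
    history["last_verification_overridden"] = True
```

Note how the design is self-correcting: every path that skips a verification (a supervisory override) or reveals a problem (a failed verification) guarantees that the customer's next mailing is verified, which is what gives the Service greater assurance than clerk-initiated overrides did.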
We recommended that the Service establish procedures to ensure that all required supervisory reviews of presort verifications are being performed and documented. According to the Service, to strengthen the supervisory review process, in November 1996, it implemented new requirements for supervisory review of business mail acceptance procedures. Supervisors are now required to review all of the mail acceptance procedures performed by business mail clerks on four mailings per week. However, for the period we reviewed, supervisors at the eight locations we visited had not performed the four required reviews per week. Specifically, we found that the average number of reviews performed ranged from none to about three per week. Reasons that business mail facility managers and supervisors provided for not doing all of the required supervisory reviews included sickness and/or temporary reassignment of supervisors; not enough time; and a view that, because the reviews that were done had not shown any significant problems, further reviews were unnecessary. We brought this issue of required supervisory reviews not being done to the attention of Service headquarters officials. They stated that they consider these reviews very important quality-control checks; and on June 24, 1999, the Manager, Business Mail Acceptance, notified all business mail facility managers that required supervisory reviews are to be performed. Nevertheless, the fact that supervisors were still not doing all of the required reviews again points out the need, as we recommended in our 1996 report, for the Service to develop a process for ensuring that all required supervisory reviews are being performed. In our 1996 report, we pointed out several business mail acceptance control procedures that were not being performed as intended by Service employees. 
We recommended that the Service ensure that business mail control procedures are updated and business mail supervisors and staff are provided with the training and tools needed to properly verify whether business mail is eligible for postage discounts. In our latest review, we found that the Service had updated business mail acceptance control procedures, provided training designed to help ensure that acceptance work is done correctly, and made tools available to its employees to help them determine customers’ eligibility for discounted postage rates. However, Inspection Service audits disclosed that some business mail acceptance-unit personnel needed additional training in certain aspects of the business mail acceptance process. Our 1996 report highlighted a control weakness that allowed Service employees to override the requirement to perform a presort verification on as much as 40 percent of all required presort verifications. Specifically, under the Service’s old control system, business mail clerks often would override the computer-generated requirement to perform a presort verification. According to the Service, business mail clerks often would not do the required verifications because they felt pressured to complete the acceptance process and release the mail into the mail stream to meet tight processing schedules. Managers and supervisors in the business mail units did not exercise any oversight of this practice. The Service dealt with this problem by eliminating business mail clerks’ authority to make override decisions. Now, Service procedures provide that only a supervisor can approve the override of a computer-generated notice to perform a presort verification. Further, supervisory override decisions are to be identified in monthly reports to the business mail facility manager. Additionally, we reviewed another monthly report generated at Service headquarters that showed the total number of overrides at business mail facilities nationwide. 
The Service official who uses this report told us that it is very helpful in preventing abuse of the override authority. She provided us with a report for fiscal year 1999 that showed 16,702 approved overrides nationwide out of 203,595 presort verifications that were required during that fiscal year. Thus, overrides averaged about 8 percent for fiscal year 1999 compared with the average of 40 percent that we reported in 1996. Service officials said that to help control overrides, they watch for individual units with sudden increases in the number of overrides and then determine and evaluate the causes of the overrides. A Service official stated that sudden increases in overrides sometimes happen because a new employee may not be following correct procedures. Other causes can be unusual operational problems, such as sudden illnesses of business mail clerks or supervisors. Our 1996 report noted that the Service had an initial training course for new business mail clerks assigned to business mail acceptance units but had no specialized training for supervisors. Current Service guidelines call for managers, supervisors, specialists, and analysts to be provided 16 hours of management skills training, 16 hours of technical training, and 8 hours of interpersonal training per year. In addition to the initial training course for new business mail clerks mentioned above, all business mail clerks are to receive 16 hours of technical training and 8 hours of interpersonal training per year. The Service reportedly tracks the technical and interpersonal training completed by business mail acceptance-unit employees and provides individual, business mail acceptance-unit, and postal district incentive awards for completion of the training. Table 4 shows the percentage of business mail acceptance-unit employees who, according to the Service, completed their fiscal year 1999 training by June 30, 1999.
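The override rate discussed earlier is straightforward arithmetic on the figures in the Service's fiscal year 1999 report; a minimal sketch follows (the variable names are ours, not the Service's):

```python
# Override rate implied by the figures cited in the report:
# 16,702 approved overrides out of 203,595 required presort
# verifications in fiscal year 1999.
overrides = 16_702
required_verifications = 203_595

rate = overrides / required_verifications
print(f"FY1999 override rate: {rate:.1%}")  # prints "FY1999 override rate: 8.2%"
```

At roughly 8.2 percent, the computed figure is consistent with the report's "about 8 percent," about one-fifth of the 40 percent rate reported in 1996.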
Notwithstanding this information on training the Service provided to business mail clerks, managers, and supervisors in fiscal year 1999, several 1998 Inspection Service reports pointed out that some business mail acceptance-unit employees needed additional training in various aspects of the business mail acceptance process. Again, as was the case with its finding that the required verifications were not always performed, the Inspection Service findings about the need for additional training primarily involved employees at smaller postal locations (in terms of the volume of business mail processed). Nevertheless, it is uncertain whether the Service’s current training program is sufficient or whether business mail acceptance-unit employees need additional training because Servicewide information related to additional training needs of business mail acceptance-unit employees is not available. Our 1996 report noted that although the Service had granted postage discounts to business customers since 1988, it had been slow to provide its employees the tools necessary to ensure that accepted business mail meets Service standards for allowing discounts. The Service has since taken actions to provide its employees with tools, such as the Automated Barcode Evaluator machines, to help make efficient and objective business mail acceptance determinations. In May 1998, the Service began using Automated Barcode Evaluator machines as part of the business mail acceptance process. Service information shows that 258 machines have been placed in the larger business mail facilities across the country. Information obtained from the Service shows that from May 1998 through September 1999 over 291,000 mailings had been barcode tested, and almost $2.2 million in additional postage had been collected when testing showed that barcode requirements were not met. 
According to the Service, making maximum use of technology to accept and verify business mail is one of the four strategies the Service has established for improving its overall operations. In keeping with this strategy, according to Service officials, 500 portable barcode verifiers will be provided during fiscal year 2000 to business mail clerks at 250 business mail facilities across the country. According to the Service, these verifiers should enable business mail clerks to objectively check the accuracy of barcoded sack and tray labels on customers’ mailings. If the verifiers work effectively, they should reduce the number of mail containers incorrectly routed, a condition that slows delivery of the mail and adversely affects operational efficiency. According to the Service, an additional 180 portable barcode verifiers will be used to assist customers in improving the quality of their mail barcodes. In 1996, we reported that the Service lacked key data needed to assess the adequacy of its business mail acceptance controls and related risks. Specifically, we said that the Service lacked information on the extent to which improperly prepared mailings were entering the mail stream at reduced postage rates and the amount of rework required by the Service to correctly process and deliver this mail. We recommended that the Service develop and use valid information for evaluating the adequacy of business mail acceptance controls, procedures, staffing, and training. Our most recent work disclosed that the Service had developed two major sources of program information for evaluating business mail acceptance controls and program operations. The first source is the Enterprise Information System. This is an on-line report of acceptance processing results that is available to Service managers locally, regionally, and nationally. 
This system presents the results of business mail acceptance activities for a rolling 14 accounting periods and can be viewed in summary or detail form for any organizational unit as small as the individual business mail unit, or as large as a headquarters-level roll up. Some of the information available in this system includes the following items:

- total number of business mailings accepted;
- total business mail volume;
- total business mail revenue;
- total number of one-pass and two-pass presort verifications performed;
- total number of one-pass and two-pass presort verifications with an unacceptable number of errors;
- total number of supervisor overrides of computer-specified presorts;
- cost avoided on presort verifications with an unacceptable level of errors; and
- additional postage collected as a result of acceptance processing controls.

Tables 2 and 3 in this report show examples of information available from the Enterprise Information System. The second source of program information is the Business Mail Proficiency Program that began in fiscal year 1998. This program has two components. First, information to measure proficiency is gathered through a Mystery Caller program. Under this program, Service contractor personnel are to make about 250 telephone calls each quarter to business mail facilities in each Service district and ask Service employees technical questions that a business mail customer would likely ask. The second component is a survey of business mail customers. The survey, administered by Gallup, asks business mail customers their opinion on how well their needs were met on their most recent visit to a specific business mail facility. Information provided by the Service showed that the results from the Mystery Caller program and the survey are transformed into quarterly scores measuring each business mail facility’s performance in (1) technical knowledge, (2) helpfulness, (3) consistency, and (4) facility appearance.
The results are to be used to modify the training curriculum for each facility’s staff to emphasize areas where the employees did not score as well as they should. For example, the Denver business mail facility manager said that fiscal year 1999 scores indicated that the Denver facility needed to emphasize training in 7 of 13 technical areas measured by the Mystery Caller program. She provided us data showing that these areas related primarily to eligibility requirements and acceptance standards for various categories of business mail. We did not determine whether the Mystery Caller program measures employee technical proficiency in any of the same technical areas where the Inspection Service found that additional training for some business mail acceptance employees was needed or whether the additional training needs identified by the Inspection Service have been addressed by the Service. Our 1996 report noted that the Service acknowledged it lacked needed information on the extent of revenue losses associated with accepting improperly prepared business mailings. Our report recommended that the Service measure systemwide revenue losses as a basis for judging whether acceptance controls were working to prevent such losses. Rather than develop a methodology to determine systemwide losses, the Service decided to use its Revenue Assurance group to identify systemwide “opportunities” to improve revenue protection processes and to see that the Service is properly compensated for all its products and/or services. This approach by the Service is intended to provide it with certain information on revenue losses, which is one of the objectives that we had in mind when we made our recommendation in 1996. The Revenue Assurance group was established in 1994 and has five goals:

- identify services that the Service has provided without collection of proper postage or fees;
- protect future revenue by improving processes;
- ensure compliance with current policies and regulations;
- promote revenue protection awareness; and
- communicate revenue awareness to customers and Service employees.

According to the Service, as a result of using this approach, it collected $26.5 million in additional revenue at business mail units in fiscal year 1998 and an additional $30.1 million in fiscal year 1999. Revenue Assurance officials provided us information indicating that they had identified causes and taken corrective actions for the problems they found in the business mail acceptance process. For example, in 1997, the Revenue Assurance group reviewed government agencies’ permit mailings and identified numerous mailings for which the proper postage was not collected. As a result, an updated official mail handbook and data entry users guide were distributed, and employees in each district received training. According to the Service, the efforts of the Revenue Assurance group, related to the changes made in handling government agencies’ permit mail at business mail acceptance locations, reduced the Service’s losses from $20.2 million in fiscal year 1998 to $7.4 million in fiscal year 1999. Since our last review, the Service has changed its business mail acceptance process generally along the lines that we recommended in our 1996 report, and the Service’s business mail acceptance control procedures, overall, appear to have improved. However, the Service still lacked comprehensive information on how well its business mail acceptance controls are working and thus cannot ensure that it is collecting all the revenue due from its business mail operations. We found that required supervisory reviews were not always being done at the business mail acceptance units we visited, while the Inspection Service found that required presort verifications were not being done at some business mail acceptance units and that employees at some business mail acceptance units needed additional training.
The Service has directed its managers to ensure that the required supervisory reviews are performed; however, the Service does not have assurance that these reviews are being performed. Moreover, the findings of the Inspection Service further indicate that the Service does not have assurance that its controls are always working to prevent improperly prepared business mail from entering the mail stream at reduced postage rates at thousands of Service field locations. Information providing such assurances is not available. Accordingly, we do not believe that the Service has fully addressed our 1996 recommendations that it ensure that required supervisory reviews are performed and that it develop information for evaluating the adequacy of its business mail acceptance controls. We recommend that the Postmaster General direct appropriate Service officials to develop and implement approaches for providing reasonable assurance that (1) required supervisory reviews of presort verifications are done and (2) business mail acceptance controls are working as intended to prevent improperly prepared mailings from entering the mail stream at reduced postage rates and to minimize the rework required by the Service to correctly process and deliver such mail. We requested comments on a draft of this report from the Postmaster General. On October 21, 1999, we received oral comments from the Service’s Manager of Business Mail Acceptance. He stated that he generally concurred with the information and the recommendations included in the draft report. Concerning our second recommendation for reasonable assurance that business mail acceptance controls are working as intended, he said that development of a process to provide feedback on the amount of improperly prepared business mail that is being accepted for processing is a good idea and that he believes doing this in a cost-effective manner will be challenging.
He also provided clarification on several technical matters, which we have included in this report as appropriate. In addition, on October 21, 1999, the Service’s Manager of Revenue Assurance provided us with updated information on the amount of additional revenue from business mail operations his group had collected in fiscal year 1999. We revised our report to reflect that information. We are sending copies of this report to Representative Chaka Fattah, Ranking Minority Member of your Subcommittee; Senator Thad Cochran, Chairman, and Senator Daniel Akaka, Ranking Minority Member, Subcommittee on International Security, Proliferation and Federal Services, Senate Committee on Governmental Affairs; William J. Henderson, Postmaster General; and Karla Corcoran, Postal Service Inspector General. We will also make copies available to others upon request. If you have any questions about this report, please call me at (202) 512-8387. Key contributors to this report were Sherrill H. Johnson and Billy W. Scott. Bernard L. Ungar Director, Government Business Operations Issues
Pursuant to a congressional request, GAO provided information on the Postal Service's acceptance controls for business mail, focusing on whether the Service had made the changes GAO recommended previously and whether those changes were working. GAO noted that: (1) the Service made changes to its controls over the acceptance of business mail; (2) those changes are generally along the lines that GAO recommended in 1996 and its controls overall appear to have improved; (3) however, the Service lacks information on how well its controls are working Servicewide and thus cannot ensure that it is collecting all the revenue due from its business mail operations; (4) since GAO's 1996 report, the Service has: (a) developed and implemented a risk-based approach for verifying the eligibility of high-risk customers to receive discounted postage rates; (b) made changes to its presort verification, supervisory review, and documentation requirements to help provide more assurance that these functions are performed; (c) changed its business mail acceptance-control procedures and training guidelines to help supervisors and staff perform their tasks properly and made key tools available to help them more accurately determine customers' eligibility for specific postage discounts; (d) developed information sources for managers to use in evaluating business mail acceptance controls, procedures, staffing, and training; and (e) incorporated reviews of its business mail operations into a Servicewide effort to protect revenue and obtain all compensation due for its services and products; (5) on the basis of GAO's evaluation of the Service's new business mail acceptance control process, discussions with Service officials, observations of acceptance procedures at eight business mail facilities, and review of Postal Inspection Service audit reports, GAO believes that the changes the Service made to its business mail procedures and operations help to prevent revenue losses; (6) however, GAO 
could not determine whether all of these changes are working Servicewide because data needed to make such a determination were not available; (7) neither the results of GAO's work nor the work of the Inspection Service that GAO reviewed can be projected to the universe of Service business mail facilities; and (8) however, there is sufficient evidence that the Service has not fully addressed GAO's 1996 recommendations that it ensure that required supervisory reviews are performed and that it develop information for evaluating the adequacy of its business mail acceptance controls.
The services and combatant commands both have responsibilities for ensuring servicemembers are trained to carry out their assigned missions. As a result, both the services and combatant commands have developed specific training requirements. Combatant commanders and service secretaries both have responsibilities related to ensuring the preparedness of forces that are assigned to the combatant commands. Under Title 10 of the U.S. Code, the commander of a combatant command is directly responsible for the preparedness of the command to carry out its assigned missions. In addition, according to Title 10 of the U.S. Code, each service secretary is responsible for training their forces to fulfill the current and future operational requirements of the combatant commands. In addition, the Office of the Secretary of Defense has issued guidance for managing and developing training for servicemembers. Specifically, DOD issued one directive stating that the services are responsible for developing service training, doctrine, procedures, tactics, and techniques, and another requiring that training resemble the conditions of actual operations and be responsive to the needs of the combatant commanders. According to Joint Publication 1, unit commanders are responsible for the training and readiness of their units. Army and Marine Corps guidance also assigns unit commanders responsibility for certifying that their units have completed all required training and are prepared to deploy. Specifically, Army Regulation 350-1 states that unit commanders are responsible for the training proficiency of their unit and, when required, for certifying that training has been conducted to standard and within prescribed time periods. In addition, a Department of the Army Executive Order states that, for the reserve component, unit commanders, in concert with service component commands, certify completion of training and the service component command—the Army National Guard or U.S. 
Army Reserve—validates units for deployment. Administrative Message 740/07 states that coordination of predeployment training is the responsibility of the unit commander and all questions concerning the training should be vetted through the commander or his operations element. Further, unit commanders validate that their units are certified for deployment, doing so through a certification message that documents the extent to which deploying Marines have successfully completed predeployment training. Headquarters, Department of the Army Executive Order 150-08, Reserve Component Deployment Expeditionary Force Pre- and Post-Mobilization Training Strategy (March 2008). Marine Corps Order 3502.6, Marine Corps Force Generation Process (Jan. 26, 2010). Combatant commanders have wide-reaching authority over assigned forces. In this capacity, CENTCOM has established baseline theater entry requirements that include training tasks that all individuals must complete before deploying to the CENTCOM area of operations. Specifically, these CENTCOM training requirements include minimum training tasks for both units and individuals. Required individual tasks include, but are not limited to, basic marksmanship and weapons qualification, high-mobility multipurpose wheeled vehicle (HMMWV) and mine resistant ambush protected (MRAP) vehicle egress assistance training, non-lethal weapons usage, first aid, counter-improvised explosive device training, and a number of briefings including rules of engagement. The services have established combat training requirements that their servicemembers must complete at various points throughout their careers. During initial entry training, recruits are trained on service tasks and skills, including basic military tactics, weapons training, and marksmanship. 
In addition, the services have annual training requirements that are focused on tasks such as crew-served weapons training, reacting to chemical and biological attacks, and offensive and defensive tactics. Prior to deploying overseas, servicemembers must also complete a set of service-directed predeployment training requirements. These predeployment requirements incorporate the combatant commander’s requirements for the area where the forces will be deployed. U.S. Army Forces Command and the Commandant of the Marine Corps have both issued training requirements for forces deploying to the CENTCOM area of operations or in support of operations in Iraq and Afghanistan. These documents also require that units complete a final collective event prior to deployment to demonstrate proficiency in collective tasks. Lessons learned are defined as results from an evaluation or observation of an implemented corrective action that produced an improved performance or increased capability. The primary vehicle for formally collecting and disseminating lessons learned information is the after action report. Army and Marine Corps guidance require that units submit after action reports to the services’ respective lessons learned centers. Army Regulation 11-33 established the Army Lessons Learned Program to create an information sharing culture and a system for collecting, analyzing, disseminating, integrating, and archiving new concepts, tactics, techniques, and procedures. The regulation further assigned the Center for Army Lessons Learned (CALL) primary responsibility for the Army Lessons Learned Program. The Marine Corps established its Marine Corps Center for Lessons Learned (MCCLL) to provide a relevant, responsive source of institutional knowledge that facilitates rapid adaptation of lessons into the operating forces and supporting establishments. The Army and Marine Corps have both formal and informal approaches to collect and disseminate lessons learned information. 
Their formal approaches often rely on a wide network of MCCLL and CALL liaison officers at training centers and in Iraq and Afghanistan, but the centers also publish relevant information on their Web sites to make it widely available. The informal networks based on personal relationships between unit commanders, trainers, or individual soldiers and marines have also facilitated the sharing of lessons learned information. GAO has previously reported on combat skills training provided to nonstandard forces. In May 2008, we reported that the Air Force and Navy waived CENTCOM established training requirements without consistently coordinating with the command, so CENTCOM lacked full visibility over the extent to which all of its forces were meeting training requirements. We recommended that the Secretary of Defense direct the Office of the Secretary of Defense, Personnel and Readiness, in conjunction with the Chairman of the Joint Chiefs of Staff, develop and issue a policy to guide the training and use of nonstandard forces, to include training waiver responsibilities and procedures. DOD agreed with our recommendation, stating that it had work underway to ensure that the necessary guidance was in place for effective training of nonstandard forces. However, as of February 2010, it had not issued such guidance. Although Army and Marine Corps support forces undergo significant training, they may not consistently or successfully complete all required training tasks prior to deploying. Both CENTCOM and the services have issued predeployment training requirements. However, some of CENTCOM’s training requirements lack associated conditions and standards, and confusion exists over which forces the requirements apply to. 
In addition, the Army and Marine Corps have not included certain CENTCOM required tasks in their predeployment training requirements, and unit commanders can certify their units for deployment even if all the required individual and collective training tasks have not been successfully completed. The services provide combat skills training to their servicemembers, including support forces, at various points throughout their careers. During initial entry training, recruits are trained on service tasks and skills, including basic military tactics, weapons training, and marksmanship. In addition, servicemembers participate in annual training that is focused on tasks such as crew-served weapons training, reacting to chemical and biological attacks, and offensive and defensive tactics. Soldiers and marines also participate in combat skills training prior to deploying for any overseas operations. As a result, the predeployment combat skills training that support unit personnel receive should be viewed as a significant piece of their training to operate in an asymmetric environment, but not as their only training to operate in that environment. CENTCOM has issued a list of training tasks that all individuals assigned to its area of responsibility, including support unit personnel, must complete before deploying in support of ongoing operations in Iraq and Afghanistan. While the CENTCOM training requirements outline tasks that must be trained, the command does not always clearly define the conditions and standards to which all of the tasks should be trained. Task conditions identify all equipment, tools, materials, references, job aids, and supporting personnel required to perform the task, while standards indicate the basis for judging effectiveness of task performance. For some training tasks, CENTCOM includes specific guidance. 
For example, weapons qualification requirements include a detailed discussion of when the qualification must take place, equipment that must be worn, and range distances. For some training tasks, however, CENTCOM does not provide any conditions or standards. For example, as noted above, CENTCOM requires that all deploying forces complete HMMWV rollover training, but it does not specify how the training should be conducted. Consequently, service training has varied within and among the Army and Marine Corps. At one Marine Corps site, training officials explained that HMMWV rollover training could be completed in less than a half hour. On the other hand, trainers at one Army training site noted that their HMMWV rollover training consisted of a full day of training that included a classroom overview and hands-on practice in a simulator with both day and night scenarios, pyrotechnics to simulate improvised explosive devices, and the incorporation of casualty evacuation procedures. For other training tasks, the CENTCOM requirements contain only general guidance on training conditions. For example, for some tasks such as first aid and improvised explosive device training, CENTCOM requires that classroom training be followed up with practical application during field training that mimics the harsh, chaotic, and stressful conditions servicemembers encounter in the CENTCOM area of operations. However, the requirements do not identify the materials or training aids to be used in conducting the training and they do not indicate the standard for successfully completing the training. While service officials acknowledged that, as outlined in Title 10 of the U.S. Code, it is their responsibility to train servicemembers, they stated that CENTCOM’s list of minimum theater entry training tasks was unclear, which resulted in varying service interpretations of the tasks. 
Furthermore, CENTCOM training requirements are communicated to the services in a document that also outlines training requirements for joint sourced forces. Service officials have expressed confusion over these training requirements and the extent to which they apply to all forces given that the tasks are listed in a document that focuses primarily on unit training requirements for joint sourced forces. Service officials reported that changes to training requirements have also added to the confusion over training requirements and priorities. While the latest set of CENTCOM requirements contained in the joint sourced forces document was issued on May 7, 2009, ground commanders have issued several requirements since then. For example, in January 2010, the Commander, U.S. Forces-Afghanistan, issued an order that contained additional training requirements for all forces deploying to Afghanistan. However, CENTCOM officials said that these Afghanistan-specific requirements had not yet been validated. When CENTCOM validates new requirements it promulgates them in several different ways, including in updates to the training requirements contained in the joint sourced forces document, in individual requests for forces, or by CENTCOM messages. While the Army and Marine Corps have provided most of the CENTCOM required training, in some cases, they have not provided training on the specific tasks called for by CENTCOM. For example, neither service has provided MRAP vehicle rollover training to all of their support forces. MRAP vehicle rollover training has been identified as a key combat skill for deploying forces. MRAP vehicles have much larger profiles and weights than the vehicles they replaced in theater, and as a result, pose a greater risk of tip or rollover when negotiating slopes, trenches, ditches, and other obstacles. Further, rollover risks are higher in Afghanistan due to uneven terrain and sub-par road conditions. 
A November 2009 DOD study on MRAP vehicle rollovers noted that since 2007, 178 MRAP vehicle mishaps involved some type of rollover that resulted in a total of 215 injuries and 11 fatalities. The study recommended more practice on rollover drills, and CENTCOM has required this training for all deploying forces. According to Marine Corps officials, the Marine Corps is prioritizing MRAP vehicle rollover training, and current Marine Corps guidance requires this training only for marines expected to utilize MRAP vehicles. However, use of these vehicles in theater has been increasing, and officials at I Marine Expeditionary Force explained that they are trying to train deploying forces to meet the MRAP vehicle rollover training requirement. A rollover trainer was originally scheduled to arrive at their training area in February 2010, but the delivery has been delayed and there is currently no projected delivery date. Army officials explained that they have attempted to meet the CENTCOM requirement, but that a lack of MRAP rollover trainers at the Army’s training bases in the United States has prevented them from fully training all forces on this task prior to deployment. In the meantime, some support forces are getting required training after they deploy, but Army officials were unable to confirm whether all forces were getting the required training. Moreover, neither the Army nor the Marine Corps has provided non-lethal weapons training to all deploying support forces. CENTCOM requires that all individuals deploying to its area of responsibility complete training in non-lethal weapons usage, planning, and understanding of non-lethal weapons capability sets. 
DOD reported in December 2009 that operational experience dictates the need for forces to be trained in non-lethal weapons and that current operations have highlighted the imperative for the discriminate use of force to minimize civilian casualties and the integral role that non-lethal weapons capabilities provide in achieving that objective. In that report, DOD noted that non-lethal weapons training has been mandated by CENTCOM for all deploying forces and that non-lethal weapons training must be further integrated into service training. Further, GAO has previously reported that DOD needed to provide clearer weapons employment guidance for non-lethal weapons and incorporate this guidance into training curricula. Due to the confusion over which forces CENTCOM's joint sourced training requirements apply to, Marine Corps officials explained that they do not believe the non-lethal weapons training requirement applies to them and do not require this training. The Army requires non-lethal weapons training only for combat arms units. Army officials explained that they do not have sufficient resources to train all deploying forces, including support forces, on non-lethal weapons, but they have not sought formal waivers for this task. According to Joint Publication 1, unit commanders are responsible to their respective Service Chiefs for the training and readiness of their units. Service guidance emphasizes this responsibility, assigning unit commanders responsibility for the coordination and completion of predeployment training and for validating that servicemembers are certified for deployment. Before forces deploy, Army and Marine Corps guidance requires that units complete a final collective training event. These events can vary based on unit type, assigned mission, and the theater of operations and provide an opportunity for the unit to demonstrate proficiency in collective tasks.
While service guidance requires that units undergo a final collective training event, the guidance does not specifically require that units successfully complete the training before commanders can certify their units for deployment. Army and Marine Corps officials explained that if a support unit does not demonstrate combat skills proficiency during the final event, when and where remediation is to occur is left to the discretion of the individual unit commander and can be completed in theater after deploying. For example, a Marine Corps combat logistics battalion that deployed in January 2010 was assessed fully trained in its logistics mission, but not proficient in basic warrior tasks during its final collective training event at Exercise Mojave Viper. Specifically, the unit was not proficient in 15 of 16 warrior tasks, including reacting to ambush, escalation of force, individual continuing actions, and casualty evacuation procedures. The Marine Corps logistics training officer who conducts the final unit after action reviews for combat logistics battalions explained that poor ratings on basic warrior skills were not uncommon for support units during their final collective training event. While the unit conducted remedial training on casualty evacuation procedures prior to deployment, it did not conduct remedial training in other areas: the unit had only 15 days to complete both the required training it had been unable to accomplish prior to Exercise Mojave Viper and the remedial training, and it deployed on time. Service officials explained that it is the responsibility of unit commanders to exercise judgment in assessing whether the unit has the collective skills needed to accomplish its mission. However, without visibility over the completion of remediation, Army and Marine Corps support forces may not successfully complete all CENTCOM- or service-required training tasks prior to deploying.
The Army and Marine Corps take steps to document the completion of required combat skills training tasks, but face inconsistencies in the way the services track completion of training. While the Army has a service-wide system of record for tracking the completion of training requirements, the system is not being fully utilized. Furthermore, the Marine Corps lacks a service-wide system for tracking the completion of training requirements. Instead, both services rely on paper rosters and stand-alone spreadsheets and databases to track training completion. In addition, even though CENTCOM requires that all forces deploying to its area of responsibility complete a set of required training tasks, the command lacks a clearly defined process for waiving individual training requirements if they cannot be met. According to Joint Publication 1, unit commanders are responsible to their respective Service Chiefs for the training and readiness of their units. Service guidance emphasizes this responsibility, assigning unit commanders responsibility for coordinating and completing predeployment training and for validating that servicemembers are ready for deployment. Higher level decision-makers, including the higher headquarters elements of the units in training, are then responsible for validating the unit commanders' assessments. The Army and Marine Corps take slightly different approaches to validating units for deployment, particularly as it applies to the Army's reserve component. While the Army and Marine Corps active components rely heavily on unit commanders to validate units and on higher headquarters elements, such as brigade and division commanders for the Army's active component and the Marine Logistics Groups and Marine Expeditionary Forces for the Marine Corps, to validate the commanders' assessments, the Army's reserve component relies heavily on a validation board that convenes at the completion of a unit's training at a mobilization training center.
However, according to Army officials, in the end, the final decision is largely based on individual unit commanders’ assessments of the readiness of their units. While the Army issued guidance requiring tracking of training completion through a servicewide system, the system has not been fully utilized. In December 2009, the Army updated a training regulation and required that all individual and collective training tasks be documented for soldiers through the Digital Training Management System (DTMS) in order to better standardize training. Army units were required to report completion of certain requirements, such as suicide prevention classes and the Army physical fitness test tasks, in DTMS prior to the revision of this regulation. However, the revised regulation designates DTMS as the only authorized automated system for managing unit training and requires units to track each individual soldier’s completion of all required training tasks, to include all predeployment individual and collective training. The regulation was effective as of January 18, 2010, and states that DTMS will be able to provide units with the ability to plan, resource, and manage unit and individual training. However, as of February 2010, the system was not fully operational, and while active component units were able to enter all of their data into DTMS, reserve component units were not yet able to do so because of a lack of interfaces among existing tracking systems and DTMS. The Army has not yet developed a detailed schedule with milestones and resource requirements for fully developing the capability for reserve component units to input data. Neither has it established milestones for active and reserve component units to enter data into the system. 
Furthermore, the guidance does not assign responsibility for ensuring compliance and does not make it clear whether previously completed training needs to be entered into the system or only training that is completed after the January 18, 2010, implementation date. The Army's active and reserve components have both begun using DTMS, but DTMS is not being fully or consistently used by either component. U.S. Army Forces Command officials reported that the capabilities of DTMS are fully operational among the active component, but that units have not consistently used the system. During our discussions with commanders from four active component battalions in February 2010, we found that the system, while operational, was not being fully utilized. We noted that the battalions used DTMS to different degrees. Specifically, two commanders said that their battalions relied on DTMS to track training schedules and some tasks, such as weapons qualification and physical fitness, but they said that their battalions did not track completion of all required tasks down to the individual soldier level. The other two battalion commanders noted that they did not use DTMS to track completion of any training tasks. Overall, none of the four battalions used DTMS the way the Army intended, but all four expressed interest in incorporating the system into how they track training. First Army officials reported that DTMS is not fully operational among the reserve component. Army officials reported that not all of the individual systems the reserve component used to track completion of training were interchangeable with DTMS, and as such, the system was not fully operational. Moreover, in our discussions with unit commanders from five Army Reserve units and one National Guard unit in November 2009, we noted that the system was not being utilized.
In fact, none of those commanders were familiar with DTMS despite the fact that the Army had required the entry of suicide prevention classes and the Army physical fitness test tasks into DTMS by September 2009. Instead of using DTMS, Army support units rely on tools such as paper rosters and stand-alone spreadsheets and databases to track completion of individual and unit training, and the tools used are not consistent among units and commands. For the reserve component, First Army has established an Excel spreadsheet, referred to as the Commander’s Training Tool, to track completion of individual training tasks. According to officials, the tool, intended to serve as an “in-lieu-of” system until DTMS reached full operational capability, is used as a model for tracking systems at the individual mobilization training centers. Specifically, officials at one mobilization training center told us that they had developed an individualized tracking system based on the Commander’s Training Tool, but had tailored the system to meet the needs of the individual command. Within the active component, unit commanders we spoke with noted that they also rely on tools such as paper rosters and stand-alone spreadsheets and databases to track completion of individual and unit training at the battalion level and below, providing regular status updates to the brigade and division commanders. Reliance on various inconsistent tracking mechanisms instead of the servicewide DTMS limits the visibility unit commanders have over completion of required training tasks. The Marine Corps also uses inconsistent approaches to track completion of required training and relies instead on paper rosters and stand-alone spreadsheets for tracking. 
Specifically, 2nd Marine Logistics Group officials said that individual units are responsible for tracking completion of individual training and that this tracking is completed through large Excel spreadsheets, but that the information is regularly reviewed by the Marine Logistics Group. A commander from a support unit within the 2nd Marine Logistics Group noted that training was tracked and reviewed using Excel spreadsheets. Further, the unit's operations officer noted that within the battalion, individual training is tracked at the company level, and once a week, the information is provided to the battalion operations officer, who then briefs the battalion commander on overall percentages of marines who have completed the required tasks. We also spoke with officials from the 1st Marine Logistics Group who noted that the individual units are responsible for tracking the completion of both individual and unit training requirements. While the 1st Marine Logistics Group provides units with a summary-level spreadsheet to report the status of unit training, the individual units themselves are responsible for tracking the completion of individual training, which the Marine Logistics Group does not track. Officials from the 1st Marine Logistics Group noted that unit operations officers have visibility over individuals and their respective training, and this information is rolled up and provided at a high level to the Commanding Officer. A commander of a support unit we spoke with noted that his unit used the Excel spreadsheet provided by the 1st Marine Logistics Group to track completion of individual training requirements, with individual tracking being done at the company level. Further, sometimes when marines transfer among units, documentation of completed training tasks is not provided to the receiving unit.
For example, a support battalion operations officer we spoke with noted that the battalion received many marines throughout the deployment process, but some marines arrived without documentation of the training they had previously completed. In the absence of a consistent approach to track completion of training tasks, the Marine Corps relies on inconsistent tracking mechanisms among individual units and commands. These inconsistent tools limit the visibility unit commanders have over completion of required training tasks, particularly when marines are transferred from one unit to another for deployment purposes. While CENTCOM has issued a consolidated list of minimum theater entry requirements for all individuals deploying to its area of responsibility, it has not issued overarching waiver guidance or established a formal process for waiving each of these requirements (e.g., basic marksmanship and weapons qualification, law of land warfare, and HMMWV and MRAP vehicle egress assistance training) in circumstances where the requirements are not going to be met. However, CENTCOM officials provided an example of a case where waiver requirements for one specific task were outlined. In September 2007, the command issued a message requiring HMMWV egress assistance training for all forces deploying to its area of responsibility. This requirements message included steps the services needed to take to waive the requirement in the event that the training could not be completed by 100 percent of the deploying personnel before deployment. However, a similar waiver process is not outlined for other required CENTCOM tasks. Officials from both the Army and Marine Corps noted that there are instances where servicemembers are not completing all of the required training. 
Specifically, when we spoke to unit commanders and unit training officers, we were told that some personnel were not meeting these individual training requirements and that units were not requesting formal waivers from CENTCOM or communicating this information to CENTCOM. For example, an operations officer from a Marine Corps combat logistics battalion reported that some of the unit's deploying marines would not complete their required individual training tasks, such as the CENTCOM-required MRAP vehicle egress training. Moreover, the commander of an active component Army support battalion noted that in validating his unit for deployment, he did not focus on completion of individual tasks, instead assessing the unit's ability to complete tasks collectively. As such, the unit commander's decision was not based on whether all individuals had completed all of the required individual training tasks. There is no clearly defined process for waiving these training requirements, and there is no clear or established method for the services to report to CENTCOM that some servicemembers are not completing CENTCOM's required training. As a result, CENTCOM cannot determine if additional training is required following arrival in theater. In May 2008, we reported that the Air Force and Navy implemented procedures for waiving CENTCOM-required training without fully coordinating with the CENTCOM headquarters office responsible for developing the training requirements. Specifically, we reported that Navy nonstandard forces that completed Navy combat skills training more than 90 days prior to their deployment would normally have to update their training by repeating the course, but that they could waive this requirement if they completed relevant combat skills training that significantly exceeded what they would have received in the Navy course. We further reported that the Air Force granted waivers for combat skills training on a case-by-case basis.
At the time, CENTCOM officials noted that the services had not consistently coordinated these waiver policies with their command. Therefore, CENTCOM did not have full visibility over the extent to which its assigned forces had met its established training requirements. At the time, we recommended that the Office of the Secretary of Defense develop a policy to guide the training and use of nonstandard forces, and that the policy include training waiver responsibilities and procedures. In February 2010, an official from the Office of the Secretary of Defense reported that the office planned to issue a revised policy on nonstandard forces by the end of the year and that the revised guidance would address the issue of granting waivers. Furthermore, during our review, we learned that CENTCOM's lack of visibility applies to a larger population of forces than just the Air Force and Navy nonstandard forces, instead applying to all forces deploying to the CENTCOM area of responsibility. The Army and Marine Corps have made significant changes to their combat skills training for support forces as a result of lessons learned, but the services have not uniformly applied lessons learned. Both the Army and Marine Corps require the collection of lessons learned information, and each service relies on formal and informal collection methods to obtain relevant information. While it can take time to incorporate lessons learned into service doctrine, service training facilities are often able to utilize lessons learned to adjust their training almost immediately. However, training facilities do not consistently share information obtained as a result of lessons learned, or the changes made to training as a result of lessons learned, with other facilities, resulting in servicemembers being trained inconsistently. As such, support forces have been deploying for similar missions with different training.
The Army and Marine Corps collect lessons learned information through both formal and informal processes, and they have made significant changes to their training and deployment preparations as a result of this information. Army and Marine Corps doctrine require the formal collection of lessons learned and designate after action reports as the primary vehicle for this formal collection of lessons learned information. Trainers and units noted that they prepare after action reports at several different times, including after final collective training exercises and during and after deployment. Depending on the complexity of the deficiency that is addressed in an after action report and the resources required to address the deficiency, it can sometimes take considerable time to see actions that result from formal after action reports. However, after action reports have resulted in changes to the way the services train and deploy their forces, as the following examples illustrate. In July 2009, the Marine Corps officially established and began training Female Engagement Teams, small detachments of female marines whose goal was to engage Afghan women. The concept of a Female Engagement Team was first introduced in February 2009 as part of a special operations mission in Afghanistan. An after action report emphasizing the need for forces to be organized and trained to engage Afghan women was submitted in response to an incident in May 2009, in which the enemy escaped dressed as women because male marines were not allowed to engage Afghan women. As a result, the Marine Corps expanded the use of the Female Engagement Team concept, developing an actual program and implementing a training plan. In December 2009, U.S. Forces-Afghanistan released a memorandum that emphasized the need for increased training and use of Female Engagement Teams. Prior to that time, the use of Female Engagement Teams was primarily a Marine Corps effort.
However, the memorandum stated that all services should create these teams, and since the memorandum was issued, officials noted that the Army has begun to assess how it can best meet the needs in theater for these teams with its available personnel. In November 2009, the 1st Marine Logistics Group established and conducted a new predeployment training course for support forces that focused on combat logistics patrols. The course was developed in response to at least two different units' after action reports, one submitted by a unit returning from Afghanistan and another submitted by a unit undergoing final predeployment training, which highlighted the need for leaders of support units to receive additional training and experience with combat patrols. The redeploying unit's after action report identified shortcomings in how support units conducting convoy missions outside of forward operating bases were trained, and the report from the unit undergoing final training identified deficiencies in the amount of time spent on training. The new 5-day course—the Combat Logistics Patrol Leaders Course—focuses on providing support units with the skills they need to conduct combat logistics patrols, which require support forces to leave protected areas where they can become targets for enemies, as opposed to simply conducting convoy missions inside protected forward operating bases. The services also rely on lessons collected through informal means when adjusting predeployment training. Informal collection methods include obtaining feedback from units currently deployed in Iraq and Afghanistan through informal discussions, observations made by trainers or deploying unit leaders during brief visits to theater, and informal conversations among personnel within service commands and training organizations.
Army and Marine Corps officials stated that there is regular communication between personnel who are deployed in theater and the personnel who are preparing to deploy to replace them. Furthermore, they said that the deployed personnel often provide vital information regarding the current conditions in Iraq and Afghanistan, which the deploying unit commander and trainers can use to make immediate adjustments to training. Much like changes made as a result of formal lessons learned, informally collected lessons have also resulted in changes to the way the services train and deploy their forces, as the following examples illustrate. An Army installation established an Individual Replacement Training program to provide individual replacement soldiers with the combat skills needed to join their parent units in theater. Army officials noted that approximately 2 years ago, certain units were tasked to train these individual replacements on a 4- to 5-month rotating basis. However, the units that conducted the training were unable to keep pace with the flow of individual replacements because of their high pace of operations. Based on feedback obtained from the units and observations by unit leadership, Army civilians were assigned responsibility for the training, which resulted in the Individual Replacement Training program. As of 2009, the Individual Replacement Training program had trained approximately 3,400 soldiers, and combat skills have been trained more consistently. Since improvised explosive devices are commonly used against military forces in Iraq and Afghanistan, training regarding the defeat of these devices is a CENTCOM predeployment training requirement and was cited as a key focus at the training facilities we visited. Officials we spoke with explained that improvised explosive devices pose a serious threat to military forces because the types of devices the enemies use constantly change.
While training facilities have incorporated the most recent improvised explosive device defeat tactics into their training based on information provided by the Joint Improvised Explosive Device Defeat Organization, they also obtain and immediately incorporate the tactics provided informally by individuals in theater. Trainers at the sites we visited told us that they had made adjustments to training based on both informal and formal lessons learned information that they had received. However, they also told us that they did not consistently share information about the adjustments they had made with other sites that were training forces on the same tasks, and even in cases where the information was shared, there were still some differences in the training that was being provided to deploying support forces. For example: One site significantly enhanced its HMMWV rollover training based on informal feedback. Specifically, the training was enhanced to include hands-on practice in a simulator with both day and night and land and water scenarios, as well as an emphasis on new vehicle features, such as the dual release seatbelts, when exiting the vehicle in an emergency. While trainers from this site provided information about these enhancements to some of their counterparts at other training facilities, HMMWV rollover training varies significantly from site to site. At one of the sites we visited, HMMWV rollover training consisted simply of a short demonstration. At one training site we visited, trainers were teaching Army Reserve support forces who had not been mobilized specific tactics for entering and clearing buildings, while other trainers at the same site were teaching soldiers who had been mobilized different tactics for the same task. Officials we spoke with stated that these differences in tactics are a result of a lack of sharing of information among trainers. 
Specifically, the First Army trainers who were training soldiers after mobilization were not consistently sharing information with U.S. Army Reserve trainers who were training soldiers prior to mobilization. Since one of the primary purposes for conducting repetitive training is to develop an intuitive response to certain circumstances, repetitive training that employs different tactics may not be as effective as repetitive training that uses consistent tactics. Although officials at the training facilities we visited noted that they have made efforts to share some of the information obtained and subsequent changes made as a result of lessons learned with their counterparts at other training facilities, the sharing has been inconsistent. According to a Chairman of the Joint Chiefs of Staff Instruction, organizations participating in the joint lessons learned program are to coordinate activities and collaboratively exchange observations, findings, and recommendations to the maximum extent possible. While the services have formal and informal means to facilitate the sharing of lessons learned information, trainers at the various training sites are not consistently sharing information about the changes they have made to their training programs. As a result, servicemembers are trained inconsistently, and units that are deploying for similar missions sometimes receive different types and amounts of training. U.S. forces deployed to CENTCOM's area of responsibility, including support forces, are operating in an environment that lacks clear distinctions between the front lines and rear support areas. As a result, support units such as military police, engineers, and medical personnel may be exposed to hostile fire and other battlefield conditions.
The Army, Marine Corps, and CENTCOM continue to emphasize the importance of training and have identified specific tasks to be accomplished as part of predeployment training that they believe will better prepare forces to operate in the current operational environment. While forces clearly undergo significant training, clarifying CENTCOM's training requirements, including more clearly defining the specific tasks to be completed by different types of forces and the conditions and standards for the content of training, would enhance the services' ability to ensure that forces are consistently trained on required tasks. Furthermore, in order to make informed decisions on deploying forces and assigning missions once deployed, the services and CENTCOM need information on the extent of training completed by forces prior to deployment. Inconsistencies in existing approaches for documenting the completion of training and the lack of a formal process for granting waivers of training requirements and communicating waiver decisions hamper the services and CENTCOM in their ability to get a clear picture of which units or individuals have been fully trained for certain missions and whether any capability gaps might exist upon the forces' arrival in theater. Last, the services are making significant adjustments to training regimens based on lessons learned captured from actual operational experience. However, additional efforts to share information on these adjustments among and within training facilities would provide greater assurance that the training is consistent. To improve the consistency of training, we recommend that the Secretary of Defense direct the commander, U.S. Central Command, to: clarify which of the command's mandatory training requirements apply to all forces deploying to CENTCOM's area of responsibility and which requirements apply only to joint sourced forces, and clearly communicate this information to the services;
clearly outline the conditions under which CENTCOM's mandatory training requirements are to be accomplished and the standards to which the tasks should be trained; and direct the Secretary of the Army and the Commandant of the Marine Corps to include all of CENTCOM's minimum training requirements in their service training requirements. To improve commanders' visibility over the extent to which support forces are completing required combat skills training, we recommend that the Secretary of Defense direct the Secretary of the Army to fully implement the service's system of record for tracking training completion—the Digital Training Management System—by (1) developing a schedule for fully implementing the system, including the work to be performed and the resources to be used, and (2) including the actual start and completion dates of work activities performed so that the impact of deviations on future work can be proactively addressed. We further recommend that the Secretary of Defense direct the Commandant of the Marine Corps to establish and fully implement consistent approaches for documenting the completion or waiving of combat skills training requirements. We are also broadening our prior recommendation on waiver oversight and recommending that the Secretary of Defense direct the commander, U.S. Central Command, to establish a formal process for waiving training requirements for all deploying forces, not just nonstandard forces, and to communicate this process to the services. To maintain training consistency as training evolves in response to ongoing operations, we recommend that the Secretary of Defense direct the Secretary of the Army and the Commandant of the Marine Corps to develop a method for consistently sharing information concerning changes that are made to training programs in response to formal or informal lessons learned. In written comments on a draft of this report, DOD concurred or partially concurred with our recommendations.
Specifically, DOD concurred with our six recommendations related to the definition, completion, and waiver of training requirements, and sharing information on changes to training based on lessons learned. DOD stated that it has inserted draft language into its 2010 update to the "Guidance for the Development of the Force" and its draft DOD Instruction 1322.mm, entitled "Implementing DOD Training," to address our recommendations. DOD partially concurred with our recommendation that the Secretary of Defense direct the Secretary of the Army to fully implement the Digital Training Management System (DTMS)—the service's system of record for tracking training completion—by (1) developing a schedule for fully implementing the system, including the work to be performed and the resources to be used, and (2) including the actual start and completion dates of work activities performed so that the impact of deviations on future work can be proactively addressed. In its comments, DOD stated that the Army's training management system of record has been directed to be implemented and that, in order to fully leverage this capability, it will take time, training, and resources to extend the system to the entire organization. Instead of stipulating DTMS, DOD requested that GAO refer in our recommendation more generally to the Army's training management system of record. We recognize that it will take time for the Army to fully implement the system, but we also note that it has not set a specific schedule, with key elements such as work to be performed, resources needed, and milestones for the start and completion of activities, which we believe would add discipline to the process, help guide its efforts, and help the Army plan for any schedule deviations. We recognize that the Army continues to refine DTMS and that changes could occur.
However, at this point, Army guidance specifically characterizes DTMS as the Army's training management system of record; therefore, we do not agree that our recommendation should be adjusted. Furthermore, DOD stated that some findings in the draft report are partially accurate, but that a number of points of information and clarification related to DTMS provided by the Department of the Army do not appear in the findings. For example, DOD noted that ongoing efforts by the Army designed to improve DTMS will expand existing functionality and interfaces to enhance and broaden operational use of the application by Army units. It noted that the Army has a review process that, among other things, monitors progress of DTMS implementation and allows for the establishment and approval of priorities for developing interfaces with other existing legacy systems and manual processes. In addition, DOD stated that the report cites that DTMS is not fully operational because all interfaces are not completed to the satisfaction of a subordinate organization, which, in DOD's view, does not drive the level of program functionality or define the point in time when the system is fully operational. DOD noted that the inclusion of updated interfaces enables data input from other sources and that the basic functionality of DTMS is in place, operational, and available for use by units across the Army. DOD also noted that some Army units are still using spreadsheets and/or legacy systems to track individual training rather than DTMS, but that this is a function of compliance, not operational capability or the availability of system interfaces. It further stated that the Army is currently working to institute methods to improve compliance as outlined in AR 350-1, the Army's regulation that guides training. 
We recognize that the basic functionality of DTMS exists and that the Army is continuing to take steps to implement DTMS, improve the interfaces between DTMS and legacy systems and processes, and improve overall compliance with the requirement for units to report in DTMS. However, our work suggests that it is not only a lack of compliance that prevents full utilization of the system, but also a lack of awareness among operational units that DTMS even exists. For example, within the reserve component, some unit commanders we interviewed were unfamiliar with DTMS or unaware that they were required, by Army guidance, to use the system to report training completion. Further, while we recognize that interfaces exist, our work shows they are not yet mature enough to be compatible with existing tracking systems, thereby limiting the ability of the reserve component to fully use DTMS as intended. DOD further noted that the report implies that DTMS could or should be the source for CENTCOM and the Army to certify and/or validate unit training for deployments, but that, because the system is not being fully utilized, the completion of combat skills training could be in question. DOD explained that DTMS is a training management system, and it is the responsibility of Commanders and Army Service Component Commands to certify and validate units. As stated in our report, we recognize that commanders and the service component commands are responsible for the certification and validation of units for deployment. However, in order to be more fully informed about the training and readiness status of units before making decisions about deployments, those making these decisions need visibility over the completion of the combatant command and service pre-deployment training requirements. 
Currently, DTMS does not provide unit commanders or service component commands with this type of visibility, and therefore, these individuals and commands must rely on the tracking mechanisms we outlined in this report when certifying and validating units, and these tracking mechanisms are not always complete or consistent. The full text of DOD’s written comments is reprinted in appendix II. We are sending copies of this report to the Secretary of Defense. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To assess the extent to which Army and Marine Corps support forces are completing required combat skills training, we reviewed combatant commander and service individual and unit predeployment training requirements, including CENTCOM’s Theater Entry Requirements, the U.S. Army Forces Command’s Predeployment Training Guidance for Follow-on Forces Deploying In Support of Southwest Asia, and Marine Corps Order 3502.6, Marine Corps Force Generation Process. To determine if the services were fully addressing the CENTCOM minimum requirements, we compared the CENTCOM minimum training requirements to the Army and Marine Corps minimum requirements, making linkages where possible and obtaining service explanations when linkages did not appear to exist. We also reviewed policy documents on service training, such as the services’ common skills manuals and training programs of instruction. Additionally, we interviewed and analyzed information from officials responsible for developing and implementing training requirements at CENTCOM, Department of the Army Training Directorate, U.S. Army Forces Command, First Army, U.S. 
Army National Guard, U.S. Army Reserve Command, Marine Corps Training and Education Command, and Marine Forces Command. Lastly, we observed support force training at four of the Army and Marine Corps' largest training facilities—Fort Dix, Camp Lejeune, Camp Pendleton, and Twentynine Palms Marine Corps Base. At the training sites, we interviewed, and collected various training-related documents from, Army and Marine Corps active and reserve component units participating in predeployment training, as well as training command officials, on the implementation of service training guidance. We also obtained information from Army active component support forces stationed at Fort Hood. To assess the extent to which the services and Central Command have information to validate the completion of required combat skills training, we reviewed Army and Marine Corps policies on training, including Army Regulation 350-1, which outlines requirements for servicewide tracking through the Digital Training Management System, and Marine Corps Order 3502.6, Marine Corps Force Generation Process. We also coordinated with the U.S. Army Audit Agency regarding its ongoing efforts to review the Digital Training Management System. We interviewed service headquarters officials to discuss the processes the services use to track completion of training requirements. We reviewed Joint Publication 1 and other joint and service policies that document the role and responsibilities of unit commanders in tracking and reporting completion of training requirements. We interviewed Department of the Army Training Directorate, Marine Corps Training and Education Command, U.S. Army Forces Command, Marine Forces Command, First Army, and U.S. Army Reserve Command officials and reviewed documents from these commands, which are involved in the process of tracking the completion of combat skills training. 
Additionally, we interviewed an Army training command and the 1st, 2nd, and 4th Marine Corps Logistics Groups to discuss the processes used to track completion of training requirements at the unit level. We reviewed the means these organizations use to document the extent to which servicemembers were completing required training—paper records, automated spreadsheets, and databases. We further interviewed thirteen unit commanders of units preparing to deploy or returning from deployment to identify individual processes being used to track completion of training requirements. Lastly, we interviewed and obtained information from officials representing CENTCOM, Army and Marine Corps headquarters, and the Army and Marine Corps force providers and training commands to discuss the processes the services use to waive service and combatant command training requirements. We also reviewed past related GAO reports regarding the tracking and waiving of training requirements. To assess the extent to which the Army and Marine Corps have applied lessons learned from operational experiences to adjust combat skills training for support forces, we reviewed service policies on the collection and dissemination of lessons learned, specifically Army Regulation 11-33 for the Army Lessons Learned Program and Marine Corps Order 3504.1 for the Marine Corps Lessons Learned Program and the Marine Corps Center for Lessons Learned. These policies, which establish the services’ lessons learned centers, also require the collection of after action reports. Further, we reviewed joint guidance to determine whether requirements existed for the training facilities and services to collaborate and share lessons learned information. We interviewed and obtained information on the collection and implementation of lessons learned from officials representing the Center for Army Lessons Learned and the Marine Corps Center for Lessons Learned. 
We also interviewed lessons learned liaisons, training command officials, trainers, and officials responsible for developing unit training plans at five of the Army and Marine Corps' largest training sites—Fort Hood, Fort Dix, Camp Lejeune, Camp Pendleton, and Twentynine Palms. In our interviews with officials from the lessons learned centers and the training facilities, we discussed the use of various lessons learned to alter and improve predeployment training, the types of products the centers create and distribute, and the extent to which trainers shared the information among training sites. Based on these discussions with lessons learned officials, we identified and reviewed a nongeneralizable sample of the formal lessons learned reports and handbooks that applied specifically to training for support forces. We also reviewed past related GAO and DOD reports regarding lessons learned. To gain insight on support forces' perspectives on completion of combatant command and service combat skills training requirements, we conducted discussions with five Army Reserve support units and one Army National Guard support unit—military intelligence, movement control, combat camera, medical, and human resources—located at the combined pre- and post-mobilization training center Fort Dix, New Jersey, and three active component Marine Corps combat logistics battalions from the two Marine Corps Divisions located in the continental United States that were preparing to deploy to either Iraq or Afghanistan, as well as four of Fort Hood's active component Army support battalions that had recently returned from deployment. To conduct these discussion sessions, we traveled to one Army installation and three Marine Corps installations in the continental United States from August 2009 through December 2009 and conducted telephone discussions with representatives from one active duty Army installation in February 2010. 
In selecting units to speak with, we asked the service headquarters and force providers to identify all support units that would be in pre-mobilization or predeployment training during the time frame of our visit. The basic criterion used in selecting these units was that each was an Army or Marine Corps support unit participating in pre-mobilization or predeployment training and preparing to deploy to, or recently redeployed from, either Iraq or Afghanistan. Our selection was thus limited because the time frame was narrow. Once units were identified, we spoke with the unit command elements and senior enlisted servicemembers from nine support units that were available at the individual sites we visited. Overall, we spoke with Army and Marine Corps support units preparing to deploy to Iraq and Afghanistan, and within these units, some servicemembers who had previously deployed to Iraq or Afghanistan. We also spoke with four available active component Army support unit representatives who had recently returned from Iraq. Topics of discussion during the sessions included development and implementation of unit training plans, verification of training completion, and equipment and manning challenges that affect training. We also administered a short questionnaire to participants in the senior enlisted discussion sessions to obtain their feedback on the combat skills training their unit received. Comments provided during the discussion groups, as well as on the questionnaire, cannot be projected across the entire military community because the participants were not selected using a generalizable probability sampling methodology. To validate information we heard in the discussion groups, we interviewed the unit's higher headquarters, where available, as well as officials from the training commands and service headquarters and force providers. Table 1 outlines all of the organizations we interviewed during the course of our review. 
We conducted this performance audit from August 2009 through February 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Sharon L. Pickup, (202) 512-9619 or [email protected]. In addition to the contact named above, key contributors to this report were Michael Ferren (Assistant Director), Susan Ditto, Lonnie McAllister, Terry Richardson, Michael Silver, Christopher Watson, Natasha Wilder, Erik Wilkins-McKee, and Kristy Williams.
|
In conventional warfare, support forces such as military police, engineers, and medical personnel normally operate behind the front lines of a battlefield. But in Iraq and Afghanistan—both in U.S. Central Command's (CENTCOM) area of responsibility—there is no clear distinction between front lines and rear areas, and support forces are sometimes exposed to hostile fire without help from combat arms units. The House report to the National Defense Authorization Act for fiscal year 2010 directed GAO to report on combat skills training for support forces. GAO assessed the extent to which (1) Army and Marine Corps support forces are completing required combat skills training; (2) the services and CENTCOM have information to validate completion of required training; and (3) the services have used lessons learned to adjust combat skills training for support forces. To do so, GAO analyzed current training requirements, documentation of training completion, and lessons learned guidance; observed support force training; and interviewed headquarters officials, trainers, and trainees between August 2009 and February 2010. Army and Marine Corps support forces undergo significant combat skills training, but additional actions could help clarify CENTCOM's training requirements, ensure the services fully incorporate those requirements into their training requirements, and improve the consistency of training that is being conducted. CENTCOM has issued a list of training tasks to be completed, in addition to the services' training requirements, before deploying to its area of operations. However, there is confusion over which forces the CENTCOM requirements apply to, the conditions under which the tasks are to be trained, and the standards for successfully completing the training. As a result, interpretations of the requirements vary and some trainees receive detailed, hands-on training for a particular task while others simply observe a demonstration of the task. 
In addition, while the Army and Marine Corps are training their forces on most of CENTCOM's required tasks, servicemembers are not being trained on some required tasks prior to deploying. While units collect information on the completion of training tasks, additional actions would help higher level decision-makers assess the readiness of deploying units and servicemembers. Currently, both CENTCOM and the services lack complete information on the extent to which Army and Marine Corps support forces are completing required combat skills training. The Army has recently designated the Digital Training Management System as its system of record for tracking the completion of required training, but guidance concerning system implementation is unclear and the system lacks some needed capabilities. As a result, support forces are not fully utilizing the system, and are inconsistently tracking completion of individual and unit training using paper records, stand-alone spreadsheets, and other automated systems. The Marine Corps also uses inconsistent approaches to document training completion. Furthermore, as GAO reported in May 2008, CENTCOM does not have a clearly defined waiver process to provide visibility over the extent to which personnel are deploying to its area of operations without having completed its required training tasks. As a result, CENTCOM and the services have limited visibility over the extent to which servicemembers have or have not completed all required training. While trainers at Army and Marine Corps training sites have applied lessons learned information and made significant changes to the combat skills training they provide support forces, the changes to training have varied across sites. Army and Marine Corps doctrine requires the collection of after action reports, the primary formal vehicle for collecting lessons learned. Lessons are also shared informally, such as through communication between deployed forces and units training to replace them. 
While the services have these formal and informal means to facilitate the sharing of lessons learned information, trainers at the various training sites are not consistently sharing information about the changes they have made to their training programs. As a result, servicemembers are trained inconsistently and units that are deploying for similar missions sometimes receive different types and amounts of training.
|
The purpose of SORNA is to protect the public from sex offenders and offenders against children by providing a comprehensive set of sex offender registration and notification standards. These standards require convicted sex offenders, prior to their release from imprisonment or within 3 days of their sentencing if the sentence does not involve imprisonment, to register and keep the registration current in the jurisdictions in which they live, work, and attend school, and for initial registration purposes only, in the jurisdiction in which they were convicted, if such jurisdiction is different from the jurisdiction of residence. The registration agency also is to document the text of the provision of law defining the criminal offense for which the offender is registered; the criminal history of the offender, including dates of all arrests and convictions; and any other information SORNA or the Attorney General requires. In addition, jurisdictions are to maintain a jurisdiction-wide sex offender registry and adopt registration requirements that are at least as strict as those SORNA established. The length of time that convicted sex offenders must continue to update their registration is life, 25 years, or 15 years, depending on the seriousness of the crimes for which they were convicted and with possible reductions for maintaining a clean record. The frequency with which sex offenders must update or verify their information—either quarterly, semiannually, or annually—also depends on the seriousness of the crime. NCIC is an information system that provides law enforcement agencies with around-the-clock access to federal, state, and local crime data, including criminal record histories and wanted and missing person records. Jurisdictions that manage sex offender registration and notification activities are exclusively responsible for the inclusion, accuracy, and integrity of the information provided by their respective websites. 
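The tiered registration rules described above can be sketched as a simple lookup. This is an illustrative sketch only: the text lists the possible durations (life, 25 years, 15 years) and verification frequencies (quarterly, semiannually, annually) but does not pair them, so the pairing and the tier labels below follow SORNA's tier scheme (42 U.S.C. §§ 16915-16916) and should be treated as an assumption for illustration, not as this report's own formulation.

```python
# Illustrative sketch of the SORNA tier structure described above.
# The duration/frequency pairings are an assumption drawn from the
# statute's tier scheme; the report text lists them separately.

SORNA_TIERS = {
    # tier: (registration duration in years; None means life,
    #        in-person verifications per year)
    "tier I": (15, 1),     # annual verification
    "tier II": (25, 2),    # semiannual verification
    "tier III": (None, 4), # lifetime registration, quarterly verification
}

def registration_terms(tier):
    """Summarize how long, and how often, an offender in a tier must register."""
    duration, per_year = SORNA_TIERS[tier]
    span = "life" if duration is None else "%d years" % duration
    return "register for %s, verifying %d time(s) per year" % (span, per_year)

print(registration_terms("tier I"))
```

Note that the possible reductions for maintaining a clean record, mentioned in the text, are deliberately omitted from this sketch.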
SORNA and other federal laws identify certain points in time when sex offenders should be informed of their registration requirements and when relevant jurisdiction officials—that is, state, territorial, and tribal sex offender registry and law enforcement officials—should be informed that a sex offender has been released in their jurisdiction. For example, 42 U.S.C. § 16917 states that, shortly before the release of the sex offender from custody for the offense giving rise to the duty to register, an appropriate official must (1) inform the sex offender of that person’s duties under SORNA and explain those duties, (2) require the sex offender to read and sign a form stating the duty to register has been explained, and (3) ensure that the sex offender is registered. In addition, 18 U.S.C. § 4042(c) requires BOP and federal probation officers to (1) inform the sex offender of the requirements of SORNA and (2) notify the agency responsible for sex offender registration in the jurisdiction in which the sex offender will reside. BOP is required to take these actions if the offender receives a prison sentence; federal probation officers are required to take these actions when the offender is sentenced to probation. See app. II for additional information about these statutory notification requirements. On the basis of our analysis of a representative sample of 131 alien sex offenders under ICE supervision, and for whom ICE had a record of the alien’s complete date of birth, we estimate that as of September 2012, 72 percent of alien sex offenders were registered in the jurisdictions where they lived, 22 percent were not required to register, and 5 percent did not register but should have. Twenty-two percent of alien sex offenders in our sample (29 of 131) were not required to register in the states where they reside, according to the sex offender registration officials. 
Reasons these offenders were not required to register include the following: The specific offense did not require registration in those states or the offense had been committed before registration was required (20 offenders), the period during which the offender was required to register had ended (8 offenders), or the offender was deceased (1 offender). For example, 1 alien sex offender was convicted of a sex offense in 1997 and was required to register only while he was on probation, which ended in March 2003. Six other alien sex offenders who were convicted of various sex offenses were not required to register because their conviction or supervised release occurred prior to a statutory requirement to register. Another offender, in Texas, was convicted of operating a “sexually oriented business,” which does not require registration as a sex offender. However, 6 alien sex offenders in our sample (5 percent) should have been registered but were not, which means that, as of September 2012, an estimated 60 alien sex offenders under orders of supervision nationwide, for whom ICE had a record of their complete birthdays, were not registered but should have been. Law enforcement officials reported having no record of 3 of these 6 offenders, but the crimes these aliens committed should have triggered registration. The ICE-ERO field office did not inform 2 of the 6 alien sex offenders about their registration requirements, but did inform the remaining 4 offenders. However, officials at some field offices identified several reasons why they did not ensure that these offenders actually registered. First, the offender may have moved to another state and no longer resided in the area of responsibility for that particular field office. In this instance, it would be incumbent upon the field office that covers the jurisdiction where the offender currently lives to follow up with the offender regarding registration. 
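The nationwide figure above is an extrapolation from the 131-offender sample. The arithmetic of that kind of extrapolation can be sketched as follows; note that the total population of alien sex offenders under orders of supervision is not stated in the text, so the population value below is a hypothetical number chosen only to show how a roughly 5 percent sample rate scales to about 60 offenders.

```python
# Back-of-the-envelope sketch of a sample-based point estimate like the
# one described above. The sample figures come from the text;
# ASSUMED_POPULATION is hypothetical and not from the report.

SAMPLE_SIZE = 131
UNREGISTERED_IN_SAMPLE = 6
ASSUMED_POPULATION = 1300  # hypothetical, for illustration only

sample_rate = UNREGISTERED_IN_SAMPLE / SAMPLE_SIZE        # about 0.046
point_estimate = round(ASSUMED_POPULATION * sample_rate)

print("sample rate: %.1f%%" % (100 * sample_rate))
print("estimated unregistered offenders: %d" % point_estimate)
```

A real GAO estimate from a representative sample would also report a sampling error or confidence interval around the point estimate; this sketch shows only the point estimate itself.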
Second, the officials explained that when aliens report to their deportation officer, the officer is required, among other things, to check NCIC to determine whether the alien has been arrested for any other crimes, the alien is wanted by another law enforcement agency, or there is a warrant for the alien's arrest. The officer may also ask the alien whether or not the alien registered as a sex offender. However, according to ICE-ERO, depending on the individual circumstances, failure to register may not be a sufficient basis to return the alien to ICE custody. Other state and federal correctional and supervision agencies are limited in the information they can provide to and about alien sex offenders to help ensure that these offenders are registered, but ICE-ERO may be in a position to help address these notification gaps. We found that ICE-ERO informs alien sex offenders who are removed from the country about potential registration requirements, but ICE-ERO does not consistently inform alien sex offenders who are released under ICE-ERO supervision about these requirements. Further, relevant jurisdiction officials may not be notified about the whereabouts of an alien sex offender when an alien sex offender is removed from the country or when an alien sex offender is released under ICE-ERO supervision, which could have an impact on jurisdictions' ability to monitor these offenders if they return to the jurisdictions' communities. ICE-ERO stated that it is assessing options to best accomplish the goal of sex offender notification programs, including incorporating notification requirements for all alien sex offenders released under ICE-ERO supervision. However, ICE-ERO has not identified a deadline for when it will complete its assessment of the various options, nor does ICE-ERO plan to notify jurisdictions when an alien sex offender is removed from the country. 
Federal and state correctional and supervision agencies have processes in place to inform sex offenders of their registration requirements and notify jurisdictions when sex offenders are released from criminal custody. However, there are gaps in the information that these agencies can provide regarding alien sex offenders who will be taken into ICE-ERO custody, and ICE-ERO may be in a position to help fill these gaps. For example, we found that BOP has a process in place to inform inmates who are sex offenders about their registration requirements at least 5 days prior to releasing them. Under SORNA, these offenders are then required to register in the jurisdiction where they will reside within 3 business days of being released from prison. However, alien sex offenders with final orders of removal who are transferred to ICE-ERO custody upon their release from prison are not able to register immediately. Rather, if these offenders are not removed from the country, they must wait until they are released by ICE-ERO to register. In these instances, it could be as long as 90 days or more from the time when alien sex offenders are informed of their registration requirements until they are actually able to register. According to SMART Office officials—who are responsible for providing guidance to jurisdictions and federal agencies on how to implement SORNA requirements—given the time that would have passed, it would be beneficial to remind alien sex offenders of their potential registration requirements upon their release from ICE-ERO custody. 
Federal probation officers, as well as probation officers in the three states included in our review, are also required to inform sex offenders under their supervision about their registration requirements; this includes alien sex offenders who are simultaneously on probation while under ICE-ERO supervision. However, not all alien sex offenders are on probation at the same time that they are under ICE-ERO supervision, in which case these offenders may not be informed of their potential registration requirements upon release from ICE-ERO custody. 42 U.S.C. § 16917(a) and 18 U.S.C. § 4042(c)(3). We were not able to determine whether any of the 131 alien sex offenders in our sample were on probation while under ICE supervision because ICE does not maintain this information in its case management system. State correctional facilities are able to register the offenders while they are incarcerated and before they are transferred to ICE-ERO custody; however, federal correctional facilities are not able to do so. SORNA states that sex offenders shall initially register before completing a sentence of imprisonment with respect to the offense giving rise to the registration requirement and that an appropriate official shall, shortly before release of the offender from custody for such an offense, ensure that the offender is registered. State registry and local law enforcement officials we interviewed in Florida, Maryland, and Minnesota said that correctional facilities in their states register sex offenders, including alien sex offenders, prior to releasing them. Officials from two of these states also explained that for alien sex offenders who are released from the state correctional facility and immediately taken into ICE custody, the state correctional facility annotates this in the state registration system. Law enforcement officials stated that this enables them to follow up with ICE on the status of the alien sex offender, which helps them to ensure that the information on the status and location of these offenders is current. BOP, on the other hand, is not able to register sex offenders, including alien sex offenders who will be taken into ICE-ERO custody, prior to their release from prison because federal agencies do not have the authority to register sex offenders; rather, that authority lies exclusively with the states, territories, and tribes. 
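The "within 3 business days of being released" deadline mentioned above can be computed mechanically. The following is a minimal sketch, assuming a Monday-through-Friday business week and ignoring federal and state holidays, which actual registries would need to account for:

```python
# Sketch: computing a "within N business days of release" registration
# deadline. Assumes Monday-Friday business days and ignores holidays.
from datetime import date, timedelta

def registration_deadline(release_date, business_days=3):
    """Return the last date by which registration is due after release."""
    d = release_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4; skip weekends
            remaining -= 1
    return d

# A Friday release pushes the 3-business-day deadline past the weekend
# to the following Wednesday.
print(registration_deadline(date(2012, 9, 7)))  # 2012-09-07 was a Friday
```

The 90-day-or-more gap described in the text arises because the clock in this sketch starts only when ICE-ERO releases the offender, not when BOP informs the offender of the requirement.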
However, according to BOP, even if BOP had the authority to register sex offenders, BOP would not have to do so for alien sex offenders who will be taken into ICE-ERO custody upon their release from a BOP facility. BOP considers this to be a transfer, not a release, from BOP custody to ICE-ERO custody, in which case BOP would not be required to ensure that the offender is registered prior to the offender leaving the BOP facility, or to notify registry and law enforcement officials in the jurisdiction where the offender will reside that the offender has been released from custody. ICE-ERO stated that from its perspective, when an alien offender is taken into ICE custody following the offender's release from a BOP facility, this is not a transfer. Rather, the offender's criminal sentence is considered to be complete when BOP releases the offender, and ICE is exercising its independent authority to take the alien offender into custody thereafter. In addition, BOP does not notify jurisdiction officials when alien sex offenders are taken into ICE-ERO custody upon their release from a BOP facility, in part because BOP does not know where ICE-ERO will detain the offender. BOP officials stated that ICE-ERO would be in the best position to notify jurisdiction officials that the agency has a sex offender in its custody because ICE-ERO would know where the offender is being detained and ultimately where the offender will be released. Figure 1 illustrates the gaps in notifications provided to and about alien sex offenders who are removed from the country or released under ICE-ERO supervision. ICE-ERO has a mechanism in place to inform alien sex offenders who are being removed from the country about potential registration requirements. In response to concerns raised by the U.S. Marshals Service that alien sex offenders who were being removed from the country were not aware of registration requirements, ICE-ERO, in consultation with DOJ, established a mechanism to inform all removed offenders about these requirements. 
Persons who are being removed from the United States are required to sign one of two forms—Warning to Alien Ordered Removed or Deported (Form I-294) or Notice to Alien Ordered Removed/Departure Verification (Form I-296)—which are used to provide information to aliens such as the length of time they are prohibited from reentering the United States, among other things. In early 2012, ICE-ERO added a notice to these two forms that explained that alien sex offenders must register if they ever return to the United States, and failure to do so could result in prosecution. Officials from the U.S. Marshals Service—which is responsible for investigating cases in which sex offenders fail to register—stated that having this notification mechanism in place will improve their ability to provide evidence to support the prosecution of offenders who fail to register because it is important for the prosecutor to demonstrate that the offender was aware of the registration requirements. Information about additional steps that ICE has taken, or could take, to determine what, if any, responsibility ICE-ERO has with regard to informing alien sex offenders of their registration requirements was omitted because ICE considered it to be For Official Use Only (FOUO). Some ICE-ERO field offices provide notifications to alien sex offenders under order of supervision to inform these offenders about potential registration requirements. Before releasing an alien under order of supervision, ICE-ERO requires aliens to review and sign Department of Homeland Security (DHS) Form I-220B, Order of Supervision, which explains the alien's conditions of release. The form also allows for additional conditions to be identified in an addendum. 
The addendum includes the following condition for aliens convicted of a sex offense: "That you register as a sex offender, if applicable, within 7 days of being released, with the appropriate agency(s) and provide ICE with written proof of such within 10 days." Officials from two of the five ICE-ERO field offices included in our review told us they have an office policy in place that requires deportation officers to inform alien sex offenders under supervision about potential registration requirements. Both offices use the addendum to the Form I-220B to inform these offenders. However, as part of a broader effort that began in 2009 to review and revise ICE's policy on reporting requirements under orders of release on recognizance and orders of supervision, ICE-ERO officials indicated that they must take additional steps before finalizing pending revisions. For example, revisions to the Form I-220B must be put through the agency's formal clearance process before a revised version can be published. Also, given the uncertainty surrounding its legal role in informing alien sex offenders in its custody about potential registration requirements, the agency needs to assess whether there is a legal obligation for ICE-ERO to notify alien sex offenders of their requirements to register. According to ICE officials, if ICE-ERO determines that there is no such obligation, it will then decide whether or not to retain, as a matter of policy, the language in the Form I-220B addendum regarding sex offender registration. SORNA states that, shortly before the release of the sex offender from custody for the offense giving rise to the duty to register, an appropriate official must inform the sex offender of that person's duties under SORNA, which would include registration, and explain those duties. 
According to SMART officials, other law enforcement agencies, including state correctional and probation agencies, have information and notification processes, even though the agencies are sometimes not explicitly required to do so by law. SMART officials said that these agencies have taken these actions in part because of an overall responsibility to assist other law enforcement when possible. ICE-ERO's efforts are positive steps that should help address the uncertainty as to whether SORNA requirements to notify sex offenders of their duty to register apply to ICE-ERO. However, ICE-ERO began its review 4 years ago and has not identified a deadline for when it will finalize its decision on use of the Form I-220B addendum for providing sex offender registration notifications. Standard practices for project management state that managing a project, such as ICE-ERO's review, involves developing a timeline with milestone dates (Project Management Institute's The Standard for Program Management©). A deadline would help ensure timely completion of ICE-ERO's review of the Form I-220B addendum, which is important because until the review is complete, there will continue to be uncertainty as to whether and how ICE-ERO should be notifying alien sex offenders who are released under order of supervision of their duty to register. ICE-ERO also does not notify sex offender registry and law enforcement officials when an alien sex offender is removed from the country or released under supervision, in part because ICE-ERO officials stated that the extent to which ICE-ERO has the authority or responsibility to do so is questionable. These officials also stated that contacting local jurisdictions would require significant field office resources and modifications to deportation officer duties. 
Sex offender registry and local law enforcement officials that we contacted in the three states in our review said that the officials are not always aware of when ICE-ERO removes alien sex offenders or releases them under ICE-ERO supervision, which makes it difficult for the officials to ensure that these offenders are registered or that their registration information is current. Sex offender registry and law enforcement officials from two states said that, for alien sex offenders who they know are in ICE custody, the officials typically contact ICE on their own initiative to ask about the status of these offenders. Sex offender registry officials in another state said that even though they have an ICE agent colocated with them, the agent does not consistently inform them when ICE deports or releases an alien sex offender, in part because the agent has other responsibilities and notifying the state registry of the status of alien sex offenders in ICE custody is a collateral duty. These registry officials said that they typically become aware of an alien sex offender who has been released from ICE custody if (1) the offender registers with local law enforcement officials on the offender's own initiative, (2) the offender's probation officer notifies them, or (3) they check on the status of the offender—as they routinely do—and determine that the offender has absconded. These three states are Minnesota, Florida, and Maryland. We chose Minnesota and Florida because they are among the states where the largest number of alien sex offenders in our sample who were not in the public website reside. We chose Maryland because local law enforcement officials had raised concerns about not being notified of alien sex offenders who are removed from the country or released under order of ICE-ERO supervision. 
Officials from one local law enforcement office stated that they know of instances—although they were not able to provide the specific number—when they expended resources searching for an alien sex offender who they thought had absconded only to find that ICE-ERO had the offender in custody, removed the offender, or released the offender. Officials from another local law enforcement agency said that ICE should notify the state registry of the alien sex offenders in ICE custody so that state and local law enforcement officials are aware of the location of the alien sex offenders and do not expend resources looking for them. State registry and local law enforcement officials in our review also provided examples of how their lack of awareness about removed alien sex offenders, in particular, could pose a risk to public safety. For example, registry officials in one state said that there have been instances when they were not aware that an alien sex offender had been removed from the country until the sex offender subsequently returned to the United States, committed another offense, and ended up back in the state criminal justice system. Local law enforcement officials from another state described an instance in which they were not aware that an alien sex offender had been removed from the country until the offender returned to the United States and was subsequently arrested for committing another sex offense against the same child that he had previously victimized. According to the data that ICE-ERO provided to us, of the 4,359 alien sex offenders who were removed from the country between January and August 2012, 220 of them (5 percent) had previously been removed but subsequently returned to the United States and were arrested for another offense. As we reported in February 2013, the FBI is in the process of developing a mechanism by which the U.S. 
Marshals Service and relevant jurisdiction officials will be notified when a sex offender who has been registered in the United States legally reenters the country (see also 18 U.S.C. § 4042(c)). In the meantime, however, one way to ensure that they are aware of alien sex offenders whom ICE-ERO has in custody, removed from the country, or released under supervision is for ICE-ERO to tell the officials. ICE-ERO plans to review options to help address notification gaps pertaining to alien sex offenders who are released under order of supervision, but has not established a deadline for when it will complete this review. ICE-ERO, however, does not plan to consider options for notifying jurisdictions when an alien sex offender is removed from the country, which, as discussed previously, could have an impact on a jurisdiction's ability to monitor these individuals if they return to the United States. As a result of our review, in May 2013, officials from ICE-ERO, the U.S. Marshals Service, and the SMART Office met to discuss notification gaps with regard to registration of alien sex offenders and options for addressing these gaps. However, the agencies were not able to agree on a solution at that time, in part because of ICE-ERO officials' concerns about their organization's lack of authority and responsibility regarding sex offender registration. Officials we interviewed from the U.S. Marshals Service and the SMART Office stated that ICE-ERO was in the best position to inform alien sex offenders about potential registration requirements, and to notify relevant jurisdiction officials—either state registry or law enforcement officials—when an alien sex offender is removed from the country or released, because ICE-ERO is the last federal agency that has had contact with these offenders and releases these offenders from custody into the community. However, in addition to uncertainty regarding ICE-ERO's authority and responsibility, ICE-ERO officials identified other concerns about providing these notifications. 
Specifically, officials we interviewed in the five ICE-ERO field offices included in our review said that because they are responsible for supervising such a large number of aliens—anywhere from 750 to 3,000 at any point in time—they would not have the time or resources to notify jurisdiction officials when an alien sex offender is released or removed from the country. However, as noted previously, alien sex offenders make up a relatively small fraction (5 percent) of aliens under ICE supervision, in which case providing these notifications may not pose a significant resource burden on ICE-ERO. Further, ICE-ERO officials as well as officials from one of the five ICE-ERO field offices in our review said that they thought that, to notify jurisdiction officials, they would first have to confirm that the alien does in fact have to register in that state, which could be very time-consuming. However, under SORNA, the state is responsible for determining whether a convicted sex offender is required to register, in which case ICE-ERO would not have to do so prior to providing notice of the offender's release. ICE-ERO officials also stated that SORNA requires individuals convicted of certain crimes against children that are not sex offenses—such as kidnapping—to register as sex offenders. However, these officials explained that it would be difficult for deportation officers to determine whether aliens under their supervision were convicted of a crime that is not a sex offense but may require registration. We acknowledge that this could be a challenge and an issue that the SMART Office may be able to help ICE-ERO resolve. Moreover, officials from all five ICE-ERO field offices in our review said that even if they informed alien sex offenders of their registration requirements, the officials would not be able to take any action to enforce these requirements even when registering as a sex offender is a condition of release for aliens under ICE-ERO supervision. 
However, ICE-ERO could notify all offenders who are released on supervision, as it does for offenders who are removed from the country, and then state and local law enforcement would be responsible for enforcing registration requirements. Also, if ICE-ERO notifies jurisdiction officials of the offender's release, these officials would be able to identify those offenders who did not register after their release; ICE-ERO would not have to assume this responsibility. Finally, officials from two ICE-ERO field offices in our review said that they would not know whom, specifically, to contact at the state registry to notify it that ICE-ERO is deporting or releasing an alien sex offender. However, the SMART Office maintains points of contact for each state, territorial, and tribal sex offender registration agency, which the SMART Office could provide to ICE-ERO. In July 2013, an ICE-ERO official stated that ICE-ERO will begin reviewing options to accomplish the goal of sex offender notification, to include efforts to inform alien sex offenders of their potential registration requirements and to notify jurisdictions of alien sex offenders who are released under order of supervision. However, ICE-ERO did not provide a deadline for when it plans to complete its review of the various options. Standard practices for project management state that managing a project, such as ICE-ERO's review, involves developing a timeline with milestone dates. Further, Standards for Internal Control in the Federal Government call for agencies to ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. 
By developing a deadline for when it will complete its assessment of options for providing notifications to and about alien sex offenders, ICE-ERO will help to ensure that any public safety concerns that may arise as a result of the current notification gaps—such as law enforcement officials being unaware of sex offenders living in their jurisdictions—are mitigated in a timely manner. Finally, communicating the results of its assessment with federal stakeholders will help provide clarity going forward with regard to who has responsibility for notifying alien sex offenders of their potential registration requirements. Although ICE-ERO plans to address notification gaps regarding alien sex offenders under order of supervision, it does not plan to consider options for notifying relevant jurisdictions when an alien sex offender is removed from the country. ICE-ERO stated that it already updates NCIC—which is routinely accessed by law enforcement officials—when an alien sex offender is removed, including the date of the removal. However, if law enforcement officials were last told that the alien sex offender was in ICE- ERO custody, they may not have a reason to search NCIC to determine the status of the offenders. Given the threat that alien sex offenders who are removed from and return to the United States may pose to public safety, developing an appropriate mechanism for informing relevant jurisdictions when an alien sex offender has been removed from the country will assist jurisdiction officials in ensuring that all alien sex offenders have been registered. This will facilitate the monitoring of these sex offenders in the event that they return to the United States. Such notification would also prevent jurisdictions from spending limited resources trying to locate these offenders because they were not aware that the offenders had been removed from the country. Other federal agencies, including the SMART Office, U.S. 
Marshals Service, and BOP, may have resources and information that are useful for ICE-ERO in developing a mechanism for notifying relevant jurisdictions when an alien sex offender is removed from the country. For example, SMART maintains contact information for all state, territorial, and tribal registry agencies. Also, internal control standards call for agencies to ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency's achieving its goals. Therefore, consulting with these agencies could be beneficial for ICE-ERO in developing this notification mechanism. Without mechanisms in place to consistently inform alien sex offenders who are released under ICE-ERO supervision about their registration requirements, and consistently notify jurisdictions when an alien sex offender has been removed from the country or released under supervision, the risk that alien sex offenders will reside in U.S. communities without being registered is increased. ICE-ERO is in the process of a review to determine whether continued use of the Form I-220B addendum as a means to notify alien sex offenders of their potential registration requirements is warranted; however, ICE-ERO has not set a deadline for timely completion of this review. A deadline will help enhance accountability for completion of this effort, which is important because until this review is completed, there will continue to be uncertainty as to whether and how ICE-ERO should be notifying alien sex offenders who are released under order of supervision of their duty to register. In addition, a time frame for when ICE-ERO will complete its assessment of options for notifying alien sex offenders of their potential registration requirements will help provide accountability for completing this important effort. 
Also, communicating the results of ICE-ERO’s assessment with federal stakeholders will help provide clarity going forward with regard to who has responsibility for notifying alien sex offenders of their potential registration requirements. Moreover, given the threat that alien sex offenders who are removed from and return to the United States may pose to public safety, developing an appropriate mechanism for informing relevant jurisdictions when an alien sex offender has been removed will assist jurisdiction officials in ensuring that all alien sex offenders are registered. This will facilitate the monitoring of these sex offenders in the event that they return to the United States. Such notification would also prevent jurisdictions from spending limited resources trying to locate these offenders because they were not aware that the offenders had been removed from the country. We recommend that the Director of ICE take the following two actions: direct ICE-ERO to establish a deadline to ensure timely completion of its review of the Form I-220B addendum and direct ICE-ERO to establish a deadline for when it will complete its assessment of options for informing alien sex offenders who are released under order of supervision about their potential responsibility to register and communicate the results of its assessment with federal stakeholders. We recommend that the Secretary of Homeland Security direct ICE-ERO, in consultation with the SMART Office, the U.S. Marshals Service, and BOP, to develop an appropriate mechanism for notifying relevant jurisdictions when an alien sex offender has been removed from the country. We provided a draft of this report for review and comment to DHS and DOJ. We received written comments from DHS, which are reproduced in full in appendix III. DHS agreed with our recommendations in its comments. We also received technical comments from DHS and DOJ, which are incorporated throughout our report as appropriate. 
DHS agreed with our recommendations that ICE-ERO establish deadlines for when it will complete its review of the Form I-220B addendum and its assessment of options for informing alien sex offenders who are released under order of supervision about their potential registration responsibilities. DHS noted that ICE-ERO had taken steps to combine Form I-220A (Order of Release on Recognizance) and Form I-220B (Order of Supervision) into one comprehensive Form I-220 (Order of Release on Recognizance or Order of Supervision). However, ICE-ERO intentionally delayed publication of this new form, and the associated directive, to take into account any recommendations resulting from our review. DHS also stated that ICE-ERO is currently working with the SMART Office to explore ways in which the goals of SORNA may be better addressed through improved coordination between the two agencies. ICE-ERO plans to complete its review and assessment by October 31, 2013. Establishing such a deadline for the completion of these efforts will help ensure that ICE-ERO can be held accountable for identifying and effectuating any actions they deem appropriate to help ensure that alien sex offenders are indeed registered. As part of our process for following up on agencies’ efforts to implement our recommendations, we will continue to monitor ICE-ERO’s progress in completing its assessment and review by the established deadline. DHS also concurred with our recommendation that ICE-ERO, in consultation with the SMART Office, the U.S. Marshals Service, and BOP, develop an appropriate mechanism for notifying relevant jurisdictions when an alien sex offender has been removed from the country. DHS noted, however, that as ICE-ERO considers options, it will also determine whether such notification can be accomplished without adversely affecting ICE’s mission, given the potential impact on resources. ICE-ERO also plans to complete its assessment of these options by October 31, 2013. 
Notifying jurisdictions when an alien sex offender is removed from the country will enable them to register these offenders, in which case law enforcement officials will be able to monitor these offenders if they ever return to the United States. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Attorney General of the United States, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV.

This report addresses the following objectives: (1) To what extent are alien sex offenders under the Enforcement and Removal Operations division of U.S. Immigration and Customs Enforcement (ICE-ERO) order of supervision registered as sex offenders? (2) To what extent are alien sex offenders who are removed from the country or released under an ICE-ERO order of supervision informed of registration requirements, and state sex offender registry and law enforcement officials notified about these offenders? To address our objectives, we requested that ICE-ERO provide the names and dates of birth for all alien sex offenders who were under orders of supervision as of September 2012. We chose this date because we requested this information as part of a separate review and this date provided us with current information at the time. ICE-ERO provided us with the names for 2,837 alien sex offenders under orders of supervision as of September 2012. However, ICE-ERO was able to provide us only with a complete date of birth—which is important for verifying the identity of these individuals—for 1,369 of these alien offenders. 
We drew a random probability sample of 137 of the 1,369 alien sex offenders with complete dates of birth. We subsequently found that six individuals in our sample should not have been included in the population of alien sex offenders under supervision, resulting in a final sample size of 131 and an estimated total population of 1,309. We determined whether each alien sex offender, as of March 2013, was registered in the state where he or she resides using the steps described below. Percentage estimates derived from this sample have margins of error at the 95 percent confidence level of plus or minus 8.08 percentage points. We assessed the reliability of the data ICE provided by questioning knowledgeable agency officials and reviewing the data for errors and anomalies. We determined that the data were sufficiently reliable for our purposes. To determine whether the 131 alien sex offenders in our sample were registered in the states where they reside, we first searched the National Sex Offender Public Website (public website) to determine which of these offenders were included. All persons included on the public website are also registered with their respective states. On the basis of the search results, we divided the alien sex offenders into three categories: (1) definitely included in the public website, meaning there was an exact match on the name and date of birth for the alien sex offender in the public website; (2) possibly included in the public website, meaning there was a partial or similar name, date of birth, or age in the public website (e.g., J. Smith as opposed to John Smith), but not an exact match; and (3) definitely not included in the public website, meaning the public website did not include the offender's exact name or date of birth or even a partial or similar name, date of birth, or age. 
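The reported margin of error can be reproduced, to within rounding, from the sample design above. The following is a minimal sketch in Python, assuming the standard formula for a proportion with a worst-case proportion of 0.5 and a finite-population correction; GAO's exact estimator is not specified in the report and may differ slightly:

```python
import math

def moe_percentage_points(n, N, p=0.5, z=1.96):
    """Margin of error, in percentage points, for an estimated proportion
    from a simple random sample of n drawn from a population of N,
    with a finite-population correction. p=0.5 is the worst case; z=1.96
    corresponds to a 95 percent confidence level."""
    se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
    return 100 * z * se

# Final sample of 131 alien sex offenders from an estimated population of 1,309
moe = moe_percentage_points(131, 1309)
print(round(moe, 2))  # about 8.1, close to the reported plus or minus 8.08
```

The small difference from 8.08 is consistent with GAO using a slightly different variance estimator or the observed rather than worst-case proportion.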
We determined that of the 131 alien sex offenders in our sample, 51 (39 percent) were definitely included in the public website, 16 (12 percent) were possibly included, and 64 (49 percent) were definitely not included. We asked ICE-ERO to provide us with the current addresses for the 80 alien sex offenders who were possibly included or definitely not included in the public website; these offenders were located in 27 states. We contacted sex offender registration officials in each of the 27 states to ask whether the officials were aware of these offenders; whether the offenders were registered with the state; and, for any offenders who were not registered, an explanation for why they were not. To address our second objective, we reviewed the Sex Offender Registration and Notification Act of 2006 (SORNA), other applicable laws, and guidelines developed by the Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART) Office to obtain information on federal sex offender registration requirements. We also met with officials from ICE-ERO Executive Information and Reporting Unit—which is responsible for administering and coordinating ICE-ERO’s policy development, review, clearance, and information disclosure functions—to obtain information on how they determine whether an alien in ICE-ERO custody is a sex offender and any actions they take to help ensure that alien sex offenders who are released under ICE-ERO supervision are registered. In addition, we conducted phone interviews with ICE-ERO supervisory detention officials; U.S. Marshals Service officials; state sex offender registry officials; and local law enforcement officials in Minnesota, Florida, and Maryland to inquire about actions they take to help ensure alien sex offenders are registered and how they become aware of alien sex offenders who live in their jurisdiction. 
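The category shares above follow directly from the sample counts; a quick arithmetic check in Python (rounding to whole percentages, as in the text):

```python
# Sample counts from the public-website search (131 offenders total)
counts = {
    "definitely included": 51,
    "possibly included": 16,
    "definitely not included": 64,
}
n = sum(counts.values())
shares = {k: round(100 * v / n) for k, v in counts.items()}
# Offenders needing follow-up with state registries: possibly + definitely not included
follow_up = counts["possibly included"] + counts["definitely not included"]
print(n, shares, follow_up)  # 131, shares of 39/12/49 percent, 80 follow-up cases
```

This confirms the 80 offenders in 27 states for whom state registration officials were contacted.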
We selected Minnesota and Florida because these are the states where the largest number of alien sex offenders in our sample who were not included in the public website reside. We selected Maryland because local law enforcement officials in this state had raised concerns about registration of alien sex offenders during our prior work, which was completed in February 2013. In addition, we conducted phone interviews with ICE-ERO field office directors and deputy directors in the field offices that either released or currently supervise the alien sex offenders in our sample who were not registered, but potentially should have been. We obtained information from the Administrative Office of the United States Courts and the Federal Bureau of Prisons (BOP) regarding their efforts to inform alien sex offenders about their registration responsibilities and notifying relevant sex offender registry and law enforcement officials about these offenders. In addition, we interviewed officials from the U.S. Marshals Service who are responsible for locating sex offenders who fail to register. We also met with the director and policy advisors for the SMART Office within the Department of Justice (DOJ) to obtain their perspectives on acceptable reasons for why alien sex offenders may not be registered in the state where they reside or included in the public website. The SMART Office is responsible for assessing states’, territories’, and tribes’ progress in implementing SORNA. We compared the sex offender notification requirements in SORNA and other federal statutes with the notifications that state and federal agencies provide to alien sex offenders to determine if there were any gaps. We then obtained perspectives from the federal, state, and local officials we interviewed on how best to address these gaps. 
We also compared efforts that ICE-ERO has under way regarding notifications to and about alien sex offenders with internal control standards pertaining to communication with stakeholders and program management standards that involve establishing milestone dates and deadlines. We conducted this performance audit from January 2013 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Sex Offender Notification Requirements in the Sex Offender Registration and Notification Act (SORNA) and Other Federal Statutes

42 U.S.C. § 16917—Duty to notify sex offenders of registration requirements and to register

(a) In General. An appropriate official shall, shortly before release of the sex offender from custody, or, if the sex offender is not in custody, immediately after the sentencing of the sex offender, for the offense giving rise to the duty to register—
(1) inform the sex offender of the duties of a sex offender under this title and explain those duties;
(2) require the sex offender to read and sign a form stating that the duty to register has been explained and that the offender understands the registration requirement; and
(3) ensure that the sex offender is registered.
(b) Notification of Sex Offenders Who Cannot Comply with Subsection (a). The Attorney General shall prescribe rules for the notification of sex offenders who cannot be registered in accordance with subsection (a). 
18 U.S.C. § 4042(c)(1)-(3)—Duties of Bureau of Prisons

(c) Notice of Sex Offender Release—
(1) In the case of a person described in paragraph (3), or any other person in a category specified by the Attorney General, who is released from prison or sentenced to probation, notice shall be provided to—
(A) the chief law enforcement officer of each State, tribal, and local jurisdiction in which the person will reside; and
(B) a State, tribal, or local agency responsible for the receipt or maintenance of sex offender registration information in the State, tribal, or local jurisdiction in which the person will reside.
(2) Notice provided under paragraph (1) shall include the information described in subsection (b)(2), the place where the person will reside, and the information that the person shall register as required by the Sex Offender Registration and Notification Act. For a person who is released from the custody of the Bureau of Prisons whose expected place of residence following release is known to the Bureau of Prisons, notice shall be provided at least 5 days prior to release by the Director of the Bureau of Prisons. For a person who is sentenced to probation, notice shall be provided promptly by the probation officer responsible for the supervision of the person, or in a manner specified by the Director of the Administrative Office of the United States Courts. Notice concerning a subsequent change of residence by a person described in paragraph (3) during any period of probation, supervised release, or parole shall also be provided to the agencies and officers specified in paragraph (1) by the probation officer responsible for the supervision of the person, or in a manner specified by the Director of the Administrative Office of the United States Courts. 
(3) The Director of the Bureau of Prisons shall inform a person who is released from prison and required to register under the Sex Offender Registration and Notification Act of the requirements of that Act as they apply to that person and the same information shall be provided to a person sentenced to probation by the probation officer responsible for supervision of that person.

In addition to the contact named above, Kristy Love, Assistant Director, and Edith Sohna, analyst-in-charge, managed this engagement. Kevin Craw and Frances Cook made significant contributions to the report. Michele Fejfar, Justin Fisher, Mary Catherine Hult, Michael Lenington, Linda Miller, Lara Miklozek, and Julie Spetz also provided valuable assistance.
ICE-ERO uses orders of supervision to release from custody criminal aliens--including sex offenders--who have been ordered removed from the United States but cannot be removed for various reasons or detained indefinitely under U.S. Supreme Court precedent. In July 2006, SORNA was enacted, establishing minimum standards for sex offender registration and notification. Congressional requesters asked GAO to assess registration of alien sex offenders. This report addresses the extent to which alien sex offenders (1) under ICE-ERO orders of supervision are registered and (2) who are removed or released under ICE-ERO orders of supervision are informed of registration requirements and relevant jurisdiction officials are notified about these offenders. GAO analyzed a representative sample of 131 of 1,309 alien sex offenders who were under orders of supervision as of September 2012. GAO also interviewed officials from ICE-ERO, the SMART Office, and other relevant federal, state registry, and local law enforcement agencies. On the basis of GAO's analysis of a representative sample of 131 alien sex offenders under U.S. Immigration and Customs Enforcement (ICE) supervision, GAO estimates that, as of September 2012, 72 percent of alien sex offenders were registered, 22 percent were not required to register, and 5 percent did not register but should have. According to officials, offenders were not required to register for various reasons, such as the offense not requiring registration in some states. Of the 6 offenders in GAO's sample who should have registered, officials from ICE's Enforcement and Removal Operations (ICE-ERO) field offices informed 4 of their registration requirements. However, officials at some of these field offices identified several reasons why they did not ensure that these offenders actually registered. For example, an offender may have moved and no longer resided in the area of responsibility for that particular field office.
ICE had not informed the remaining 2 offenders of their registration requirements. Alien sex offenders are not consistently informed of potential registration requirements, and relevant jurisdiction officials--that is, state, territorial, and tribal sex offender registry and law enforcement officials--are not consistently notified when an offender is removed from the country or released. The Sex Offender Registration and Notification Act of 2006 (SORNA) and other federal laws identify when sex offenders and relevant jurisdiction officials should be notified. However, the agencies that have these notification responsibilities are limited in their ability to provide information to and about alien sex offenders, in part because they do not know when ICE-ERO will release or remove these offenders. ICE-ERO has a procedure in place to inform alien sex offenders who are being removed about potential registration requirements, but not alien sex offenders who are being released into the community under supervision, primarily because ICE-ERO is uncertain whether it has a responsibility to do so. ICE-ERO also does not consistently notify relevant jurisdiction officials when an alien sex offender is removed or released under supervision, for similar reasons. However, officials from the Department of Justice's Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART) Office said that state correctional facilities, in the interest of public safety, have notification processes in place, even though sometimes not required to do so. ICE-ERO is reviewing options for informing alien sex offenders under supervision about their potential registration requirements and notifying jurisdictions when alien sex offenders are released under supervision, but has not established a deadline for completing its review, which is inconsistent with project management standards. 
Without a deadline, it will be difficult to hold ICE-ERO accountable for providing these notifications. Further, ICE-ERO does not plan to notify relevant jurisdictions when an alien sex offender is removed. Providing such notification could help jurisdictions ensure public safety and avoid unnecessarily spending resources trying to locate the offender. This is a public version of a sensitive security report GAO issued in August 2013, which also included information about steps ICE has taken to determine its responsibility for informing alien sex offenders of their notification requirements. GAO recommends, among other things, that ICE-ERO (1) set a deadline for its review of options for providing notifications to and about alien sex offenders under supervision and (2) in consultation with SMART and others, consider options for notifying jurisdictions about removed offenders. ICE agreed with the recommendations.
The Navy uses a multilevel approach to ship repair and maintenance that, depending on the type and complexity of work, places responsibility at three different levels: organizational, intermediate, and depot. Depot-level repairs are the most complex, requiring the capabilities and technical skills of naval or private shipyards. During fiscal year 1997, the Navy employed about 22,000 personnel at its four naval shipyards. The shipyards are the Portsmouth Naval Shipyard, Portsmouth, New Hampshire; the Norfolk Naval Shipyard, Portsmouth, Virginia; the Puget Sound Naval Shipyard, Bremerton, Washington; and the Pearl Harbor Naval Shipyard, Pearl Harbor, Hawaii. The shipyards are not directly funded, but are paid by their customers—primarily the Pacific and Atlantic Fleets. The fleets are provided depot maintenance funds from the Navy’s operations and maintenance appropriation. Funding for the Navy’s depot-level ship maintenance and repair program in fiscal year 1998 is $2.1 billion. The Navy schedules its planned ship repair work for a 7-year period and updates this schedule annually. In developing the schedule, the Navy considers various factors, including (1) its policy to perform work of 6 months or less in the ship’s homeport, (2) statutory requirements regarding the public/private sector workload distribution, (3) the capabilities and capacity of each shipyard, and (4) expected funding and personnel levels. The published schedule shows the depot-level ship repair work assigned to each naval shipyard and the workload to be performed by the private sector. The Navy also develops an historically derived estimate of the direct labor staff-days each naval shipyard will expend on unscheduled (emergent) ship repair work and adds it to the schedule to arrive at the planned shipyard workload. The final schedule includes both scheduled and unscheduled work that requires temporary duty (TDY) assignments. 
For fiscal years 1995-97, about 70 percent of the total work assigned to naval shipyards was for scheduled repair work. The Navy uses TDY assignments primarily to perform work at homeports not located near a naval shipyard. The Navy considers shipyard personnel temporarily excess when they are not needed for current workloads but are required for planned future requirements and the time and cost of reducing and reacquiring needed personnel justifies retention. In some cases, personnel are considered temporarily excess for more than a year. Since private sector repair capabilities may be available at these locations, questions have been raised regarding the cost-effectiveness of sending naval shipyard personnel TDY to perform the work. During fiscal years 1995-97, naval shipyards spent an estimated 580,000 direct labor staff-days, valued at an estimated $134.1 million, on TDY assignments. TDY travel, per diem, and other related travel costs amounted to an additional $59 million. About 5.5 percent of the shipyards’ total direct labor staff-days were spent on TDY assignments. The Navy cites two reasons for using TDY assignments. First, such assignments are required to perform work at locations where no local public or private sector shipyards have the required ship repair capabilities. Second, the Navy believes that using temporarily excess shipyard workers on temporary duty assignments is cost-effective, even when there is a local private sector capability. The Navy performs work at locations without a naval shipyard to comply with its homeporting policy and when it is not practical to perform the work at public or private shipyards. The amount of TDY assignments depends on several interrelated factors, including a shipyard’s proximity to homeported ships, the number and types of ships assigned to each homeport, the type of repair or maintenance needed, the ability of private shipyards at or near the homeport to perform required repairs, and the number of temporarily excess naval shipyard personnel.
The naval shipyards’ financial and management information systems do not identify the purpose for specific TDY assignments, and the Navy could not provide the data needed to identify the exact number of TDY assignments for each reason. The Navy performs ship repair TDY work at locations where it believes the necessary capability to perform the work is not locally available. The Navy performs work at these locations primarily to comply with its homeporting policy and also when it is not practical to bring ships to the shipyard. For example, a substantial amount of TDY has been for nuclear submarine repair work at San Diego, where there is no local naval shipyard or private sector nuclear repair capability. Also, local capability is usually not considered when naval shipyard warranty work is involved and for advanced planning prior to a ship going to a naval shipyard for repairs. Because of data limitations, we could not identify the exact number of TDY assignments, but available data supports Navy officials’ judgments that most TDY assignments are performed because there is no local capability. For example, because San Diego does not have nuclear repair capability, all nuclear submarine repair work is performed by naval shipyard workers at San Diego. The total submarine work of 145,000 staff-days represents about 25 percent of the total TDY staff-days for fiscal years 1995-97. In some cases, the Navy believes it is cost-effective to send shipyard workers that are considered temporarily excess on TDY assignments to locations where there is a local private sector capability. The Navy reasons that the excess workers would have to be paid whether or not they are working and that the cost of travel and per diem is the only additional cost of using the excess workers. The travel and per diem costs are generally less than local private sector labor rates. 
Shipyard workers become temporarily excess when there is a reduction in the naval shipyards’ originally scheduled and budgeted workload for such reasons as ship deployment extensions, reductions in the scope of the projected ship repairs, force level changes, and funding reductions. In some cases, expected future workloads are used to justify retaining some excess shipyard personnel. Excess personnel are retained when the Navy determines that the excess is temporary and that the time and cost of reducing and reacquiring the needed personnel justifies retention. This is especially true when workload reductions take place during the fiscal year in which the work is scheduled to be performed. In such cases, naval shipyard personnel levels are set for the year, and according to the Navy, it is very difficult to make major adjustments to personnel levels due to Civil Service regulations. For example, during the latest reductions in force at the four naval shipyards, the reductions took about 12 months to complete, from initial planning to the time the employees were actually removed from the shipyards’ payrolls. The Navy’s policy to perform all ship repair work of 6 months or less at the ship’s homeport substantially increased the amount of TDY ship repair work performed in locations without a naval shipyard. Because crews remain with their ships when the ships need maintenance and repairs, the policy is intended to improve crew retention and quality of life by reducing time away from homeports. Since many of the Navy’s 23 ship homeports are not located near one of the four naval shipyards, the work is often performed by naval shipyard personnel on TDY assignments. Figure 1 shows the location of the four naval shipyards and some U.S. homeports. The number and type of ships located at each of the Navy’s 23 homeports is shown in appendix I.
As shown in table 1, a large number of ships are homeported at or near the Norfolk and Pearl Harbor Naval Shipyards and provide each with a large potential workload for which temporarily excess shipyard personnel can be effectively used. The temporarily excess personnel can work on ships homeported in the area without being on TDY status. The two shipyards, however, also use the personnel to do some work that requires TDY assignments. On the other hand, relatively few ships are homeported near the Portsmouth and Puget Sound Naval Shipyards. As a result, these shipyards perform more work that requires TDY. Table 2 shows, by shipyard, the percent of total direct labor staff-days each naval shipyard spent on TDY assignments during fiscal years 1995-97. As table 2 shows, Portsmouth and Puget Sound Naval Shipyards used about 480,000 direct labor staff-days, or about 83 percent of the estimated 580,000 direct labor staff-days naval shipyards used on TDY assignments during fiscal years 1995-97. Table 3 shows, by shipyard, the reported direct labor costs of TDY assignments and the related travel costs for fiscal years 1995-97. Portsmouth and Puget Sound Naval Shipyards expended about $164.6 million of the $193.1 million, or about 85 percent of the total TDY costs (direct labor costs plus travel costs). As noted earlier, available data indicates that most TDY assignments are based on the rationale that no local capability exists. In those cases where there is a local private sector capability, the cost-effectiveness rationale for TDY assignments is valid to the extent that naval shipyard personnel are temporarily excess. There is excess capacity and personnel in some naval shipyards. The Navy is retaining the excess personnel to meet anticipated future requirements. Meanwhile, the Navy is using TDY assignments and is reallocating work from the private sector to the naval shipyards to make maximum use of the excess shipyard personnel. 
The Navy states that most TDY assignments are made because the required ship repair capabilities do not exist locally. In these cases, the Navy reasons that cost-effectiveness is not an issue because there is no practical alternative. As noted earlier, the Navy cites the use of TDY assignments to perform nuclear submarine repairs at San Diego as an example where no local private shipyard has nuclear repair capability. We agree that there may not be a practical alternative to some TDY assignments, but the Navy does not identify the rationale for each of its TDY assignments or provide the basis for any determination that there is no other practical alternative. In the case of San Diego, we found no basis to question the Navy’s statement that no local private shipyard has the necessary nuclear repair capabilities; however, there may be other practical alternatives that are not being considered. For example, Newport News Shipbuilding, a nuclear repair capable private shipyard in Virginia, has established a presence in San Diego through its recent purchase of Continental Marine Industries. TDY assignments to locations with a private sector capability are likely to be cost-effective when shipyard personnel are temporarily excess. When naval shipyard personnel are temporarily excess, their cost is considered fixed and will be paid whether or not the personnel are performing repair work. We have reported that when labor costs are fixed, the only added costs to the government for the TDY assignments are travel, per diem, and other related costs. To determine the cost-effectiveness of TDY assignments, these travel-related costs would have to be compared to the average private shipyard staff-day rate for performing ship repairs. 
We examined several private shipyard staff-day rates and naval shipyard TDY costs and found that when the private shipyards’ staff-day rates were compared only to the naval shipyards’ TDY costs, the private shipyards’ costs were always higher, usually substantially higher. For example, the Puget Sound Naval Shipyard estimated that its average daily travel cost per worker for scheduled TDY work in San Diego in 1997 was about $116, while the average daily rate per private shipyard worker in San Diego was about $330. Assuming the productivity of the public and private sector personnel is fairly comparable, it would be cost-effective from a cost and operational standpoint to perform the work using temporarily excess personnel. TDY assignments are not likely to be cost-effective when the naval shipyards’ projected long-term workloads do not support existing personnel levels and local private shipyards are willing and capable of doing the work. In this case, both the naval shipyards’ direct labor costs and travel costs should be considered in making cost-effectiveness determinations. To illustrate, in the earlier example, the Puget Sound Naval Shipyard’s staff-day rate of $474 for fiscal year 1997 would have to be added to the $116 daily travel and per diem costs and the resulting $590 staff-day rate compared to the private sector’s rate of about $330 per day. Clearly, it would not be cost-effective for TDY shipyard personnel to do the work. Navy officials commented that naval shipyards have reduced personnel levels when long-term workload projections indicated a need to do so. In its fiscal years 1996-2001 business plan, the Defense Depot Maintenance Council showed large amounts of excess capacity at some of the naval shipyards. Table 4 shows the percent of expected excess capacity reported for each naval shipyard for fiscal years 1998-2001.
Included in the reported excess capacity calculations are workloads such as several major nuclear submarine refuelings that were later canceled. Such cancellations further increase excess capacity and personnel. Also, the calculations include shipyard workloads that require TDY assignments. Without these workloads, the reported excess capacity and the availability of shipyard personnel would be higher. Unless additional workloads are identified, the personnel will be excess to the shipyard. Excess naval shipyard capacity exists even though the Navy closed four naval shipyards through the Base Realignment and Closure process and reduced the personnel levels at the four remaining naval shipyards during fiscal years 1991-97 from about 36,000 to 22,000, a 38-percent reduction. The Navy believes that it needs to retain its current shipyard capacity and associated personnel levels to meet anticipated future requirements. Meanwhile, the Navy is using TDY assignments and is reallocating work from the private sector to more effectively use its excess capacity and personnel. For example, when three nuclear attack submarine refuelings scheduled for the Portsmouth Naval Shipyard were canceled, the Navy, rather than further reducing personnel, decided to provide the shipyard with ship repair workload either previously located in other naval shipyards or in the private sector. For fiscal years 1997-99, this workload included work associated with the repair of submarines homeported in Groton, Connecticut. In the past, part of this work was performed by Electric Boat, a private nuclear-capable shipyard located in Groton, and part was done by Portsmouth Naval Shipyard personnel on TDY assignments. However, for the last 3 years, the Portsmouth Naval Shipyard has been assigned all the depot-level workload at Groton. 
The Navy believes this assignment of TDY workload to Portsmouth is cost-effective because it needs to retain skilled personnel to perform planned submarine refuelings from fiscal year 1999 to 2005. Beginning in fiscal year 2000, the Navy plans to return part of the Groton workload to the private sector. The Navy’s plan to use TDY assignments and reallocate private sector workloads to the naval shipyards to make effective use of excess shipyard capacity and temporarily excess personnel appears reasonable. However, for TDY assignments to homeports without required ship repair capabilities, other practical alternatives may warrant consideration, such as making greater use of the private sector. A reduction in the number of planned labor-intensive refuelings of nuclear attack submarines and the homeporting of up to three nuclear aircraft carriers in San Diego could substantially increase future TDY assignments. Other factors that could affect the extent of future TDY assignments include potential reductions in the number of Navy ships, the regionalization of the Navy’s ship maintenance, and another round of base closures. The Navy has retained significant excess capacity at the Portsmouth Naval Shipyard to ensure that it, along with the Pearl Harbor and Norfolk Naval Shipyards, can handle the refueling of 11 nuclear attack submarines during fiscal years 1999-2005. Each refueling requires about 300,000 staff-days of work and costs about $215 million. If these refuelings are done as scheduled, the number of excess personnel available for TDY will be reduced. For example, at Portsmouth, about 32 percent of the fiscal year 1998 planned workload will require TDY assignments, but only 22 percent of the fiscal year 1999 planned workload will require TDY assignments because a nuclear attack submarine refueling is scheduled. If the refuelings are not done and shipyard capacity and associated personnel reductions are not made, TDY assignments are likely to increase. 
Since fiscal year 1993, the Navy has reduced planned submarine refuelings. For example, although the Navy planned to refuel 32 nuclear attack submarines during fiscal years 1993-2005, it has canceled 17 refuelings. Of the remaining 15 refuelings, four have been completed and the remaining 11 have been scheduled. Because the Navy reduced the number of refuelings, Portsmouth Naval Shipyard personnel were assigned to perform submarine repair work at the Groton and San Diego homeports. As a result, the shipyard’s TDY assignments increased substantially during this time. Further reductions in the number of planned refuelings would substantially decrease the on-site workloads planned for three naval shipyards, especially Portsmouth. By the end of fiscal year 2005, the Navy anticipates that as many as three nuclear aircraft carriers could be homeported at San Diego. The percentage of depot-level maintenance to be done by public and private shipyards was not settled at the time of our review; however, if the Navy does the work as planned, its use of TDY assignments will increase substantially. Because no private shipyard in San Diego currently has nuclear repair capabilities, the Navy plans to use personnel on TDY from Puget Sound Naval Shipyard, starting in October 1998, to do the nuclear work on the U.S.S. Stennis, the first nuclear carrier scheduled to be homeported in San Diego. The work entails operating a nuclear repair facility currently under construction as well as performing depot-level nuclear propulsion plant work and integrating it with nonpropulsion plant work done by local private contractors and ship personnel. 
The Navy said that this work would enable the shipyard to maintain the skilled workforce required to support Pacific Fleet aircraft carrier maintenance and that the cyclical nature of the nuclear workload makes it uneconomical to maintain more than a skeletal workforce of skilled Puget Sound shipyard personnel needed for engineering and for quality and radiological controls in San Diego. Under current Navy plans, Puget Sound personnel will use about 112,000 direct labor staff-days for nuclear work on the Stennis. Not all of the Navy’s work would be done at San Diego: planning, some engineering work, and some assembly would be done at Puget Sound. San Diego private shipyards would use about 53,000 direct labor staff-days for nonnuclear work. Newport News Shipbuilding, a nuclear repair capable private shipyard in Virginia, expressed interest in doing work in San Diego by submitting to the Navy an unsolicited proposal to integrate the nonnuclear propulsion plant work into the nuclear propulsion plant work schedule. The Navy’s plan for accomplishing the nuclear aircraft carrier work in San Diego has not been finalized. On March 13, 1997, the Under Secretary of Defense for Acquisition and Technology signed a memorandum that requires the Navy to develop a clear statement of work for use in a competition between Puget Sound Naval Shipyard and qualified private sector sources for the nuclear aircraft carrier work planned to be performed in San Diego. The statement is to be forwarded to the Deputy Under Secretary of Defense for Logistics by October 1998. If all the nuclear work is turned over to the private sector, the amount of TDY assignments would be substantially reduced, and excess capacity at the naval shipyards would increase unless personnel adjustments were made to reflect the workload reductions. 
TDY assignments could also be affected by (1) potential future reductions in the number of Navy ships, (2) the regionalization of Navy ship maintenance, and (3) another round of base closures. The Navy expects to reduce its fleet of ships from 354 in 1997 to 304 by 2006. Most recently, the Navy implemented a recommendation of the Quadrennial Defense Review that called for the inactivation of 2 nuclear attack submarines and 15 surface ships. As a consequence, the naval shipyards’ planned workload will be reduced by about 825,000 direct labor staff-days. This reduction could affect TDY assignments, depending on how the Navy reallocates its remaining shipyard workload. The Navy is streamlining and consolidating its maintenance functions in areas of fleet concentrations as part of its Regional Maintenance Program. Under this program, the Navy plans to ultimately integrate its intermediate- and depot-level maintenance and establish regional maintenance centers. A prototype center is under development in Pearl Harbor, Hawaii. According to Navy officials, the establishment of such centers will provide the Navy with greater flexibility for using excess ship repair personnel, without incurring TDY assignments. For example, as part of its Pearl Harbor Pilot Demonstration Project, the Navy is integrating its intermediate maintenance facility and the nearby naval shipyard. Personnel will be used interchangeably, provided they have the necessary skills. The Navy closed four naval shipyards and four major homeports during four rounds of base closures, which concluded in 1995. The Secretary of Defense requested an additional two rounds of base closures. If approved, TDY assignments could increase or decrease, depending on which (if any) homeports and shipyards would close. The Navy’s rationale for using temporarily excess naval shipyard personnel is generally sound from a cost and operational standpoint. 
However, in cases where shipyard personnel are sent on temporary duty because no local repair capabilities exist, there may be cost-effective private sector alternatives. Changes in naval shipyard personnel levels, workloads, and homeport locations could affect the use of TDY assignments. The planned nuclear attack submarine refuelings and the homeporting of up to three nuclear aircraft carriers in San Diego would likely have the most impact on TDY assignments in the near future. To ensure that Navy resources are used in the most cost-effective manner, we recommend that the Secretary of Defense direct the Navy to consider using the private sector for workloads that are performed routinely by naval shipyard personnel on temporary duty. Further, when reductions in future workloads are significant, we recommend that the Navy determine the extent to which it could reduce its shipyard capacity and associated personnel. In making these determinations, the Navy needs to ensure that all applicable statutory requirements are met. The Department of Defense (DOD) provided written comments on the draft of this report, which are presented in appendix III. DOD concurred with both of our recommendations. DOD also suggested several minor technical and editorial changes, which we have made, as appropriate. We conducted our review between July 1997 and February 1998 in accordance with generally accepted government auditing standards. The scope and methodology for our review are discussed in appendix II. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Subcommittee on Defense, Senate Committee on Appropriations, and the Subcommittee on National Security, House Committee on Appropriations. We are also sending copies of the report to the Secretaries of Defense and the Navy; the Chief of Naval Operations; and the Director, Office of Management and Budget. We will make copies available to others upon request. 
If you or your staff have any questions concerning this report, please contact me on (202) 512-8412 or my Assistant Director, George A. Jahnigen, on (202) 512-8434. Major contributors to this report are listed in appendix IV.

As of September 1997, the Navy had ships homeported in 23 locations. The number and type of ships in the homeports range from 76 ships, including aircraft carriers, surface ships, and submarines, to one surface ship. These factors can influence the number of TDY assignments at each homeport. Table I.1 shows the Navy’s homeports and the number and type of ships located at each one; the homeports listed include Norfolk, Va.; Little Creek, Norfolk, Va.; Bath, Me.; Earle, N.J.; Groton, Conn.; Portsmouth, N.H.; Charleston, S.C.; Mayport, Fla.; Pascagoula, Miss.; Ingleside, Tex.; Kings Bay, Ga.; San Diego, Calif.; Bremerton, Wash.; Everett, Wash.; Bangor, Wash.; Concord, Calif.; and Newport News, Va.

As required by section 366 of the National Defense Authorization Act for Fiscal Year 1998, we reviewed the Department of the Navy’s practice of using temporary duty (TDY) assignments of naval shipyard personnel to perform ship maintenance and repair work at homeports not having naval shipyards. Specifically, the act required us to review (1) the rationale supporting the Navy’s practice, (2) the cost-effectiveness of these assignments, and (3) the factors affecting future requirements for the practice. To determine the Navy’s rationale for using TDY assignments of naval shipyard personnel, we interviewed officials and obtained pertinent studies, briefings, and other documents from the offices of the Deputy Under Secretary of Defense for Acquisition and Technology; the Deputy Chief of Naval Operations for Logistics; the Assistant Secretary of the Navy for Research, Development, and Acquisition; and the Naval Sea Systems Command.
We also interviewed Atlantic and Pacific Fleet maintenance officials, visited the four naval shipyards, and interviewed shipyard officials to determine their views on TDY assignments and to obtain data on the extent of TDY assignments for fiscal years 1995-97. To determine what methodology would be appropriate to measure the cost-effectiveness of TDY assignments, we interviewed Navy officials and defense consulting officials from the Center for Naval Analysis, the Logistics Management Institute, and the Institute for Defense Analysis. We obtained their opinions on the appropriate methodology to use when the naval shipyards do or do not have adequate time to adjust their personnel levels to match workload changes. We then compared this methodology to the one we had previously used in our 1987 report on the cost-effectiveness of naval shipyards’ borrowing labor from one another to meet assigned workloads. We found them to be essentially the same. We then used the methodology to determine the cost-effectiveness of using TDY assignments for ship repairs. We assumed the average direct labor costs as fixed when a naval shipyard did not have adequate time to adjust its personnel to workload reductions. Consequently, we compared only the average additional cost to the government of travel-related expenses to the average private shipyard staff-day rate for performing ship repairs. If the shipyard’s travel-related costs were less than the private shipyard staff-day rate, we considered the use of TDY assignments to be cost-effective. However, when the shipyards had sufficient time to make needed personnel adjustments, we added the average naval shipyard direct labor costs to the average travel-related costs and compared this total amount to the average private shipyard staff-day rate. If a naval shipyard’s total costs were more than the private shipyard’s staff-day rate, we considered the use of TDY assignments not to be cost-effective. 
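The comparison methodology described above amounts to a simple decision rule, sketched below for illustration (Python is used here; the function and variable names are ours, and the dollar figures are the fiscal year 1997 Puget Sound/San Diego examples cited in this report):

```python
def tdy_is_cost_effective(travel_cost_per_day, private_rate_per_day,
                          navy_labor_rate_per_day=0.0, labor_is_fixed=True):
    """Compare the government's added daily cost of a TDY assignment
    with the private shipyard staff-day rate.

    When naval shipyard labor is a fixed (sunk) cost -- personnel are
    temporarily excess and will be paid either way -- only travel-related
    costs count as added cost. When the shipyard has time to adjust its
    personnel levels, the naval direct labor rate is added to the travel
    costs before the comparison.
    """
    added_cost = travel_cost_per_day
    if not labor_is_fixed:
        added_cost += navy_labor_rate_per_day
    return added_cost < private_rate_per_day

# Temporarily excess personnel: $116 travel cost vs. $330 private rate.
print(tdy_is_cost_effective(116, 330))  # True -- TDY is cost-effective

# Adjustable workforce: ($116 + $474) = $590 vs. $330 private rate.
print(tdy_is_cost_effective(116, 330, navy_labor_rate_per_day=474,
                            labor_is_fixed=False))  # False
```

This reproduces both outcomes in the report: travel costs alone ($116) fall below the private rate ($330), while the fully loaded rate ($590) does not.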
To obtain information on the cost components, we reviewed information generated by the shipyards’ management information and financial systems. To determine the average staff-day rate for private shipyards, we contacted the Office of the Supervisor for Shipbuilding at the Naval Sea Systems Command, which negotiates and administers ship repair contracts with the private sector. We found that the naval shipyards’ systems did not specifically identify or summarize the amount of direct labor staff-days spent on TDY assignments, nor did they identify the reasons for the TDY assignments. We, therefore, developed a data collection instrument that would gather the desired information, using the best available shipyard data and estimates. While the data was not precise or verifiable, it represented the best information on the staff-days expended by naval shipyards on TDY and travel, per diem, and other related travel costs resulting from TDY assignments. We used the estimated data primarily to show the relative magnitude of TDY use. To determine the factors affecting the future use of TDY assignments, we interviewed and obtained documents and other pertinent data from officials of the offices of the Deputy Under Secretary of Defense for Acquisition and Technology, the Deputy Chief of Naval Operations for Logistics, and the Naval Sea Systems Command. We also interviewed officials from the four naval shipyards, the consulting firms previously mentioned, and private shipyards in the San Diego area. The results of our review are based on the assumption that the current naval shipyard infrastructure would remain in place. Dennis A. De Hart, Evaluator-in-Charge Samuel S. Van Wagner, Senior Evaluator Jean M. Orland, Senior Evaluator
|
Pursuant to a legislative requirement, GAO reviewed the Navy's practice of using temporary duty assignments of naval personnel to perform ship maintenance and repair work at homeports without nearby naval shipyard capability, focusing on the: (1) rationale supporting the Navy's practice; (2) cost-effectiveness of these assignments; and (3) factors affecting future requirements for the practice. GAO noted that: (1) the Navy's rationale for temporary duty assignments is twofold; (2) such assignments are required to perform work at locations where no local public or private shipyards have the required depot-level maintenance capability; (3) most temporary duty assignments are for this reason; (4) the Navy performs work at such locations to comply with its policy to perform ship repairs of six months or less at the ship's homeport and when it is not practical to bring ships to the shipyard; (5) the Navy believes that using temporarily excess naval shipyard workers on temporary duty assignment is cost-effective, even when there is local private-sector capability because these workers will be needed in the future to perform ship repair work; (6) the Navy's rationale for sending temporarily excess naval shipyard personnel on temporary duty assignments appears reasonable from a cost and operational standpoint; (7) however, in some cases, other approaches may be more cost-effective; (8) the Navy is currently retaining some temporarily excess shipyard personnel to ensure that it can handle the planned refuelings of nuclear attack submarines for fiscal year (FY) 1999 and beyond; (9) retaining the personnel for these purposes appears reasonable, since the Navy has a need for the personnel; (10) it is following the same practice to perform nuclear ship repair work at San Diego because local private shipyards do not have nuclear capability; (11) however, other approaches, such as making greater use of the private sector, may warrant consideration; (12) possible changes to future 
ship repair workloads could affect the requirement for future temporary duty assignments and retention of current naval shipyard personnel levels; (13) for example, the Navy has cancelled 17 planned nuclear attack submarine refuelings since FY 1993; (14) further reductions in the number of planned refuelings would substantially decrease the on-site workloads planned for three naval shipyards, especially Portsmouth; (15) a proposal to homeport three nuclear aircraft carriers in San Diego, California, which does not have a local naval shipyard, could substantially increase temporary duty assignments; and (16) other factors that could affect the amount of future temporary duty assignments include: (a) further reductions in the number of Navy ships; (b) full implementation of the Navy's Regional Maintenance Program; and (c) a new round of base closures.
|
The FSM and the RMI are located in the Pacific Ocean just north of the equator, about 3,000 miles southwest of Hawaii and about 2,500 miles southeast of Japan. The FSM is a federation of four states and has a population of approximately 103,000 (as of 2010) scattered over many small islands and atolls. The RMI comprises 29 constituent atolls and five islands with a population of approximately 53,000 as of 2011. U.S. relations with the FSM and the RMI began during World War II when the United States ended Japanese occupation of the region. Beginning in 1947, the United States administered the region under a United Nations trusteeship. The four states of the FSM voted in a 1978 referendum to become an independent nation, while the RMI established its constitutional government and declared itself a republic in 1979. Under the trusteeship agreement, both newly formed nations remained subject to the authority of the United States until 1986. The United States, the FSM, and the RMI entered into the original Compact of Free Association in 1986, and from 1987 through 2003 the FSM and the RMI are estimated to have received about $2.1 billion in compact financial assistance. In 2003, the United States approved separate amended compacts with the FSM and the RMI that went into effect on June 25, 2004, and May 1, 2004, respectively. The amended compacts provide for direct financial assistance to the FSM and the RMI from 2004 to 2023, decreasing in most years. The amounts of the annual decrements are to be deposited in trust funds established for the FSM and the RMI; the annual decrement in grant funding is intended to steadily increase the trust funds so that earnings from the trust can provide a source of annual revenue after grants end in 2023 (see fig. 2 on page 8 of the report, GAO-13-675). In addition to receiving compact sector grants, the FSM and the RMI are eligible for a supplemental education grant each year. 
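The decrement mechanism described above can be sketched with illustrative numbers. All amounts below are hypothetical; the actual grant levels and decrements are set in the amended compacts, and investment earnings on the trust funds are ignored here:

```python
# Hedged sketch of the annual-decrement mechanism: each year the sector
# grant falls by the decrement, and the annual trust-fund deposit rises
# by the same amount. Figures are illustrative, not actual compact amounts.

def project_decrement(initial_grant: int, annual_decrement: int,
                      initial_trust_deposit: int, years: int):
    """Return (year, grant, trust_deposit) rows for a simple projection."""
    grant, deposit = initial_grant, initial_trust_deposit
    rows = []
    for year in range(1, years + 1):
        rows.append((year, grant, deposit))
        grant -= annual_decrement
        deposit += annual_decrement
    return rows

# Illustrative: an $80 million grant with an $800,000 annual decrement.
for row in project_decrement(80_000_000, 800_000, 16_000_000, 3):
    print(row)
```

Note the invariant: the grant plus the trust deposit stays constant each year, since the decrement shifts assistance from current grants into the trust fund rather than reducing the total.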
Separate from the funding authorized and appropriated under the amended compacts’ enabling legislation, the countries also receive other grants and other assistance from U.S. agencies. The legislation and fiscal procedures agreements for the amended compacts established oversight mechanisms and responsibilities for the FSM, RMI, and the United States. To strengthen the management and accountability and promote the effective use of compact funding, JEMCO and JEMFAC were jointly established by the United States and, respectively, the FSM and the RMI. Each five-member committee comprises three representatives from the United States government and two representatives from the corresponding country. JEMCO’s and JEMFAC’s designated roles and responsibilities include the following: reviewing the budget and development plans from each of the governments; approving grant allocations and performance objectives; attaching terms and conditions to any or all annual grant awards to improve program performance and fiscal accountability; evaluating progress, management problems, and any shifts in priorities in each sector; and reviewing audits called for in the compacts. The three countries are required to provide the necessary staff support to their representatives on JEMCO and JEMFAC to enable the parties to monitor closely the use of assistance under the compacts. Each country has established an agency dedicated to providing compact oversight and ensuring compliance with regulations in the amended compacts, grant award terms and conditions, and JEMCO and JEMFAC resolutions. Interior’s Office of Insular Affairs (OIA) has responsibility for the administration and oversight of the FSM and RMI compact sector and supplemental education grants. The Director of OIA serves as Chairman of both JEMCO and JEMFAC. 
The FSM and the RMI must adhere to specific fiscal control and accounting procedures and are required to submit annual audit reports, within the meaning of the Single Audit Act, as amended. Single audits are a key control for the oversight and monitoring of the FSM and RMI governments’ use of U.S. awards. As the U.S. agency with the largest grant awards to the FSM and the RMI, Interior is designated as the cognizant audit agency for FSM and RMI single audits. All U.S. agencies providing noncompact grants to the FSM and the RMI are responsible for administering those grants in accordance with Office of Management and Budget (OMB) requirements and agency regulations that include the Grants Management Common Rule. Under the common rule, U.S. agencies may consider a grantee as “high risk” if the grantee has a history of unsatisfactory performance, is not financially stable, has a management system that does not meet required standards, has not conformed to the terms and conditions of previous awards, or is otherwise irresponsible. In fiscal years 2007 through 2011, the FSM spent about two-thirds and the RMI spent about half of their total compact sector funds in the education and health sectors—$158 million for the FSM and $89 million for the RMI. (For a breakdown of sector compact expenditures and supplemental education grant expenditures for both countries during this period, see pages 14 to 21 of the report, GAO-13-675). In the FSM in fiscal year 2011, education sector compact and supplemental education grant funds together amounted to about 85 percent of total education expenditures, and health sector compact funds were about 66 percent of total health expenditures. Compact funds in the RMI also supported a significant portion of government expenditures in the education and health sectors. 
Education sector compact funds, supplemental education grants, and Ebeye special needs education funds constituted about 62 percent of the RMI’s total education expenditures in fiscal year 2011, while health sector compact funds and Ebeye special needs health funds accounted for about 33 percent of the RMI’s total health expenditures. We only reported specific expenditures for the FSM for fiscal years 2009 through 2011 because specific expenditure data for the FSM National Government and Chuuk were not presented in their single audits for fiscal years 2007 and 2008. Both countries spent significant amounts of the education and health sector compact funds for personnel. Concerned about the sustainability of sector budgets as compact funding declines through fiscal year 2023 due to the annual decrements, JEMCO and JEMFAC passed resolutions in 2011, capping budgetary levels for personnel in the education and health sectors of both countries at fiscal year 2011 levels. JEMCO and JEMFAC actions regarding annual decrement plans. JEMCO and JEMFAC resolutions in fiscal years 2009 and 2010 required the FSM National Government and state governments and the RMI government to complete plans that would address the annual decrements in compact funding and identify new revenue sources to replace compact grant assistance in 2023. By the March 2013 JEMCO and JEMFAC midyear meetings, the four FSM states had completed plans to address the annual decrements in compact sector funding through 2023; however, the FSM National Government and RMI government had not completed their plans. Also, in fiscal year 2013, U.S. members of JEMCO and JEMFAC announced that the United States would consider withholding certain fiscal year 2014 compact sector grant funds until the FSM National Government and RMI submitted their plans for addressing the annual decrements. Without such plans, the countries may not be able to sustain essential services in the education and health sectors. 
At the annual JEMCO and JEMFAC meetings in August 2013, the committees withheld annual sector funds from the FSM National Government and RMI government because they failed to provide the required plans to address the annual decrements. In September 2013, however, JEMCO allocated sector grant funds to the four FSM states, which provided the required plans, but continued withholding funds from the FSM National Government because it failed to meet the requirements of the JEMCO resolution requiring the plan. In October 2013, JEMFAC provided sector grant funds to the RMI with the stipulation that no sector funds would be approved in fiscal year 2015 unless the RMI fulfilled the terms of the JEMFAC resolution requiring a decrement plan. In November 2013, the FSM National Government provided OIA with a plan detailing the National Government’s long-term fiscal framework, which includes a burden-sharing commitment to the four FSM states to help address the decrement. Among the actions the National Government has taken to address the decrement is a new law modifying the annual compact distribution formula, reducing its share of compact sector grants from 10 percent to 5 percent; the goal in providing an additional 5 percent in compact grant funds to the states is to ensure that priority education and health service needs are not compromised as the annual compact allocations decrease. The U.S. members of JEMCO have yet to determine whether the long-term fiscal framework plan from the FSM National Government meets the decrement plan requirements. Reported FSM and RMI infrastructure spending. In the FSM, in fiscal years 2004 through 2013, approximately $229 million in compact funds were allocated to infrastructure, and of that about $106 million has been expended, according to OIA. Delays in establishing JEMCO-approved priorities and unresolved land titling issues affected the construction and maintenance of some health and education facilities in the FSM. 
During fiscal years 2007 through 2012, the FSM completed 6 education-related projects on a JEMCO-approved list of 19 priority projects, and other projects are under way. From 2004 through 2013, approximately $106 million in compact funds were allocated to infrastructure in the RMI, and the RMI expended about $95 million on infrastructure projects, including infrastructure maintenance, according to OIA. The RMI stated it has constructed or renovated over 200 classroom facilities in the education sector, completed 45 projects in the health sector, and conducted essential maintenance at its two hospitals. Data reliability issues hindered our assessment of progress by the FSM and RMI in both the education and health sectors for fiscal years 2007 through 2011. Although both countries tracked annual indicators in these sectors to measure progress during this period, we encountered data reliability issues in the subsets of indicators we examined. We determined that eight of the subset of nine FSM education indicators we reviewed could not be used to assess progress over time because of such issues as incomplete data and inconsistent definitions and data collection. For example, we found that the four FSM states did not use common definitions for some indicators; consequently, the education indicator reports we reviewed did not contain consistent data for these indicators and comparisons could not be made across states. In the RMI, we determined that data for three of the subset of five education indicators we reviewed could not be used to assess education sector progress for the compact as a whole because of issues such as lack of data from the country’s outer islands, inconsistencies in reported data for some years, and revisions to data with no explanation. For all five of the subset of FSM health indicators we reviewed, we determined that the data were not sufficiently reliable to assess progress for the compacts as a whole. 
For example, for the indicator that all essential drugs were to be available 80 percent of days, we identified problems with the source documents used in the calculations in Chuuk and Pohnpei, calling into question the reliability of the data presented in the health indicators report. In the RMI, of the subset of five health indicators we reviewed, we determined one was sufficiently reliable and two were not sufficiently reliable to assess progress because of various issues with data collection and reporting. For example, we determined that data for immunization coverage for 2-year-olds and the child mortality rate were not sufficiently reliable due to the timeliness and accuracy of reporting and low coverage rates for data from the outer islands. For the remaining two RMI health indicators we examined, we had no basis to judge the reliability of the data. In much of their reporting on their education and health indicators, the FSM and RMI have noted data reliability problems and some actions they have taken to address them. JEMCO and JEMFAC have also raised concerns about the reliability of the FSM’s education and health data and the RMI’s health data and required that each country obtain an independent assessment and verification of these data; neither country has met that requirement. Without reliable data, the countries cannot assess progress toward their goals in the education and health sectors and cannot effectively use results data for setting priorities and allocating resources aimed at improving performance. The lack of reliable data also hampers the ability of JEMCO and JEMFAC to oversee compact expenditures and assess the countries’ progress toward all their goals in the education and health sectors. The single audit reports we reviewed indicated challenges to ensuring accountability of compact and noncompact U.S. funds in the FSM and RMI. 
In the FSM, although the single audit reports for Chuuk and Pohnpei state governments demonstrated improvement in financial accountability, the FSM National Government single audit reports indicated that the government faced financial accountability challenges. For example, the FSM National Government’s 2011 single audit report contained several repeat findings—problems noted in previous audits that had not been corrected for several years—and identified problems with the extent of noncompliance with program requirements, such as preparing required quarterly reports. RMI single audit reports for fiscal years 2006 through 2011 demonstrated an increase in material weaknesses in noncompliance with the requirements of federal programs. Some findings were related to compact grants and others to noncompact funding. Furthermore, several of the weaknesses were not corrected over several years. To improve financial accountability, OIA led actions that resulted in the creation of the Chuuk Financial Control Commission, but OIA has not coordinated with other U.S. agencies regarding the risk status of the FSM and the RMI for noncompact funds. Although OIA has a lead role regarding audit matters, it has not formally coordinated with other U.S. agencies to address audit findings, nor has it assessed whether its own noncompact grants should be classified as high risk. Moreover, other federal agencies whose grants may be at risk have not routinely considered designating either country as a high-risk grantee. Such consideration could enable U.S. agencies to enforce conditions and restrictions on noncompact grant funds they provide, thus improving the oversight and management of the funds. We also found that the offices responsible for compact administration in the FSM, RMI, and United States faced limitations hindering their ability to conduct compact oversight. 
FSM officials told us that they need additional staff to be able to conduct more oversight activities and also noted that the Division of Compact Management is hampered by its lack of authority to ensure that the National Government and the four states comply with compact requirements. According to RMI officials, staff constraints in the Office of Compact Implementation limit its ability to conduct oversight and enforce compact requirements across multiple sectors and operations in numerous atolls. Additionally, officials in this office told us that it is hampered by its lack of authority to require that the RMI ministries implementing projects funded by sector grants comply with compact requirements. Finally, we found that OIA experienced a staffing shortage that disproportionately affected compact grant oversight compared to other OIA activities, with 6 of 11 planned positions unfilled in 2012 and 5 of 11 unfilled in 2013 (for details, see pages 51-53 of the report, GAO-13-675). Although the majority of grants administered by OIA are amended compact grants, OIA’s amended compact oversight function was disproportionately affected by staffing shortages, which affected its ability to ensure compact funds were efficiently and effectively used. In our September 2013 report, we directed five recommendations to Interior to improve oversight and financial accountability of U.S. compact and noncompact funds allocated to the FSM and RMI. Improving oversight through JEMCO and JEMFAC. We recommended that Interior take all necessary steps to improve the ability of JEMCO and JEMFAC to ensure that the FSM and RMI (1) complete satisfactory plans to address annual decrements in compact funds, (2) produce reliable indicator data used to track progress in education and health, and (3) address all single audit findings in a timely manner. Our recommendations suggested the Director of Insular Affairs could accomplish the accountability improvements by coordinating with other member U.S. 
agencies of both oversight committees to have the committees take all necessary steps, or by taking all necessary steps directly, acting in his capacity as administrator of compact grants. In its written response to a draft of our report, Interior noted examples of how it and other U.S. members of JEMCO and JEMFAC have worked to make improvements in the three areas mentioned in the recommendations. At both their annual meetings in August 2013, JEMCO and JEMFAC passed resolutions in response to the recommendations in our draft report related to decrement planning, data reliability, and addressing single audit findings. Consulting with other agencies about possible high-risk designation. In order to improve financial accountability of noncompact U.S. grant assistance provided to the FSM and the RMI, we recommended that Interior consult with other grantor agencies to determine whether the FSM National Government, any FSM state government, or the RMI government meets the criteria to be designated as a high-risk grant recipient for noncompact funds, or whether other steps should be taken to improve accountability. In its written response to a draft of our report, Interior noted that it cannot direct other agencies to take action with regard to any grant-specific issues and stated it was unaware of any precedent for federal agencies to jointly designate a grantee as high risk; however, Interior said it would discuss this approach with other federal agencies. Correcting the staffing shortage related to compacts oversight. To ensure that Interior is providing appropriate resources for oversight and monitoring of the FSM and RMI compacts, we recommended that the Secretary of the Interior take actions to correct the disproportionate staffing shortage related to compact grant implementation and oversight. Interior concurred with this recommendation, as it did with the others. 
However, Interior’s written response to this recommendation indicated that it considers corrective action to be contingent on its receiving funding for new positions through the annual budget process. The intent of our recommendation is to have Interior work within its actual funding levels, whatever they may be, to correct what we observed to be a misalignment in how it allocates its staff. FSM and RMI responses. In its written comments on our draft report, the FSM National Government agreed on the importance of the three issues that were the focus of our JEMCO-related recommendation to Interior. The FSM identified activities under way to plan for the decrement and cited implementation of a contract to assess the national education system’s ability to produce valid and reliable data, as well as efforts to review the quality of health indicators with government staff. The FSM remarked on our report’s discussion of the possibility of achieving increased accountability over noncompact grant funds through a high-risk designation, noting that it was reassured that the process involved in a high-risk designation is not an arbitrary one. In its written comments on our draft report, the RMI government stated its belief that it had submitted adequate plans to JEMFAC regarding the medium-term budget and investment framework and the decrement. The RMI generally agreed with our findings of data reliability problems in both the education and health sectors and cited challenges in data collection in both sectors, noting that its Ministry of Health was seeking external assistance to improve data quality. 
With regard to our recommendation that Interior should consult with other agencies to determine whether the RMI meets the criteria to be designated as a high-risk grant recipient for noncompact funds, or whether other steps should be taken to improve accountability, the RMI stated that internal controls are now in place to detect and deter fraud, waste, and noncompliance with the fiscal procedures agreement or other U.S. federal regulations. For that reason, the RMI Ministry of Finance does not believe that any special conditions or restrictions for unsatisfactory performance or failure to comply with grant terms are warranted. We addressed several of the comments in the RMI’s letter by adding or updating information in the report or by noting areas of RMI concern. In reprinting the letter in the report, we also provided specific responses to a number of the comments. Chairman Fleming, Ranking Member Sablan, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Emil Friberg (Assistant Director), Ashley Alley, Christina Bruff, David Dayton, Martin De Alteriis, Julie Hirshen, Jeffrey Isaacs, and Kathleen Monahan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
In 2003, the U.S. government approved amended Compacts of Free Association with the FSM and the RMI, providing for a total of $3.6 billion in assistance over 20 years. This testimony draws from GAO's September 2013 report on the use and accountability of these funds to review (1) the FSM's and RMI's use of compact funds in the education and health sectors; (2) the extent to which the FSM and RMI have made progress toward stated goals in education and health; and (3) the extent to which oversight activities by the FSM, RMI, and U.S. governments ensure accountability for compact funding. Like the report, this testimony also provides information on infrastructure spending in the education and health sectors. GAO reviewed relevant documents and data, including single audit reports; interviewed officials from Interior, other U.S. agencies, and the FSM and RMI; assessed data reliability for subsets of both countries' education and health indicators; and visited compact-funded education and health facilities in both countries. In fiscal years 2007 through 2011, the Federated States of Micronesia (FSM) and the Republic of the Marshall Islands (RMI) spent at least half their compact sector funds in the education and health sectors. Because both countries spent significant amounts of compact funds on personnel in those sectors, the U.S.-FSM and U.S.-RMI joint management and accountability committees capped budgets for personnel in those sectors at fiscal year 2011 levels due to concerns about the sustainability of sector budgets as compact funding continues to decline through fiscal year 2023. As required by the committees, the FSM states completed plans to address annual decreases in compact funding; however, as of August 2013, the FSM National Government and the RMI had not submitted plans to address the annual decreases. Without such plans, the countries may not be able to sustain essential services in the education and health sectors. 
Data reliability issues hindered GAO's assessment of each country's progress in the education and health sectors for fiscal years 2007 through 2011. Although both countries tracked annual indicators in these sectors during this period, GAO determined that many of these data were not sufficiently reliable for the purpose of measuring progress for the compacts as a whole over the time frame. In education, GAO found 3 of 14 indicators in the subsets of indicators it reviewed for both countries to be sufficiently reliable. GAO found a variety of education data reliability problems; for example, the four FSM states did not use common definitions for some indicators, resulting in inconsistent data for those indicators, and in the RMI some indicators lacked data from the outer islands. In the health sector, GAO determined that data for all 5 of the subset of indicators it reviewed in the FSM were not sufficiently reliable, and in the RMI, 1 health indicator was sufficiently reliable, 2 were not sufficiently reliable, and for 2 others, GAO had no basis to judge due to insufficient information. The joint management and accountability committees have raised concerns about the reliability of FSM's education and health data and RMI's health data and required each country to obtain an independent assessment and verification of these data; both countries have yet to meet that requirement. Without reliable data, the countries cannot assess progress toward their goals in the education and health sectors. The single audit reports GAO reviewed indicated challenges to ensuring accountability of U.S. funds in the FSM and RMI. For example, the governments' single audits showed repeat findings and persistent problems in noncompliance with U.S. program requirements, such as accounting for equipment. 
The Department of the Interior (Interior) has taken steps regarding accountability of compact funds, such as establishing a financial control commission in one FSM state, but Interior has not coordinated with other U.S. agencies about the risk status of the FSM and RMI and whether to designate either country as a high-risk grantee. Furthermore, the FSM, RMI, and U.S. offices responsible for compact administration faced limitations hindering their ability to conduct compact oversight. For example, Interior's Office of Insular Affairs (OIA) experienced a staffing shortage that disproportionately affected compact grant oversight compared to other OIA activities, leaving 6 of 11 planned positions for compact oversight unfilled as of 2012 and 5 of 11 still unfilled as of 2013. GAO is not making new recommendations. In its September 2013 report, GAO recommended that, among other actions, Interior should (1) take all necessary steps to ensure the reliability of FSM and RMI indicators in education and health, (2) assess whether to designate each country as high risk, and (3) take actions to correct its disproportionate staffing shortage related to compact grant implementation and oversight. Interior generally agreed with the recommendations and identified actions taken, ongoing, and planned.
The IDES process begins at a military treatment facility when a physician identifies one or more conditions that may interfere with a servicemember’s ability to perform his or her duties. The process involves four main phases: the Medical Evaluation Board (MEB), the Physical Evaluation Board (PEB), transition out of military service (transition), and VA benefits. MEB phase: In this phase, medical examinations are conducted and decisions are made by the MEB regarding a servicemember’s ability to continue to serve in the military. This phase involves four stages: (1) the servicemember is counseled by a DOD board liaison on what to expect during the IDES process; (2) the servicemember is counseled by a VA case manager on what to expect during the IDES process and medical exams are scheduled; (3) medical exams are conducted according to VA standards for disability compensation exams by VA, DOD, or contractor physicians; and (4) exam results are used by the MEB to identify conditions that limit the servicemember’s ability to serve in the military. Also during this stage, a servicemember can seek a rebuttal, an impartial medical review by a physician not on the MEB, or both. PEB phase: In this subsequent phase, decisions are made about the servicemember’s fitness for duty, disability rating, and DOD and VA disability benefits, and the servicemember has opportunities to appeal those decisions. This includes: (1) the informal PEB stage, an administrative review of the case file by the relevant military branch’s PEB without the presence of the servicemember; and (2) the VA rating stage, where a VA rating specialist prepares a rating that covers the conditions that DOD determined made a servicemember unfit for duty and any other conditions claimed by the servicemember to VA. This rating is prepared for use by both agencies in determining disability benefits.
In addition, servicemembers have several opportunities to appeal different aspects of their disability evaluations: a servicemember dissatisfied with the decision on whether he or she is fit for duty may request a hearing with a “formal” PEB; a servicemember who disagrees with the formal PEB fitness decision can, under certain conditions, appeal to the reviewing authority of the PEB; and a servicemember can ask VA to reconsider its rating, but only for conditions found unfitting by the PEB. Transition phase: If the servicemember is found unfit to serve, he or she enters the transition phase and begins the process of separating from the military. During this time, the servicemember may take accrued leave. Also, DOD board liaisons and VA case managers provide counseling on available benefits and services, such as job assistance. VA benefits phase: A servicemember found unfit and separated from service becomes a veteran and enters the VA benefits phase. VA finalizes its disability rating after receiving evidence of the servicemember’s separation from military service. VA then starts to award monthly disability compensation to the veteran. DOD and VA established timeliness goals for the IDES process to provide VA benefits to active duty servicemembers within 295 days of being referred into the process, and to reserve component members within 305 days (see fig. 1). DOD and VA also established interim timeliness goals for each phase and stage of the IDES process. The overall time frames are intended to represent an improvement over the legacy disability evaluation system, which was estimated to take 540 days to complete. In addition to timeliness, the agencies also established a performance goal of having 80 percent of servicemembers satisfied with the IDES process. DOD measures satisfaction through surveys conducted after the completion of the MEB, PEB, and transition phases.
Each survey consists of approximately 30 questions, including 4 questions that ask about the servicemember’s satisfaction with the overall IDES process up to that point. Reported satisfaction rates for each phase are based on an average of responses to these four questions, and reported overall satisfaction with IDES (which is used to track the percent satisfied under the performance goal) is an average of satisfaction rates for the three phases. From the original 3 pilot military treatment facilities in the Washington, D.C., area, the IDES has expanded to 139 military treatment facilities in the U.S. and several other countries. DOD and VA first added 24 military treatment facilities to the pilot in fiscal years 2009 and 2010, bringing the pilot total to 27. In 2010, DOD and VA leadership decided to implement the IDES worldwide, and did so in 4 stages between October 2010 and September 2011, adding 112 military treatment facilities. As IDES expanded, the number of new cases enrolled in IDES has also increased, totaling 18,651 in fiscal year 2011 (see fig. 2). IDES caseloads vary by service, but the Army manages the bulk of IDES cases. Of new cases referred to IDES in fiscal year 2011, about 64 percent were in the Army, and much of the growth in caseload has been in the Army. Additionally, active duty servicemembers make up the majority of IDES cases, with about 88 percent of new cases in fiscal year 2011 involving this group (see fig. 3). IDES timeliness has worsened since the inception of the program. Since fiscal year 2008, the average number of days for servicemember cases to be processed and receive benefits increased from 283 to 394 for active duty cases (compared to the goal of 295 days) and from 297 to 420 for reserve component cases (compared to the goal of 305 days) (see fig. 4). Along with increasing average processing times, the percent of IDES cases awarded benefits within timeliness goals has steadily declined.
DOD’s and VA’s current goal is to complete 60 percent of IDES cases on time. In fiscal year 2008, an average of 63 percent of cases for active duty servicemembers and 65 percent for reservists completed the process and received benefits within the timeliness goals; by fiscal year 2011 this was down to 19 and 18 percent, respectively (see fig. 5). These trends also hold when considering all cases that completed the IDES process regardless of outcome, although overall processing times were shorter. (See app. III for more information on case processing times regardless of outcome.) When examining timeliness across the four phases that make up IDES, data show that average processing times regularly exceeded goals for three—MEB, Transition, and VA Benefits. For example, for cases that completed the MEB phase in fiscal year 2011, active duty and reserve component members’ cases took an average of 181 and 188 days, respectively, to be processed, compared to goals of 100 and 140 days. For the PEB phase, processing times increased over time, but were still within the established goal of 120 days. Along with increasing average processing times, the percentage of cases meeting goals for most phases has generally declined (see fig. 6). In particular, the MEB and Transition phases have lower percentages of cases meeting goals than the other phases in most years, especially for active duty cases. As noted above, the MEB phase was a key contributor to increases in overall processing times between 2008 and 2011 for both active duty servicemembers and reservists for cases that completed the IDES process regardless of outcome (table 1). To obtain a better understanding of more recent timeliness trends within the MEB phase, GAO analyzed MEB timeliness of all cases—all fiscal years combined—that completed the MEB process by sorting them into two groups: (1) those that completed the entire IDES process, and (2) those that had not yet completed IDES but completed the MEB phase.
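The timeliness measures discussed throughout this section reduce to simple arithmetic over case records. A minimal sketch follows; the record layout and function name are invented for illustration, while the 295- and 305-day goals are the overall IDES goals stated above.

```python
# Sketch of the two timeliness measures used in this section: average
# processing time and percent of cases meeting the goal, by component.
# The record layout is hypothetical; the day counts in the example echo
# figures cited in the text.

IDES_GOALS = {"active": 295, "reserve": 305}  # overall goals, in days

def timeliness_summary(cases):
    """cases: iterable of dicts with 'component' and 'days' keys."""
    summary = {}
    for component, goal in IDES_GOALS.items():
        days = [c["days"] for c in cases if c["component"] == component]
        if not days:
            continue
        summary[component] = {
            "average_days": sum(days) / len(days),
            "pct_meeting_goal": 100 * sum(d <= goal for d in days) / len(days),
        }
    return summary

# Example: two active duty cases and one reserve component case.
sample = [
    {"component": "active", "days": 283},
    {"component": "active", "days": 394},
    {"component": "reserve", "days": 420},
]
print(timeliness_summary(sample))
```

In this example, one of the two active duty cases meets the 295-day goal (50 percent on time), while the reserve case exceeds its 305-day goal.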
As shown in figure 7, for the group that completed IDES, 30 percent of active duty servicemembers and 18 percent of reservists missed the goal by more than 90 days. For those still in IDES, representing more recent data, the picture is slightly better for active duty servicemembers, with 37 percent of cases meeting the MEB goal and 25 percent missing the goal by more than 90 days. However, the percentage of reserve component members who missed the goal by more than 90 days increased from 18 to 28 percent. For those servicemembers who were still enrolled in the MEB phase as of December 2011, the data show that 41 percent of active duty and 33 percent of reserve component servicemember cases had already exceeded the goal processing times (see fig. 8). Of these, 15 percent of active duty and 10 percent of reserve component servicemember cases missed the goal by more than 90 days. Within the MEB phase, significant delays have occurred in completing medical examinations (the medical exam stage) and delivering an MEB decision (the MEB stage). For cases completing the MEB phase in fiscal year 2011, 31 percent of active duty and 29 percent of reservist cases met the 45-day goal for the medical exam stage, and 20 percent of active duty and 17 percent of reservist cases met the 35-day goal for the MEB stage. Officials at some sites we visited told us that MEB phase goals were difficult to meet and not realistic given current resources. For example: Some military officials noted that they did not have sufficient numbers of doctors to write the narrative summaries of exam results needed to complete the MEB stage in a timely manner. Officials at one facility noted that while the facility has 7 doctors, it would need 11 additional doctors and 10 technician assistants to process cases through the initial medical exam and other additional disability-specific examinations in a timely manner.
Further, officials at another Army base we visited noted that there was a shortage of doctors and DOD board liaisons and that they had difficulty recruiting such staff due to the remote location of the base. At all the facilities we visited, officials told us DOD board liaisons and VA case managers had large caseloads. While DOD has established a goal of 1 board liaison for every 20 servicemembers, the ratios varied widely by military treatment facility, ranging from 1:1 to 1:75 according to recent data. Because of high caseloads and a reported increase in the complexity of cases, staff at one facility reported a liaison-to-servicemember ratio of 1:80 and noted that liaisons must often prioritize cases to deal with the most pressing issues first. As a result, cases that might otherwise be quick to process take longer simply because they are waiting to be processed. Liaisons are often working overtime and weekends to keep up with cases. Monthly data produced by DOD subsequent to the data we analyzed show significantly improved timeliness for the medical exam stage (66 percent of active duty cases met the goal in June 2012) and some improvement for the MEB stage (40 percent of active duty cases met the goal in June 2012). However, it is too early to tell whether these improvements will continue going forward. (See app. III for DOD reported monthly data, October 2011 – June 2012.) Since fiscal year 2008, the majority of cases have completed the PEB phase within the 120-day goal; however, PEB timeliness has still worsened over time. In 2011, 78 percent of active duty and 62 percent of reservist cases that completed the entire IDES process met the PEB goal. The average processing time was 93 days for active duty servicemembers and 116 for reservists (see table 2).
Despite meeting the overall PEB goal in fiscal year 2011, established goals were not met for any of the interim PEB stages, including the informal PEB and VA rating stages, which are the two stages all servicemembers must complete. For all cases that completed the PEB phase in fiscal year 2011, only 38 percent of active duty and 38 percent of reservists’ cases received an informal PEB decision within the 15 days allotted. Further, only 32 percent of active duty and 27 percent of reservist cases received a preliminary VA rating within the 15-day goal (see table 3). Regarding delays with the VA rating, VA officials told us that staffing has been a challenge at their IDES rating sites and that this has slowed case processing. Monthly data produced by DOD subsequent to the data we analyzed show similar trends for the informal PEB and VA preliminary rating stages. As of June 2012 (most recent data available), active duty cases showed slight improvements in timeliness for the informal PEB stage (41 percent of cases meeting the goal and processing times averaging 24 days). The VA rating stage, on the other hand, showed slight declines in timeliness (31 percent of cases meeting the established goal and processing times of 35 days) relative to fiscal year 2011 averages for active duty servicemembers. However, as noted before, it is too early to tell the extent to which such trends will continue. (See app. III for DOD reported monthly data, October 2011 – June 2012.) Also during this phase, IDES planners allocated the majority of overall PEB processing time (75 out of the 120 days) for appeals—including a formal PEB hearing and a reconsideration of the VA ratings. According to officials, while the three appeal stages do not happen for every case, appeals can significantly increase processing times for any one case.
However, only 20 percent of cases completed in fiscal year 2011 actually had any appeals, calling into question DOD and VA’s assumption on the prevalence and average effect of appeals, and potentially masking processing delays in other mandatory parts of the PEB phase. The transition phase has consistently taken longer than its 45-day goal—almost twice as long on average. While processing times improved slightly for cases that completed this phase in fiscal year 2011 (from 79 days in fiscal year 2010 to 76 days in fiscal year 2011 for active duty cases), timeliness has remained consistently problematic since fiscal year 2008 (see table 4). DOD lacks comprehensive data on how servicemembers spend their time in the transition phase, which includes many different activities related to separation from the military. These activities vary widely depending on the case. For example, during this phase servicemembers receive mandatory training such as job training through the Transition Assistance Program and may also receive counseling such as pre-discharge Vocational Rehabilitation and Employment counseling. In addition, servicemembers may be placed on temporary duty while house hunting, or to allow for a servicemember’s children to complete the school year before moving. Servicemembers may also take earned leave time—to which they are entitled—before separating from the service. For example, an Army official said that Army policy allows servicemembers to take up to 90 days of earned leave prior to separating, and that average leave time was about 80 days. Because many of these activities can occur simultaneously or in small intermittent segments of time, DOD officials said it is difficult to track which activities servicemembers participate in or determine how much time each activity takes. DOD is exploring options for better tracking how time is spent in this phase.
Because a potentially substantial amount of the time in this phase may be for the personal benefit of servicemembers, DOD recently began reporting time in IDES with and without the transition phase included. Processing time improved somewhat for the benefits phase (from 48 days in fiscal year 2010 to 38 days in fiscal year 2011), but continued to exceed the 30-day goal for active duty servicemembers (see table 5). Several factors may contribute to delays in this final phase. VA officials told us that cases cannot be closed without the proper discharge forms and that sometimes they do not receive this information in a timely manner from the military services. Additionally, if data are missing from the IDES tracking system (e.g., the servicemember already separated, but this was not recorded in the database), processing time will continue to accrue for cases that remain open in the system. Officials could not provide data on the extent to which these factors had an impact on processing times for pending cases, but said that once errors are detected and addressed, reported processing times are also corrected. In addition to timeliness, DOD and VA evaluate IDES performance using the results of servicemember satisfaction surveys. In principle, all members have an opportunity to complete satisfaction surveys at the end of the MEB, PEB, and transition phases; however, under current survey procedures servicemembers become ineligible to complete a survey for either the PEB or transition phases if they did not complete a survey in an earlier phase. Additionally, servicemembers who start but do not complete a phase are not surveyed. As such, DOD may be missing opportunities to obtain input from servicemembers who did not complete a prior survey or exited IDES in the middle of a phase. Further, response rates may be affected because DOD does not survey servicemembers once they separate from the service and become veterans.
While it is not necessary for DOD to survey all servicemembers at the end of every phase, the percentage and characteristics of servicemembers covered by the survey (i.e., who completed a phase and were ultimately interviewed) may be insufficient to establish that the survey results are representative of servicemember satisfaction, especially for later phases. (See table 6 for response and coverage rates.) DOD officials recently told us that they will consider alternative survey eligibility requirements, including working with the Office of Management and Budget for permission to interview veterans. (For additional information regarding the timing of the survey, see app. II.) In addition, alternate survey measures show lower satisfaction rates than those reported by DOD. Using DOD’s measure, we found an overall satisfaction rate of about 67 percent since the inception of IDES. DOD defines a servicemember as satisfied if the average of his or her responses across several surveys is above 3 on a 5-point scale, with 3 denoting neither satisfied nor dissatisfied. However, using our alternate measure that defines servicemembers as satisfied only when all of their responses are 4 or above, we calculated the satisfaction rate to be about 24 percent (see fig. 9). Our calculation is a more conservative measure of satisfaction, because it rules out the possibility that a servicemember is deemed “satisfied” even when he or she is dissatisfied on one or more questions in the scale. While not incorrect, DOD’s scale can mask pockets of servicemember dissatisfaction. For example, an individual may indicate that he or she is very dissatisfied with one phase of the program, but satisfied with other phases, and the overall satisfaction score can be the same as one for a servicemember who is generally satisfied across all phases of the process.
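The difference between the two scoring rules can be made concrete with a short sketch. The responses and function names below are hypothetical, but the rules follow the two measures described above: DOD counts a member as satisfied if the average response exceeds 3, while the more conservative alternate requires every response to be 4 or above.

```python
# Two ways of scoring one servicemember's responses on a 5-point scale
# (1 = very dissatisfied, 5 = very satisfied). The responses are
# hypothetical; the scoring rules follow the two measures in the text.

def dod_satisfied(responses):
    # DOD's measure: satisfied if the average response is above 3
    # (3 = neither satisfied nor dissatisfied).
    return sum(responses) / len(responses) > 3

def conservative_satisfied(responses):
    # Alternate measure: satisfied only if every response is 4 or above.
    return all(r >= 4 for r in responses)

# A member very dissatisfied with one item but satisfied with the rest:
mixed = [5, 5, 5, 1]
print(dod_satisfied(mixed))           # True: the single 1 is averaged away
print(conservative_satisfied(mixed))  # False: the dissatisfaction registers
```

The mixed case illustrates how averaging can classify a member as satisfied despite a strongly negative response on one item.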
Measuring satisfaction, or even dissatisfaction, in different ways may provide a more complete picture of satisfaction and how it varies in different circumstances, and thus may reveal areas where DOD could focus on improving management and performance. Finally, using either DOD’s or our calculated measure, we found that overall satisfaction did not vary much according to differences in the experiences of servicemembers. For example, our model estimated that satisfaction varied by no more than approximately five percentage points across branch, component, disenrollment outcome, sex, MEB exam provider, enlisted and officer personnel classes, and the number of claimed and referred conditions. While lack of variation could be a positive outcome signaling consistent treatment, it could equally mean that the survey does not measure opinions in enough detail to discriminate among servicemembers’ experiences. Either way, such results provide little insight into identifying areas for improvement or effective practices. Further, while we found some association between servicemembers’ satisfaction and the timeliness of their case processing, we also found many servicemembers were highly dissatisfied even when their cases were completed on time, and many were highly satisfied even when their cases were not. For example, 68 percent of those who said that PEB timeliness was “very poor” completed the phase on time, and 55 percent of those who said that MEB timeliness was “very good” did not complete the phase on time. The lack of variation and/or correlation between satisfaction and experiences of servicemembers—coupled with low coverage rates—raises questions about the value of the survey results as a performance measure and program evaluation tool. (See app. II for more information on servicemember satisfaction results.) DOD is reconsidering its options for measuring customer satisfaction, but has yet to select a particular approach.
As noted above, possible changes might include widening the criteria for who is eligible for the survey, modifying survey questions, changing when and how the survey is delivered, and changing how satisfaction is calculated. Officials already concluded that the survey, in its current form, is not a useful management tool for determining what changes are needed in IDES and said that it is expensive to administer—costing approximately $4.3 million in total since the start of the IDES pilot. Navy officials told us they believed that the satisfaction surveys could be made more useful if they knew whether a servicemember’s satisfaction was actually influenced by his or her desired or actual outcome of the IDES process. Further, Army officials already determined that the DOD survey is of limited value, and are proceeding with plans to field their own survey in the hopes of obtaining more detailed information at the facility level. Because of fiscal constraints, DOD suspended the survey in December 2011, but officials told us that they hope to resume collecting data in fiscal year 2013. We identified two potential alternatives for assessing servicemember experiences. Surveying a sample of servicemembers: While a census gives each servicemember a chance to describe his or her experiences with IDES, DOD could collect the same data at a lower cost by surveying a probability sample of servicemembers. If appropriately designed and executed, a sample would accurately represent all groups of servicemembers and produce the necessary data for important subgroups, such as facilities or branches. Since the cost of administering a survey is strongly related to the number of people surveyed, probability sampling could also allow DOD to assess servicemember experiences while substantially reducing data collection costs.
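The cost argument for sampling rests on standard survey estimation: a modest simple random sample yields a satisfaction estimate with a quantifiable margin of error. The following minimal sketch illustrates this; the population size, true satisfaction rate, and sample size are invented for illustration and are not figures from the report.

```python
import math
import random

random.seed(1)

# Hypothetical population: 18,000 members, each satisfied (1) or not (0),
# with a true satisfaction rate of roughly two-thirds. These values are
# illustrative only.
population = [1 if random.random() < 0.67 else 0 for _ in range(18_000)]

# Simple random sample in place of a census.
n = 400
sample = random.sample(population, n)
p_hat = sum(sample) / n

# 95 percent margin of error for a proportion from a simple random sample.
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"estimated satisfaction rate: {p_hat:.2f} +/- {moe:.2f}")
```

Even at this sample size, the margin of error stays under about 5 percentage points, while the number of members surveyed falls by more than an order of magnitude relative to a census.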
Exit interviews: In-depth interviews with servicemembers, completed at disenrollment from IDES, could also yield more detailed and actionable information about the program. Although the current survey includes open-ended questions, it is primarily designed to collect standardized, quantitative measures of satisfaction with broad aspects of IDES, such as fairness and the performance of DOD board liaisons and VA case managers. As a result, the survey provides a limited amount of detailed feedback on particular facilities, staff members, and stages of the process that managers might use to improve the servicemember experience, decrease processing times, or reduce cost. In contrast, semi-structured exit interviews would allow servicemembers to provide this type of qualitative, detailed feedback. Interviewing servicemembers at the end of the process would also allow them to assess their overall experience with IDES, rather than assessing it at an earlier stage before having completed the entire process. Exit interviews could also reach servicemembers who exit IDES without completing the process, such as those who are returned to duty. Exit interviews, however, have the potential to be labor intensive and expensive. DOD and VA have undertaken a number of actions to address IDES challenges—many of which we identified in our past work. Some actions—such as increased oversight and staffing—represent important steps in the right direction, but progress is uneven in some areas. Increased monitoring and oversight: We identified the need for agency leadership to provide continuous oversight of IDES in 2008 and the need for system-wide monitoring mechanisms in 2010. Since then, agency leadership has established mechanisms to improve communication, monitoring, and accountability.
The secretaries of DOD and VA have met several times since February 2011 to discuss progress in improving IDES timeliness and have tasked their agencies to find ways to streamline the process so that the timeliness goals can be shortened. The secretaries also tasked their agencies to expand the use of expedited disability evaluations for severely combat-wounded servicemembers and to develop a system to electronically transfer case files between DOD and VA locations. Senior Army and Navy officials regularly hold conferences to assess performance and address performance issues, including at specific facilities. With respect to the Army, meetings are led by the Army’s vice chief of staff and VA’s chief of staff, and include reviews of performance where regional and local facility commanders provide feedback on best practices and challenges. For example, recent Army-VA conferences focused on delays in completion of preliminary ratings for Army PEBs by VA’s Seattle rating site, efforts by the Army to increase MEB staffing, development of Army-wide IDES standardization guidance, and Army-VA electronic records interchange. Periodic meetings are also held between senior Navy medical and VA officials to discuss performance issues at Navy military treatment facilities. VA holds its own biweekly conferences with local staff responsible for VA’s portion of the process. These conferences are supplemented by a biweekly IDES “dashboard” that tracks performance data for portions of the IDES for which VA is responsible. According to VA officials, in addition to identifying best practices, these conferences focus on sites with performance problems and identify potential corrective actions. For example, officials said a recent conference addressed delays at Fort Benning, Georgia, and discussed how they could be reduced. VA officials noted that examiner staff were reassigned to this site and worked on weekends to address the problems there.
In addition, senior VA health care officials hold periodic conferences with officials responsible for exams at IDES sites to monitor performance. Ensuring sufficient medical exam resources: In our December 2010 report, we noted that VA struggled to provide enough medical examiners (both VA employees and contractors) to meet demand and deliver exam summaries within its 45-day goal. For example, significant deficiencies in examiner staffing (particularly for mental health exams) at Fort Carson contributed to exams for active duty members taking an average of 140 days. To improve exam timeliness, VA hired more examiners and is devoting more resources at those sites where VA clinicians perform IDES exams. In addition, in July 2011, VA awarded a revised compensation and pension (including IDES) contract that provides more flexibility for VA to have contractors perform IDES exams at sites needing additional resources. As a result, VA can use contractors to conduct exams for regional offices beyond the 10 offices for which the contractor normally provides services. Also, VA contracted with 5 companies to provide short-term exam assistance at IDES sites needing it. Further, VA procedures allow reserve component servicemembers in remote locations to receive exams close to their homes. VA exam timeliness has improved, and the agency met its 45-day goal for active component members in every month from August 2011 through June 2012. VA officials attributed improved exam timeliness, in part, to additional exam resources provided to IDES sites. (See app. III for additional information on fiscal year 2012 timeliness.) Ensuring sufficient exam summaries: In our December 2010 report, we noted that some cases were delayed because VA medical exam summaries were not complete and clear enough for use in making rating and fitness decisions and needed to be sent back to examiners for additional work.
VA officials told us that they have been reinforcing the importance of training and communication between rating staff and medical examiners as ways to improve exam summary sufficiency. For example, VA identified types of information which, if missing from an exam summary, would cause it to be insufficient, and has been training examiners to include such information. Additionally, VA noted that VTA now has the ability to track cases with insufficient exams by allowing staff to annotate information on exam summaries. However, staff are not required to provide this information, and rules and procedures for its use have not been established. Ensuring sufficient MEB staffing: In our December 2010 report, we noted that some sites had insufficient MEB physicians, leading to delays in completing the MEB phase. At that time, most of the 27 pilot sites were not meeting the 35-day goal, with average times for active component cases as high as 109 days. Meanwhile, DOD did not have sufficient board liaison staff to handle IDES caseloads. The Army is in the midst of a major hiring initiative intended to more than double staffing for its MEBs over its October 2011 level, which will include additional board liaison and MEB physician positions. The Army reported having 610 full-time equivalent MEB staff positions in October 2011, and planned to hire up to 1,410; this would include 172 MEB physician and 513 board liaison positions. The Army also planned to hire an average of one contact representative per board liaison; these staff members assist the board liaisons with clerical functions, freeing more of the liaisons’ time for counseling servicemembers. As of June 2012, the Army had filled 1,219 (86 percent) of the planned 1,410 positions. Ensuring sufficient VA rating staff: In our December 2010 report, we noted that VA had insufficient staff at one of its rating sites to handle the demand for preliminary ratings, rating reconsiderations, and final VA benefit decisions.
VA officials said that the agency has more than tripled the staffing at its IDES rating sites, from 78 to 262 positions. Further, VA has moved staff resources to IDES rating sites from other VA regional offices to provide short-term help in working down rating backlogs. Recent monthly data show an increase in the number of preliminary VA ratings completed, and a slight improvement in processing times. However, as noted before, it is too early to tell the extent to which such trends will continue. (See app. III for additional information on fiscal year 2012 timeliness.) Improving completeness of reserve component members’ records: Service officials noted that incomplete medical records and administrative documentation, especially for reserve component members, often contribute to delays in the early IDES stages, including the VA exam stage. For example, a reserve unit may not have complete medical records for a member who received care from a private provider. When the servicemember enters the IDES, a board liaison is responsible for obtaining the private provider records before handing off the case to VA for exams. To address issues with reserve component servicemembers’ records, the Army established an interim office in Pinellas Park, Florida, in January 2011. For reserve component servicemembers who may require IDES referral, this office is tasked with obtaining records from the member’s reserve unit; reviewing them to identify missing information; and, if necessary, requiring the reserve unit to obtain additional records to complete the case file. Staff at this office also determine whether the member needs IDES referral. Army officials indicated that this office is expected to help reduce the backlog of Army reserve component cases in the IDES.
However, Army officials noted that they are providing training to reserve units to improve their ability to maintain complete records on their servicemembers and eventually, the Army may discontinue this office if no longer needed. Improving MEB documentation and decisions: In response to delays in completing the MEB stage, the Navy and Army have initiatives underway to help ensure the timely completion of narrative summaries and MEB decisions. For example, the Navy piloted electronic narrative summary preparation at Naval Hospital Camp Lejeune, North Carolina. In May 2012, after determining that the piloted process led to improved MEB completion timeliness, the Navy deployed electronic narrative summary preparation Navy-wide. In March 2011, the Army also deployed an abbreviated MEB narrative summary format, intended to provide better information for MEB and PEB decision making while helping reduce delays in the completion of these summaries by MEB clinicians. Incorporating feedback from its MEBs and PEBs, the Army expects the revised IDES template to reduce redundant information, make summaries simpler and easier to use, and standardize summary preparation across their sites. Resolving diagnostic differences: In our December 2010 report, we identified differences in diagnoses between DOD physicians and VA examiners, especially regarding mental health conditions, as a potential source of delay in IDES. We also noted inconsistencies among services in providing guidance and a lack of a tracking mechanism for determining the extent of diagnostic differences. In response to our recommendation, DOD commissioned a study on the subject. The resulting report confirmed the lack of data on the extent and nature of such differences, and noted that the Army has established guidance more comprehensive than the guidance DOD was developing. It also recommended that DOD or the other services develop similar guidance. 
A DOD official told us that consistent guidance across the services, similar to the Army’s, was included in DOD’s December 2011 IDES manual. Also, in response to our recommendation, VA took steps to modify the VTA database used to track IDES to collect information on diagnostic differences. The VTA upgrade was completed in June 2012 after several delays. The report also recommended that DOD and VA establish a committee to improve the accuracy of posttraumatic stress disorder ratings. DOD noted that training on diagnostic differences has been incorporated into its continuing medical education curriculum for military clinicians, but DOD considers the issue of posttraumatic stress disorder ratings largely resolved. Meanwhile, the Army’s new IDES narrative summary template includes a section where the MEB clinician identifies any inconsistencies in the case record, including any diagnostic differences with VA examiners. DOD and VA are working to remedy shortcomings in information systems that support the IDES process. These shortcomings include VTA’s lack of capability for local sites to track cases, and the potential for erroneous and missing data in VTA, affecting timeliness measurement. However, some efforts related to information systems are causing work inefficiencies, are still in progress, or otherwise are limited. Improving local IDES reporting capability: DOD and VA are implementing solutions to improve the ability of local military treatment facilities to track their IDES cases, but multiple initiatives may result in redundant work efforts. Officials told us that the VTA—which is the primary means of tracking the completion of IDES cases—has limited reporting capabilities and staff at local facilities are unable to use it for monitoring the cases for which they are responsible. DOD and VA developed VTA improvements that will allow DOD board liaisons and VA case managers— and their supervisors—to track the status of their cases. 
VA included these operational reporting improvements in its June 2012 VTA upgrade. In the meantime, staff at many IDES sites have been using their own local systems to track cases and alleviate limitations in VTA. Further, the military services have been moving ahead with their own solutions. For instance, the Army has deployed its own information system for MEBs and PEBs Army-wide. In addition, DOD has been piloting its own tracking system at 9 IDES sites. As a result, staff at IDES sites we visited reported having to enter the same data into multiple systems. For example, board liaisons at Army MEBs at Fort Meade and Joint Base Lewis-McChord reported entering data into VTA and the Army’s new system, while board liaisons at Andrews Air Force Base reported entering data into VTA and DOD’s pilot data system. Improving IDES data quality: DOD is taking steps to improve the quality of data in VTA. Our analysis of VTA data identified erroneous or missing dates in at least 4 percent of the cases reviewed. Officials told us that VTA lacks adequate controls to prevent erroneous data entry, and that incorrect dates may be entered, or dates may not be entered at all, which can result in inaccurate timeliness data. For example, Army officials noted that some cases shown in VTA as very old were actually closed, but were missing key dates. In September 2011, DOD began a focused effort with the services to correct erroneous and missing case data in VTA. Officials noted that the Air Force and Navy completed substantial efforts to correct the issues identified at that time, but Army efforts continue. DOD and Army officials noted that additional staff resources are being devoted to cleaning up Army VTA data. While improved local tracking and reporting capabilities will help facilities identify and correct erroneous data, keeping VTA data accurate will be an ongoing challenge due to a lack of data entry controls.
While DOD is currently assisting the services, DOD officials said they expect that eventually the services will be responsible for identifying and fixing data errors. DOD and VA are also pursuing options to allow them to save time by replacing the shipping of paper case files among facilities with electronic file transfers. Requirements for an electronic case file transfer solution have been completed and DOD and VA officials expect to begin piloting it in August 2012. As a short-term solution, the Army and VA began using an Army file transfer Web site to move IDES records between the Army’s PEBs and the Seattle VA rating site in March 2012. According to VA officials, this could save several days currently spent shipping paper files between these offices. VA officials noted that the same Web site is being used for transfers between the Navy PEB and Providence rating site. Meanwhile, the secretaries of Defense and Veterans Affairs tasked their staffs to develop standards for electronic IDES case files by July 2012. Based on concerns of the Secretaries of DOD and VA about IDES delays, the departments have undertaken additional initiatives to achieve time savings for servicemembers. For example, in response to the secretaries’ February 2011 directive to streamline the process, DOD and VA officials proposed a remodeled IDES process. In December 2011, senior agency leadership decided to postpone the pilot of a remodeled IDES process, and instead tasked the agencies to explore other ways to streamline the process. As a result, DOD, with VA’s assistance, began a business process review to better understand how IDES is operating and identify best practices for possible implementation. This review incorporates several efforts, including visits to 8 IDES sites to examine how the process was operating and identify best practices. The review also includes the following: Process simulation model: Using data from site visits and VTA, DOD is developing a simulation model of the IDES process.
According to a DOD official, this process model will allow the agencies to assess the impact of potential situations or changes on IDES processing times, such as surges in workloads or changes in staffing. Fusion diagram: DOD is developing this diagram to identify the various sources of IDES data—including VA claim forms and narrative summaries—and different information technology systems that play a role in supporting the IDES process. Officials said this diagram would allow them to better understand and identify overlaps and gaps in data systems. Ultimately, according to DOD officials, this business process review could lead to short- and long-term recommendations to improve IDES performance, potentially including changes to the different steps in the IDES process, performance goals, and staffing levels; and possibly the procurement of a new information system to support process improvements. However, a DOD official noted that these efforts are in their early stages, and thus there is no timetable yet for completing the review or providing recommendations to senior DOD and VA leadership. DOD officials indicated that they expect this to be a continuous IDES improvement process, including further site visits. Finally, DOD is also developing guidance to expand implementation of an expedited disability evaluation process for servicemembers with catastrophic, combat-related conditions by allowing it to be operated at more military treatment facilities. DOD created this expedited process in January 2009 for servicemembers who suffer catastrophic, combat-related disabilities. Under an agreement with VA, the services can rate such members as 100 percent disabled without the need to use VA’s rating schedule. However, according to DOD officials, the services report that no eligible servicemembers are using this process. Instead, servicemembers are having their cases expedited through the IDES informally.
The revisions to DOD’s policy would allow the expedited process to be used at additional military treatment facilities beyond the original 4 facilities. The revised policy is part of a rewrite of DOD’s key guidance documents, and was undergoing review at the time of our review. By merging two duplicative disability evaluation systems, IDES shows promise for expediting the delivery of DOD and VA benefits to injured servicemembers and is considered by many to be an improvement over the legacy process it replaced. However, nearly 5 years after its inception as a pilot, delays continue to affect the system and the contribution of various, complex factors to timeliness is not fully understood. Recent efforts by DOD and VA to better understand how different IDES processes contribute to timeliness are promising and may provide the departments with an opportunity to reassess resource levels and timeframes, and to make adjustments if needed. This information will also help to ensure that DOD and VA are making the best use of limited resources to improve IDES performance. However, it is not clear when these efforts will be complete or if any recommended actions will be implemented. DOD has also begun rethinking its approach to determining servicemember satisfaction with IDES. Our analysis of customer satisfaction data suggests that there are opportunities for improving the representativeness of the survey information collected and reconsidering the cost-effectiveness of the current lengthy surveys. Finally, providing local facilities the capability to track and generate reports on the status of their cases is long overdue and may empower local staff to better address challenges. However, tracking reports are only as good as the data that are entered into VTA, and DOD and VA can ensure the quality of these data through continuous monitoring.
Meanwhile, the DOD-led business process review should identify and ultimately eliminate any redundant or inefficient information systems for tracking cases as well as for other IDES purposes. 1) To ensure that servicemember cases are processed and are awarded benefits in a timely manner, we recommend that the Secretaries of Defense and Veterans Affairs work together to develop timeframes for completing the IDES business process review and implementing any resulting recommendations. 2) To improve DOD’s ability to measure servicemembers’ satisfaction with the IDES process, we recommend that the Secretary of Defense develop alternative approaches for collecting more meaningful and representative information in a cost effective manner. 3) To ensure that IDES management decisions continue to be based upon reliable and accurate data, we recommend that the Secretaries of Defense and Veterans Affairs work together to develop a strategy to continuously monitor and remedy issues with VTA timeliness information. This could include issuing guidance to facilities or developing best practices on preventing and correcting data entry errors; and developing reporting capabilities in VTA to alert facilities to potential issues with their data. We provided a draft of this report to DOD and VA for review and comment. In their written comments, which are reproduced in appendixes IV and V, DOD and VA both concurred with our recommendations. VA also provided technical comments, which we incorporated as appropriate. While concurring with our recommendations, DOD also commented that our discussion of IDES surveys contained inaccuracies, but did not specify the inaccurate information in our draft report. In a subsequent communication, DOD officials noted that our draft inaccurately described DOD’s decision to not survey veterans. We corrected this information accordingly. 
Further, while VA concurred with our recommendation that it work with DOD to develop timeframes for completing the IDES business process review and implementing any resulting recommendations, VA stated that DOD is leading the business process review, and therefore should develop the timeframes for completing the review. We have revised this report to clarify that DOD is leading the business process review, but we did not alter the recommendation because we believe that it is important for VA to work closely with DOD, including in developing review timeframes. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Veterans Affairs, and other interested parties. The report is also available at no charge on the GAO Web site at www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix VI. In conducting our review of the Integrated Disability Evaluation System (IDES), our objectives were to examine (1) the extent to which the Departments of Defense (DOD) and Veterans Affairs (VA) are meeting IDES performance goals, and (2) steps DOD and VA are taking to improve IDES performance. We conducted this performance audit from May 2011 to August 2012, in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To determine the extent to which IDES is meeting established timeliness goals, we analyzed data collected through VA’s Veterans Tracking Application (VTA) database. While VA manages VTA, both DOD and VA staff enter data into VTA, and the evaluation of IDES data is primarily conducted by staff at DOD’s Office of Warrior Care Policy (WCP). WCP provided us with a dataset that was current as of January 1, 2012, and contained data spanning back to the inception of IDES in late 2007. This data export included data on a total of 39,260 cases. Of these cases, 34,185 were active duty servicemembers and 5,068 were Reserve/Guard servicemembers. This VTA data set contained demographic data for each individual IDES case as well as a record of dates for when servicemembers reached various milestones in IDES. Overall and interim IDES timeliness calculations are based on computing the number of days elapsed between appropriate milestone dates. For example, overall timeliness for servicemembers who receive benefits is calculated as the number of days between the individual being referred into the IDES and the date on which his or her VA benefits letter is issued. We met with staff at WCP to ensure we used appropriate variables when calculating timeliness. We also met with officials at VA to discuss the calculations used to determine the timeliness of cases. We took a number of steps to assess the reliability of VTA data and ultimately found the data to be sufficiently reliable for the purposes of our audit. Past GAO work relied on VTA data, and therefore we took a number of steps to follow up on past assessments of VTA. We interviewed DOD and VA and determined that internal controls on VTA data had not changed substantially since our past review.
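The milestone-date arithmetic described above amounts to simple date subtraction. The sketch below illustrates it; the variable names are hypothetical stand-ins for the actual VTA fields, and the goal value is a parameter rather than an official figure.

```python
from datetime import date

def elapsed_days(start, end):
    """Number of days between two IDES milestone dates."""
    return (end - start).days

def meets_goal(days, goal_days):
    """A case is timely if it finishes within the goal."""
    return days <= goal_days

# Hypothetical case: referred into IDES in January 2011, VA benefits
# letter issued in March 2012 (field names are illustrative, not VTA's).
referral = date(2011, 1, 10)
benefits_letter = date(2012, 3, 5)

overall = elapsed_days(referral, benefits_letter)  # 420 days
```

The same subtraction applied between interim milestones (for example, MEB referral to MEB completion) yields the phase and stage durations discussed throughout this appendix.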
We conducted electronic testing of the VTA data and generally found low rates of missing data or erroneous dates pertinent to our analysis—approximately 4 percent of cases. For IDES cases in which we found missing dates or dates out of sequence, we excluded those cases from all of our analyses. While there were some instances in which the erroneous dates may be justified, we excluded the entire case from our analysis if any such dates appeared at any point in the VTA database. Such data included cases in which (1) there was no MEB referral date signifying the start of IDES process, and (2) the ending date preceded the beginning date of the IDES phase (resulting in timeliness calculations appearing as a negative amount of time). We also conducted a limited trace-to-file process to determine whether date fields in VTA were an accurate reflection of the information in IDES case files. Specifically, we compared VTA dates in 15 IDES cases completed in fiscal year 2011 against the dates in the corresponding paper files. In comparing dates, we allowed for a discrepancy of 5 days in dates to allow for the possibility that dates may have been entered into the database after an event took place. Ninety-three percent of the dates we traced back to the original file documents were found to be accurate, that is falling within our 5 day allowance. For the cases meeting our criteria for reliability, we analyzed timeliness data for those cases that had completed the entire IDES process or had completed each of the four IDES phases. We specifically: Identified the total number of cases enrolled each fiscal year from FY 2008 through 2011, by active as well as National Guard and reserve servicemembers, and by military branch of service. Identified the number of cases that completed the entire IDES process for each fiscal year from fiscal year 2008 through fiscal year 2011. 
We analyzed completed cases in two different ways: (1) those who completed the process and received VA benefits and (2) those who completed the IDES with any outcome (such as permanent retirement, Temporary Disability Retirement List, return to duty, etc.). In order to be able to make comparisons across cases with different outcomes for a given point in time, we defined fiscal year by using the VTA variable “final disposition date”. We did this because most completed cases—regardless of outcome—have a final disposition date in VTA. In contrast to our approach, VA uses the “VA benefit date” variable to determine fiscal year of completion for cases resulting in benefits. As such, their number of cases and timeliness calculations by fiscal year differed from ours, although overall trends are similar. Identified the number of cases that completed each phase of IDES and the interim stages within each phase, again by fiscal year (fiscal years 2008 through 2011). Computed timeliness statistics for the completion of the IDES process, phases, and stages against the performance goals set by DOD and VA, such as average days and percent meeting goals. Computed number and percent of cases where a servicemember appealed a decision made during the IDES process, by fiscal year. For the purposes of this report, GAO opted not to include reserve component time spent in the VA benefit phase in our calculations for overall time because the 30 days allotted for this phase is not included in the 305-day overall goal for the reserve component. GAO also performed analyses similar to those above, except that we grouped cases according to the year in which they were enrolled in IDES. (See app. II for more detail on these analyses.) Additionally, we analyzed timeliness for cases that had not yet completed the MEB stage as of the date we received the VTA data.
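The screening and fiscal-year summaries described above can be sketched as follows. The record layout and field names are invented for illustration, and the goal value is passed in as a parameter rather than asserted as an official figure.

```python
from datetime import date
from statistics import mean

def fiscal_year(d):
    """U.S. federal fiscal year: FY N runs Oct 1 of N-1 through Sep 30 of N."""
    return d.year + 1 if d.month >= 10 else d.year

def is_reliable(case):
    """Drop a case if its referral date is missing or any phase ends before it begins."""
    if case.get("referral") is None:
        return False
    return all(s is not None and e is not None and e >= s
               for s, e in case.get("phases", []))

def summarize(cases, goal_days):
    """Average days and percent meeting the goal, grouped by FY of final disposition."""
    groups = {}
    for c in cases:
        if not is_reliable(c):
            continue  # excluded from all analyses, as described above
        days = (c["final_disposition"] - c["referral"]).days
        groups.setdefault(fiscal_year(c["final_disposition"]), []).append(days)
    return {fy: {"avg_days": mean(d),
                 "pct_meeting_goal": 100.0 * sum(x <= goal_days for x in d) / len(d)}
            for fy, d in groups.items()}
```

Grouping by the fiscal year of the enrollment date instead of the final disposition date, as in the alternative analyses mentioned above, would require only changing the key used in `setdefault`.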
To determine the extent to which IDES is meeting its customer satisfaction goals, we analyzed data collected from IDES customer satisfaction surveys conducted at the end of three phases: MEB, PEB and Transition. These surveys are administered by telephone by contractors hired by DOD. The dataset we received contained survey responses for individual servicemembers from the beginning of the IDES pilot to December 2011, at which time administration of the survey was suspended. Additionally, we matched individual survey responses with information from VTA to gain additional understanding into how customer satisfaction varied according to different factors such as timeliness and case outcome. We matched survey and VTA data using the unique case identifier attached to each IDES case, maintaining the anonymity of the servicemembers. See appendix II for the results of additional analyses we conducted using survey data and survey data matched with VTA data. In the course of our review we concluded that the survey data were sufficiently reliable for our purposes. We interviewed relevant officials at DOD and their contractors about eligibility requirements and the administration of the surveys. Further, we met with DOD and their contractors on multiple occasions to discuss the calculations used to determine response rates for the survey and servicemembers’ level of satisfaction. See appendix II for more details on GAO’s review of response rates. To identify challenges in implementing IDES as well as steps taken to improve performance, we visited six military treatment facilities. During the site visits, we interviewed officials involved in implementing IDES from both DOD and VA, including military facility commanders and administrators, DOD board liaisons, military physicians involved in MEB determinations, DOD legal staff, VA case workers, VA or contract examiners, and administrators at VA medical clinics and regional offices. 
Additionally, we interviewed servicemembers who were currently enrolled in the IDES process. We selected the six facilities to obtain perspectives from sites in different military services and geographic areas, and with differing abilities to meet timeliness goals for different phases of the process (see table 7). In addition, we visited the Air Force’s Formal Physical Evaluation Board at Randolph Air Force Base, Texas. During this visit we observed a hearing and met with board members to obtain a better understanding of the process. This appendix provides additional information on the timeliness of the IDES process and servicemember satisfaction with it. First, we use timeliness data to examine whether changes over time in processing times and the percentage of cases meeting timeliness goals look any different when cases are grouped according to the fiscal year in which the cases were first enrolled rather than the fiscal year in which the cases were completed. Second, we use survey data to examine different measurements of servicemember satisfaction with IDES, how satisfaction varied according to various servicemember characteristics, response and coverage rates for the servicemembers surveyed, and how the survey respondents differed from nonrespondents. With respect to timeliness, we find generally similar trends for cases grouped by fiscal year of enrollment versus fiscal year of completion, with some key differences. Organizing cases by completion date results in shorter average processing times in 2008, since only those cases that are processed quickly could be completed in the first year of IDES. As such, organizing cases by enrollment date provides a better estimate of the processing times for the early IDES cases. However, this approach results in shorter processing times in 2011, the most recent full year of the program, since only cases that finish quickly can be analyzed.
With respect to satisfaction, we find that the particular index used to summarize servicemembers’ responses can affect the proportion reported as being “satisfied” or “dissatisfied” with IDES overall. DOD’s index suggests that 67 percent of servicemembers have been satisfied since the IDES program began, but a reasonable alternative measure we developed suggests that only 24 percent of servicemembers have been satisfied. Using this measure, satisfaction varies only slightly across many important groups of servicemembers, such as by disenrollment outcome, suggesting that available program data cannot precisely explain satisfaction outcomes. Also, the servicemembers surveyed may not represent the servicemembers who completed the different phases of IDES well enough to generalize to them, given the low response rates to the MEB survey and the fact that selection for the later (PEB and Transition) surveys was conditional on completing the MEB survey. Average IDES processing times for completed cases resulting in benefits generally worsened since 2008, especially for active duty cases, regardless of whether cases are grouped by the fiscal year in which they were completed (fig. 10) or by the fiscal year in which they were enrolled (fig. 11). The notable exception is when fiscal year 2011 is the year of enrollment. However, caution must be used when examining cases enrolled in 2011 because over 15,600 servicemembers of the 18,651 (or at least 84 percent) who entered IDES in fiscal year 2011 did not have an outcome in 2011 and remained enrolled in IDES as of January 1, 2012, potentially changing the distribution of processing times as they proceed through IDES. We also examine average IDES processing times according to year of completion (see fig. 12) and year of enrollment (see fig. 13) for all completed cases regardless of outcome.
As with cases that resulted in benefits, for cases resulting in any outcome we find that average processing times increased since 2008—again with the exception of fiscal year 2011 for reasons discussed earlier—although average processing times are somewhat shorter than when only servicemembers receiving benefits are included (fig. 11). Figures 14 and 15 show that regardless of whether cases are organized by year of completion or enrollment, the percent of completed cases resulting in benefits that were not timely increased between fiscal year 2008 and 2010 for both active duty servicemembers and members in the Reserves or National Guard. As with the average processing times, caution must be used when examining cases enrolled in fiscal year 2011 (fig. 15), since only those cases that are processed quickly are observed in the last year. Similarly, caution also must be used when examining cases in 2008 (fig. 14), since the only cases that are included in the first year are those that completed IDES quickly. Figures 16 and 17 show how average processing times for each of the four phases of IDES have changed over the four fiscal years when cases are grouped by the fiscal year in which they completed a given phase and when cases are grouped by the fiscal year in which they were enrolled or started a given phase. Figure 16 shows that when cases are grouped according to the fiscal year in which the different phases were completed, processing times increased for all phases except the Transition phase. Figure 17 shows a roughly similar pattern of increases in processing times in all but the Transition phase, though processing times in 2011 are skewed for the reason mentioned above.
Figures 18 and 19 show how the percentages meeting the timeliness goals for each of the four phases of IDES have changed over the four fiscal years when cases are grouped by the fiscal year in which they completed a given phase and when cases are grouped by the fiscal year in which they were enrolled or started a given phase. Figure 18 shows that the percent of cases meeting timeliness goals decreased over the four years for the MEB and PEB phases, although a high percent of cases met PEB goals. However, the Transition and Benefits phases fluctuated up and down and both were favorable across some years. Figure 19 also shows decreases in percentages of cases meeting timeliness goals at the MEB and PEB phases when cases are grouped by fiscal year of starting a given phase. The fluctuations in the timeliness of the Transition and Benefits phases were more prevalent when cases were grouped in this manner. Low response and coverage rates for servicemember satisfaction surveys administered after each phase of IDES raise concerns about how well the satisfaction survey results represented the larger population of servicemembers who completed one or more phases. DOD surveys servicemembers after they complete the MEB, PEB, and Transition phases of IDES. The department attempts to survey all servicemembers who complete each phase, but only if they completed the prior surveys. For example, the MEB survey must be completed before a servicemember is eligible to complete the PEB survey. Using the data available to us, and as table 8 below shows, we found that 9,604 of the 25,212 servicemembers who completed the MEB phase were surveyed, for a 38 percent response and coverage rate. Of the 18,296 servicemembers who completed the PEB phase, only 8,968 of them completed the prior MEB survey and were eligible for the PEB survey, and of these only 4,795 were surveyed.
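The two rates discussed in this appendix differ only in the denominator: response rates divide by the survey-eligible pool, while coverage rates divide by everyone who completed the phase. A quick sketch using the counts reported here:

```python
def response_rate(surveyed, eligible):
    """Share of survey-eligible servicemembers who were surveyed."""
    return 100.0 * surveyed / eligible

def coverage_rate(surveyed, completed_phase):
    """Share of everyone completing the phase who was surveyed,
    whether or not the prior survey made them eligible."""
    return 100.0 * surveyed / completed_phase

# MEB survey: eligibility equals completion, so the two rates coincide.
meb_rate = response_rate(9604, 25212)        # about 38 percent

# PEB survey counts reported in this appendix.
peb_response = response_rate(4795, 8968)     # about 53.5 percent (reported as roughly 54)
peb_coverage = coverage_rate(4795, 18296)    # about 26 percent
```

Because eligibility for each later survey is conditional on completing the prior one, the gap between the two rates widens at each successive phase.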
Using DOD’s eligibility criteria, the response rate for the PEB survey was roughly 54 percent (4,795 of 8,968). However, the coverage rate for all servicemembers who completed the PEB phase (regardless of whether they completed the prior survey) was only 26 percent (4,795 of 18,296). Similarly, the response rate for the Transition survey was 72 percent while the coverage rate was only 23 percent (see table 8). As table 9 below shows, there were some sizable differences between respondents and nonrespondents, especially for the PEB and Transition surveys. For example, respondents to the Transition survey spent more time than nonrespondents in the Transition phase, were less likely to be separated with benefits, and were more likely to be placed on the Permanent Disability Retired List. These differences, combined with low response and coverage rates, raise the possibility of biased responses. The particular measure used to assess servicemember satisfaction can affect the proportion reported as “satisfied” with the IDES program. Depending on the measure used, satisfaction is about 2.8 times lower than what DOD has reported, and many servicemembers classified as “satisfied” express moderate dissatisfaction with some aspects of the process. DOD has reported average servicemember satisfaction with IDES overall and with three phases of the process, i.e., MEB, PEB, and Transition phases. In so doing, DOD has developed indices of satisfaction on several broad dimensions, such as satisfaction with the overall experience and fairness, which combine responses to selected survey questions. Although the number of questions used in each index varies depending on the number of phases completed, each index classifies servicemembers as “satisfied” or “dissatisfied” using the average of their responses across all questions in the index.
Each question’s scale ranges from 1 to 5, with 1 denoting “very dissatisfied” (or a similar negative response), 5 denoting “very satisfied,” and 3 denoting “neither satisfied nor dissatisfied.” DOD reports that a servicemember is “satisfied” if his or her average response across all items in the scale exceeds 3. Table 10 summarizes the responses to each question that DOD uses in its overall satisfaction index at each phase (as of August 2011). DOD’s indices are one reasonable method of summarizing servicemember opinions. In quarterly performance reports, DOD notes that it has used factor analysis, a class of latent variable statistical models, to assess the reliability of its scales. While we did not review DOD’s models, we independently found that DOD’s overall index of satisfaction with IDES was highly reliable. (Specifically, using Cronbach’s alpha, the index was highly correlated with a single latent dimension at α = 0.92.) This supports DOD’s choice to measure the single concept of “satisfaction” by averaging the ordinal servicemember responses. Nevertheless, the average survey response can obscure variation in the responses that make up the index. For example, suppose that a servicemember said she was “very satisfied” (response of 5) on two of the four questions in the index, “dissatisfied” on one (response of 2), and “very dissatisfied” on the last one (response of 1). With an average response over 3, the DOD measure would classify her as “satisfied,” despite the fact that she was “dissatisfied” or “very dissatisfied” with two of the four aspects of IDES that DOD considers important. The grouping rule treats this servicemember as equally satisfied with IDES as someone who gives a “satisfied” response on all four aspects in the index.
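DOD's classification rule and the reliability check described above can be sketched in a few lines of Python. This is an illustrative re-implementation, not DOD's actual scoring code; the function names are ours, and the worked example uses the hypothetical response pattern from the text.

```python
import numpy as np

def dod_satisfied(responses):
    """DOD-style rule: "satisfied" if the mean of the 1-5 item
    responses in the index exceeds the neutral midpoint of 3."""
    return np.mean(responses) > 3

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# The worked example from the text: two "very satisfied" (5), one
# "dissatisfied" (2), one "very dissatisfied" (1) -> mean 3.25 > 3,
# so the DOD rule classifies this servicemember as "satisfied."
print(dod_satisfied([5, 5, 2, 1]))  # True
```

An alpha near 0.92, as DOD reported, indicates the index items move together closely enough to justify averaging them into a single scale.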
To assess the extent to which DOD’s index might mask dissatisfaction, we calculated the proportion of questions in the scale on which servicemembers whom DOD classified as “satisfied” gave neutral or negative responses (1, 2, or 3). We found that half of these servicemembers gave neutral or negative answers to at least 25 percent of the items in the index, and a quarter gave such answers to at least 41 percent of the items. For these servicemembers, the DOD index may suggest more satisfaction than the underlying survey questions would support. We further assessed the sensitivity of DOD’s index by comparing it against a different (i.e., GAO’s) measure of satisfaction: whether a servicemember is “somewhat” or “very satisfied” (or gives a similarly positive response) on all items in DOD’s scale of overall IDES satisfaction. Our measure is more conservative than DOD’s, because ours only includes positive responses and uses a broader cutpoint (two response categories) to distinguish between “satisfied” and “not satisfied” servicemembers. (In contrast, DOD calculates average satisfaction on an ordinal scale of 1 to 5, and then uses a cutpoint at 3.) Our measure is not inherently more valid, however, and has its own weaknesses. In particular, we classify a servicemember as “not satisfied” if she gives a neutral or negative response to just one of the four items in DOD’s scale. When we analyzed overall satisfaction using both measures, we found that overall, servicemembers are 2.8 times less satisfied on our measure than on DOD’s (i.e., 23.8 versus 67 percent). Further, only about 20 to 30 percent of servicemembers are “satisfied” with each aspect of the IDES process that DOD considers important across most of the subgroups we analyzed, while DOD classifies about 60 to 70 percent of such servicemembers as “satisfied” on average. In the next section, we present further information on variation in satisfaction across servicemember groups. 
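The contrast between the two classification rules can be made concrete with a short sketch. The function names below are ours, and the example reuses the hypothetical response pattern discussed earlier; this is illustrative code, not the analysis programs used for the report.

```python
import numpy as np

def dod_satisfied(responses):
    # DOD rule: the average of the 1-5 item responses exceeds 3
    return np.mean(responses) > 3

def gao_satisfied(responses):
    # GAO's more conservative rule: every item must be a positive
    # response, i.e., "somewhat satisfied" (4) or "very satisfied" (5)
    return all(r >= 4 for r in responses)

def share_neutral_or_negative(responses):
    # Proportion of items answered 1, 2, or 3 -- the statistic used
    # above to gauge how much dissatisfaction DOD's average may mask
    return float(np.mean([r <= 3 for r in responses]))

resp = [5, 5, 2, 1]                     # hypothetical response pattern
print(dod_satisfied(resp))              # True  (mean 3.25 > 3)
print(gao_satisfied(resp))              # False (two items are negative)
print(share_neutral_or_negative(resp))  # 0.5
```

A response pattern like this one is counted as "satisfied" under DOD's rule but "not satisfied" under GAO's, which is how the two measures can diverge by a factor of 2.8 in the aggregate.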
Although the servicemember survey provides numerous measures of satisfaction, it is also important to explain variation in satisfaction outcomes—i.e., why some servicemembers are more satisfied than others. Explaining variation can connect dissatisfaction with poor program performance and help identify specific reforms to improve the experiences of servicemembers who typically have been less satisfied. However, the available program data cannot precisely explain outcomes when used in this type of explanatory analysis. Using the available data, we could predict satisfaction only 1.9 percentage points better than chance after controlling for multiple factors (65.5 percent vs. 63.7 percent of satisfied responses predicted correctly). In order to further explain variation in satisfaction, we matched the survey responses to the data that DOD and VA maintain on the processing of each servicemember’s case, known as the VTA data. This database primarily measures the time it took servicemembers to complete each phase of the IDES process. A small number of other program and demographic variables are also available, such as service branch, component, and the number of conditions claimed and referred. Using the matched survey and VTA data, we estimated the association between satisfaction and observable factors that could potentially explain variation in servicemembers’ experiences. Table 11 below (columns 2-4) presents these associations for both DOD’s and GAO’s overall measures of satisfaction. The “raw data” estimates are simply the proportion of servicemembers in a particular group who were satisfied according to either measure. In the fourth column (“model estimates”), we estimate this proportion holding constant all of the other factors listed, using a statistical model. Specifically, the estimates are in-sample mean predicted probabilities of giving a satisfied response on the GAO satisfaction index from a logistic model of satisfaction.
The covariates are given by indicators of whether the servicemember belonged to each group in column 4. The maximum likelihood estimators allowed the probability of satisfaction, given the covariates, to be dependent across observations within the 26 cross-classified groups of PEB location and MEB medical treatment facility. This adjusted for the possibility that servicemembers were similarly satisfied if they were processed in the same locations, given similar values on the observed covariates. Regardless of which measure is used (DOD’s or GAO’s), satisfaction varied only modestly across many important groups of servicemembers. Our model estimates that the GAO measure of satisfaction varied by no more than approximately five percentage points across branch, component, disenrollment outcome, sex, MEB exam provider, enlisted and officer personnel classes, and the number of claimed and referred conditions, although differences across MEB treatment facilities and PEB locations were larger. This can be seen as a positive outcome if it implies that DOD and VA administer the program consistently across servicemembers and locations. However, the lack of variation also could suggest that the survey items do not measure opinions in enough detail to discriminate among servicemembers’ experiences. Also shown in table 11, satisfaction had a stronger association with case processing time (time spent in IDES) than some of the other factors we examined. Servicemembers whose case processing times were among the quickest 25 percent were about 2.3 times as likely to be satisfied (on the GAO scale) as those whose times were among the 25 percent of cases with the longest overall timeframes (i.e., 41 versus 18 percent). Nevertheless, only 41 percent of those servicemembers whose cases were processed most quickly were satisfied (holding constant the other factors).
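The estimation strategy described above (a logistic model of a binary satisfaction indicator on group indicators, summarized as in-sample mean predicted probabilities by group) can be sketched with synthetic data. The sketch fits the logit by Newton-Raphson maximum likelihood; it omits the within-location dependence adjustment used in the actual analysis, and all data and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-in for the matched survey/VTA data (illustrative only):
# the quartile of time spent in IDES and a binary satisfaction indicator,
# with satisfaction more likely for faster quartiles, as in table 11
quartile = rng.integers(0, 4, n)
satisfied = (rng.random(n) < 0.45 - 0.08 * quartile).astype(float)

# Design matrix: intercept plus indicator (dummy) columns for quartiles 2-4
X = np.column_stack(
    [np.ones(n)] + [(quartile == q).astype(float) for q in (1, 2, 3)]
)

# Fit the logit by Newton-Raphson maximum likelihood
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (satisfied - p)                # score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])   # observed information
    beta += np.linalg.solve(hess, grad)

# In-sample mean predicted probability of satisfaction by quartile,
# the kind of "model estimate" reported in column 4 of table 11
p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
for q in range(4):
    print(f"time quartile {q + 1}: {p_hat[quartile == q].mean():.2f}")
```

With only indicator covariates, the fitted group probabilities reproduce the raw group proportions; the model's value comes from holding the other listed factors constant when several covariates enter at once.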
The weak association suggests that servicemembers’ opinions about IDES may be only loosely related to the amount of time they spent in IDES, as discussed in the next section below. Although average case processing time has generally increased since 2008, servicemember satisfaction shows evidence of improvement over the same period when examined by fiscal year. Specifically, our measure of satisfaction from the model increased by 15 percentage points since 2008, roughly doubling from 13 to 28 percent. Because the model estimates control for various other factors, these results suggest that servicemember views of the IDES process have improved over time, rather than that IDES has simply processed different types of cases. Satisfaction does not vary by a large amount across many MEB treatment facilities, but there are exceptions. Our model estimates that about 18 to 26 percent of servicemembers were satisfied at most facilities. However, there were pockets of greater satisfaction. Specifically, servicemembers had more positive experiences at Forts Belvoir, Bragg, Campbell, Drum, Hood, and Polk, with satisfaction estimated to have ranged from 28 to 45 percent. Fort Meade had the lowest satisfaction at 15 percent. These estimates hold constant time spent in IDES and other factors in column 4 and, thus, partially account for the types of cases each facility processes. DOD and VA measure IDES timeliness directly in VTA and as part of the overall servicemember satisfaction scale. These overlapping measures let us compare servicemembers’ opinions to their actual experiences in the program. To do this, we calculated processing times at each phase of IDES for servicemembers who expressed varying degrees of satisfaction with the timeliness of their case processing at that phase. In addition, we analyzed whether servicemembers who were satisfied with the overall IDES process were more or less likely to meet timeliness goals.
Table 12 provides these statistics. As shown in table 12, satisfaction generally stayed the same or decreased as processing times increased. The median days spent in the MEB and PEB phases were 35 and 38 percent lower, respectively, among those servicemembers who said that MEB and PEB timeliness was “very good” as compared to those who said it was “very poor.” The former group was 170 percent more likely to have met the MEB timeliness goal and 30 percent more likely to have met the PEB timeliness goal. Similarly, the case for a median servicemember—whom we classified as “satisfied” with the overall IDES process—was completed 15 percent more quickly and was 49 percent more likely to have met the timeliness goal than the median servicemember who was “dissatisfied.” The model estimates in table 11 confirm that the GAO measure of satisfaction and timeliness (time spent in IDES) are negatively related even when holding constant several other variables. Perceived and actual timeliness had little association at the Transition phase. Across all levels of satisfaction with timeliness, the median processing time varied by no more than 4 days, and the proportion meeting the timeliness goal varied by no more than 4 percentage points. The use of personal leave is one plausible explanation for the unresponsiveness of servicemember satisfaction to actual processing times in the Transition phase. A servicemember might not have been dissatisfied with delays if taking leave was the reason for them, rather than the IDES process itself. Despite the associations between actual and perceived timeliness at the MEB and PEB phases, many satisfied and dissatisfied servicemembers spent similar amounts of time in the program. For example, 68 percent of those who said that PEB timeliness was “very poor” completed the phase on time, and 55 percent of those who said that MEB timeliness was “very good” did not complete on time.
Among servicemembers who said that MEB timeliness was “very good,” the middle 80 percent of processing times ranged from 62 days to 223 days. The same range for servicemembers who said MEB timeliness was “very poor” was 88 to 323 days. As table 12 shows, a similar pattern holds for the PEB phase. Although servicemembers tend to be more satisfied in MEB and PEB when their cases take less time, many of them are highly dissatisfied even when their cases take an unusually short amount of time (and vice versa). In the Transition phase, however, 40 percent of servicemembers who said that timeliness was “very good” were processed in 91 to 657 days—a wider range than at the other phases. The large range and the weak relationship with satisfaction may reflect the use of servicemember leave. The fact that many servicemembers are similarly satisfied with timeliness, even though they can have widely different processing times, has broader implications for measuring the performance of IDES. DOD’s timeliness goals may not be meaningful to servicemembers or necessarily reflect high-quality service. Alternatively, servicemembers may not use reasonable standards to assess the time required to process their cases, or they may not accurately perceive the time they have spent in the program. In these scenarios, the value of servicemember satisfaction as a performance measure becomes less certain. The loose relationship between perceived and actual timeliness may simply reflect a large amount of unobserved heterogeneity across servicemembers. For example, a servicemember whose case has been in IDES for an extremely long time might still be highly satisfied with timeliness if the case was complex or personal leave was taken during the process. Neither the survey nor the VTA data measure these or other such characteristics that might affect the program’s key performance measures.
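The percentile comparisons above can be reproduced mechanically: the "middle 80 percent" of processing times is the span from the 10th to the 90th percentile within each satisfaction group. The sketch below uses randomly generated stand-in data, not the actual VTA records.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical MEB processing times (days) for two satisfaction groups;
# lognormal draws roughly mimic the right-skewed ranges cited in the text
times = {
    "very good": rng.lognormal(mean=4.8, sigma=0.4, size=500),
    "very poor": rng.lognormal(mean=5.1, sigma=0.4, size=500),
}

for rating, days in times.items():
    p10, median, p90 = np.percentile(days, [10, 50, 90])
    print(f"{rating:>9}: median {median:.0f} days, "
          f"middle 80% = {p10:.0f} to {p90:.0f} days")
```

When the two groups' middle-80-percent ranges overlap substantially, as in table 12, satisfaction only partially tracks actual processing time.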
The lack of variation in satisfaction across servicemember groups and according to case timeliness might be seen as a positive outcome, and may suggest that DOD and VA administer the program consistently across servicemembers and locations. However, the lack of variation also could suggest shortcomings in the design and administration of the survey, or data limitations, that, alone or together, may reduce the usefulness of survey data for program evaluation. For example:

Survey questions: The survey questions may not be sufficiently detailed to measure important differences among servicemembers’ experiences. For example, the survey includes 12 questions (4 per survey) that measure broad opinions about IDES, and DOD subsequently averages these responses together. This approach may limit the survey’s capacity to describe IDES experiences in sufficient detail.

Precision of DOD indices: DOD reports measures of overall satisfaction with IDES for each phase, using the questions in table 10. However, these measures include one question that asks respondents to “evaluate their overall experience since entering the IDES process,” which could be influenced by experiences in prior phases. Consequently, the satisfaction measures reported for each phase could represent a combination of servicemembers’ experiences in that phase and prior phases.

Completing two surveys at once: DOD officials told us that a servicemember may be surveyed for the PEB and Transition phases in one session. In these instances, a large amount of time may have passed since the servicemember completed the PEB phase, and it may be more difficult for the servicemember to isolate his or her satisfaction with a particular phase.

Survey design: The satisfaction survey is primarily designed to measure performance, not explain it. The survey includes many highly correlated questions measuring satisfaction with the overall process or broad components of it, such as DOD board liaisons, VA case managers, or timeliness.
While multiple questions can improve the statistical reliability and validity of DOD’s performance measures, they require costly survey administration time that could be used for other purposes, such as to measure a larger number of variables that could explain servicemember satisfaction or case processing times.

VTA data limitations: The VTA administrative data that we matched to survey data primarily measure processing times and basic servicemember demographics, such as service branch, component, and treatment facility. The data support detailed reporting of performance measures, but they do not measure similarly detailed information on the nature of each case that might allow DOD and VA to understand the reasons for lengthy case processing times or to identify cases that might become delayed and ensure that they remain on schedule. For example, the database does not measure the type or severity of referred medical conditions in detail, the nature of delays experienced early in the process, or the use of servicemember leave. In addition, little information is available on staffing at or caseloads for MEB and PEB locations, DOD board liaisons, or VA case managers, which might help to explain or predict performance.

Low response and coverage rates: The response and coverage rates of the satisfaction survey further limit the degree to which DOD can generalize the data obtained to the population of servicemembers who participate in IDES. In particular, the survey does not assess the views of servicemembers who disenroll from the process before finishing a stage or those who do not complete prior waves of the survey. Including servicemembers who do not complete all waves would complicate longitudinal analysis, however.

Table 13 presents data reported by DOD on average processing time for active duty cases completed during part of fiscal year 2012—Oct. 2011 to June 2012.
DOD’s data are provided as a supplement to the analyses GAO conducted for fiscal years 2008 through 2011. We did not evaluate the reliability of these data and cannot predict the extent to which any trends will continue for the rest of the fiscal year. Michele Grgich (Assistant Director), Daniel Concepcion, Melissa Jaynes, and Greg Whitney made significant contributions to all aspects of this report. Also contributing to this report were Bonnie Anderson, James Bennett, Mark Bird, Joanna Chan, Brenda Farrell, Jamila Jones Kennedy, Douglas Sloane, Almeta Spencer, Vanessa Taylor, Jeffrey Tessin, Roger Thomas, Walter Vance, Kathleen van Gelder, and Sonya Vartivarian. Military Disability System: Preliminary Observations on Efforts to Improve Performance. GAO-12-718T (Washington, D.C.: May 23, 2012). Military and Veterans Disability System: Worldwide Deployment of Integrated System Warrants Careful Monitoring. GAO-11-633T (Washington, D.C.: May 4, 2011). Military and Veterans Disability System: Pilot Has Achieved Some Goals, but Further Planning and Monitoring Needed. GAO-11-69 (Washington, D.C.: December 6, 2010). Military and Veterans Disability System: Preliminary Observations on Evaluation and Planned Expansion of DOD/VA Pilot. GAO-11-191T (Washington, D.C.: November 18, 2010). Veterans’ Disability Benefits: Further Evaluation of Ongoing Initiatives Could Help Identify Effective Approaches for Improving Claims Processing. GAO-10-213 (Washington, D.C.: January 29, 2010). Recovering Servicemembers: DOD and VA Have Jointly Developed the Majority of Required Policies but Challenges Remain. GAO-09-728 (Washington, D.C.: July 8, 2009). Recovering Servicemembers: DOD and VA Have Made Progress to Jointly Develop Required Policies but Additional Challenges Remain. GAO-09-540T (Washington, D.C.: April 29, 2009). Military Disability System: Increased Supports for Servicemembers and Better Pilot Planning Could Improve the Disability Evaluation Process.
GAO-08-1137 (Washington, D.C.: September 24, 2008). DOD and VA: Preliminary Observations on Efforts to Improve Care Management and Disability Evaluations for Servicemembers. GAO-08-514T (Washington, D.C.: February 27, 2008). DOD and VA: Preliminary Observations on Efforts to Improve Health Care and Disability Evaluations for Returning Servicemembers. GAO-07-1256T (Washington, D.C.: September 26, 2007). Military Disability System: Improved Oversight Needed to Ensure Consistent and Timely Outcomes for Reserve and Active Duty Service Members. GAO-06-362 (Washington, D.C.: March 31, 2006).
Since 2007, DOD and VA have jointly operated IDES--which is intended to expedite benefits for injured servicemembers. IDES replaced the departments' separate processes for evaluating servicemembers for fitness and disability. Initially a pilot at 3 military treatment facilities, IDES is now in place at military treatment facilities worldwide. In previous reports, GAO identified a number of challenges as IDES expanded to more facilities, including staffing shortages and difficulty meeting timeliness goals. In light of IDES' expansion, GAO was asked to examine: (1) the extent to which DOD and VA are meeting IDES timeliness and servicemember satisfaction performance goals, and (2) steps the agencies are taking to improve IDES performance. GAO analyzed IDES timeliness and customer satisfaction data, visited six IDES sites with varying performance, and interviewed DOD and VA officials. Case processing times under the Integrated Disability Evaluation System (IDES) have increased over time, and measures of servicemember satisfaction have shortcomings. Since 2008, annual average processing times for IDES cases have steadily climbed, while the percentage of cases meeting established timeliness goals declined. Average case processing times reached 394 and 420 days for active and reserve component members in fiscal year 2011--compared to goals of 295 and 305 days, respectively, and just 19 percent of active duty and 18 percent of guard or reserve servicemembers completed the process and received benefits within established goals. Of the four phases comprising IDES, the medical evaluation board phase increasingly fell short of timeliness goals, while the physical evaluation board phase, although meeting goals, was taking increasingly more time to complete.
With respect to servicemember satisfaction with the IDES process, GAO found shortcomings in how these data are collected and reported, such as unduly limiting who is eligible to receive a survey and computing average satisfaction scores in a manner that may overstate them. Department of Defense (DOD) officials told GAO they are considering alternatives for gauging satisfaction with the process. DOD and Veterans Affairs (VA) are taking steps to improve IDES performance, but progress to date is uneven and it is too early to assess their overall impact. For example, VA increased resources for completing exams and disability ratings while the Army is hiring additional staff for its medical evaluation boards. VA has met exam timeliness goals in the past several months, but other resources have yet to translate into lower processing times. DOD and VA are pursuing system upgrades so that staff and managers at IDES facilities can better track and manage the progress of servicemembers' cases. IDES officials have been working with the military services to correct case data that were inaccurately entered into VA's IDES tracking system, but have not yet identified a permanent solution to improve the accuracy of data input. Finally, DOD, with VA's assistance, is in the early stages of an in-depth review of the entire IDES process and its supporting IT systems. This effort is intended to improve understanding of how each step contributes to overall processing times and identify opportunities to streamline the process and supporting systems. However, timeframes for completing the review or issuing recommendations have yet to be established. 
To improve monitoring of IDES timeliness and satisfaction, GAO recommends that DOD and VA work together to (1) develop plans for completing the ongoing business process review and implementing any resulting recommendations and (2) improve the accuracy of case information at the point of data entry; and that (3) DOD consider alternative approaches to measuring satisfaction. DOD and VA concurred with GAO's recommendations.
The Forest Service is responsible for managing over 192 million acres of public lands—about 30 percent of all federal lands in the United States. In carrying out its responsibilities, the Forest Service traditionally has administered its programs through 9 regional offices, 155 national forests, 20 national grasslands, and several hundred ranger districts. Figure 1 shows a map of the Forest Service regions and national forests. To sustain the health, diversity, and productivity of the nation’s forests, the Forest Service can propose land management projects that may change the existing condition of vegetation—projects referred to as vegetation management. Vegetation management projects may include, but are not limited to, activities such as using prescribed burning, timber harvests, or herbicides; or thinning trees, grass, weeds, or brush. Projects that include these types of activities are intended to, among other things, maintain healthy ecosystems, reduce the risk of catastrophic wildland fire, and manage the nation’s forests for multiple uses, such as timber, recreation, and watershed management. Under NEPA, agencies such as the Forest Service generally evaluate the likely environmental effects of projects they propose using an EA or, if the projects likely would significantly affect the environment, a more detailed EIS. However, an agency generally need not prepare an EA or EIS if it determines that activities of a proposed project fall within a category of activities the agency has already determined have no significant environmental impact—called categorical exclusions. The agency may then approve projects fitting within the relevant categories using these predetermined categorical exclusions rather than carrying out a project- specific EA or EIS. 
For a project to be approved using a categorical exclusion, the Forest Service must determine whether any extraordinary circumstances exist in which a normally excluded action or project may have a significant effect. To establish categorical exclusions, the Forest Service must determine that the categories of activities proposed for exclusion do not individually or cumulatively have a significant impact on the environment. In doing so, the public is to be provided an opportunity to review and comment on proposed categorical exclusions. Resource conditions that should be considered in determining whether extraordinary circumstances exist include, among other things, the existence of federally listed threatened or endangered species or designated critical habitat; congressionally designated wilderness areas; inventoried roadless areas; and archaeological sites or historic properties. The mere presence of one or more of these conditions does not preclude the use of a categorical exclusion. Rather, it is the degree of the potential effect of the proposed action on these conditions that determines whether extraordinary circumstances exist. The Forest Service may decide to prepare an environmental assessment for a project that could qualify for approval using a categorical exclusion. Figure 2 shows the NEPA process the Forest Service generally follows for assessing the likely environmental impacts of land management activities. As of 2003, the Forest Service had one categorical exclusion for use in approving projects involving certain vegetation management activities—timber stand or wildlife habitat improvement—that, still today, has no acreage limitation.
In 2003, after reviewing and evaluating data on the environmental effects of vegetation management projects that had been carried out by the national forests, the Forest Service added four new vegetation management categorical exclusions, each of which has acreage limitations: (1) hazardous fuels reduction, (2) limited timber harvests of live trees, (3) salvage of dead or dying trees, and (4) removal of trees to control insects and disease. Table 1 summarizes the Forest Service’s five vegetation management categorical exclusions, including the four approved in 2003, along with any corresponding acreage limitations. (App. II provides a complete list of the Forest Service’s categorical exclusions.) The Forest Service requires agency officials responsible for making vegetation management project decisions to prepare and retain a file and decision memo for each vegetation management project approved using a categorical exclusion. Decision memos are to include, among other information, the title of each proposed action, an outline of the decision being made, a description of the public’s involvement in the decision-making process, and the date for implementing the project. Controversy has surrounded the Forest Service’s use of vegetation management categorical exclusions because, on the one hand, critics assert that the use of categorical exclusions is an attempt to circumvent NEPA by precluding the need to perform an EA or EIS. Supporters, on the other hand, have responded that current analysis and documentation requirements for an EA or EIS under NEPA are too burdensome and that the new categorical exclusions allow the Forest Service to more efficiently undertake routine vegetation management activities. Adding to this controversy, the Forest Service initially did not subject projects being approved using the five vegetation management categorical exclusions to a formal notice, comment, and appeal process as it did to projects being approved using an EA or EIS.
As a result of litigation, the Forest Service now requires that vegetation management projects being approved using these categorical exclusions be subject to formal notice, comment, and appeal. Critics argue that such public involvement is essential for responsive decision making, while others argue the formal appeal process is unnecessarily burdensome and prevents the Forest Service from undertaking routine vegetation management activities in a timely manner. The debate surrounding the use of categorical exclusions centers on the types of vegetation management projects approved using categorical exclusions, how often the categorical exclusions are used, and how many acres are treated when using them. For calendar years 2003 through 2005, as shown in table 2, the Forest Service approved about 3,000 vegetation management projects to treat about 6.3 million acres. Of these projects, the Forest Service approved about 70 percent using categorical exclusions and the remaining projects using an EA or EIS. Although a majority of projects were approved using categorical exclusions, these projects accounted for slightly less than half of the total treatment acres because the size of these projects was much smaller than those approved using an EA or EIS. Our analysis of the project data also revealed that the total number of vegetation management projects approved, including those approved using categorical exclusions, varied over the 3-year period, while the number of treatment acres was relatively constant. As can be seen in figure 3, the number of projects approved using an EA or EIS varied little over the 3-year period; however, the number of projects approved using categorical exclusions increased from January 2003 through December 2004—primarily because of increased use of the four new categorical exclusions—and then decreased from January through December 2005.
Forest Service officials said that any number of factors could have influenced the increase and subsequent decrease in the use of categorical exclusions over the 3-year period. However, given the relatively short period of time during which the four new categorical exclusions were in use, these officials said that it was not possible to speculate why the decrease had occurred. In contrast, as can be seen in figure 4, an analysis of the total treatment acres included in projects approved using an EA, EIS, or categorical exclusion did not reveal any notable change over the 3-year period. Additional analyses of the project data also revealed that the number of vegetation management projects approved, including those approved using categorical exclusions, varied by Forest Service region and forest. For example, of all vegetation management projects nationwide, Region 8—the Southern Region—accounted for about 29 percent, of which just over two-thirds were approved using categorical exclusions. In contrast, Region 10—Alaska—accounted for about 2 percent of all vegetation management projects, about 60 percent of which were approved using categorical exclusions.
According to several Forest Service officials, the number of vegetation management projects approved and the type of environmental analysis used in approving them depended on the forest's size, ecology, and location, as can be seen in the following examples: At the 2 million-acre Superior National Forest, a pine, fir, and spruce forest in rural northeastern Minnesota, forest officials relied more on environmental assessments and environmental impact statements in approving projects because most of the projects were larger in geographic coverage and inherently more complex; they used categorical exclusions only for a few smaller-scale projects or projects undertaken in response to unanticipated events, such as a windstorm that blew down trees on several hundred thousand acres; these trees subsequently needed to be removed to reduce the risk of wildland fire. Of the 13 projects approved, forest officials used either environmental impact statements or environmental assessments in approving 8 and categorical exclusions in approving 5. At the 1.8 million-acre Ouachita National Forest, a pine and hickory forest in western Arkansas and southeastern Oklahoma, 163 projects were approved, of which 119 were approved using categorical exclusions. Forest officials said the forest has a very active vegetation management program because, among other things, the types of trees on the forest tend to regenerate quickly and are an excellent product for milling. In addition, a large timber harvest infrastructure is located nearby, which helps ensure that timber-sale contracts can be readily competed and awarded. At the 440,000-acre Cleveland National Forest, a mixed conifer and hardwood forest in Southern California, Forest Service officials said they infrequently prepared an EA or EIS for managing vegetation because the projects were necessarily small, given the forest's limited size.
Cleveland forest officials approved all 18 of the forest's projects using categorical exclusions for calendar years 2003 through 2005. At the 28,000-acre Caribbean National Forest, a humid tropical forest in Puerto Rico, no vegetation management projects were approved. According to forest officials, the forest does not have an active vegetation management program because it focuses more on developing recreational sites and wildlife habitat and because the island does not have a commercial infrastructure to support harvesting or milling timber. Appendixes III and IV provide detailed information on the number of vegetation management projects and acres approved using different types of environmental analyses for calendar years 2003 through 2005. Of the almost 2,200 projects approved using categorical exclusions over the 3-year period, the Forest Service most frequently used the categorical exclusion for improving timber stands or wildlife habitat; this categorical exclusion was used on half of the projects to treat about 2.4 million acres. As can be seen in table 3, for the remaining projects, the Forest Service primarily used the categorical exclusion for reducing hazardous fuels, followed by those for salvaging dead or dying trees, conducting limited timber harvests of live trees, and removing trees to control the spread of insects or disease; in all, these categorical exclusions were used to approve treatments on about a half-million acres. In addition, the size of approved projects varied depending on the categorical exclusion and any associated acreage limitation. According to Forest Service officials, a number of factors influenced why the categorical exclusion for timber stand or wildlife habitat improvement was the most frequently used and covered the most treatment acreage. For example, Santa Fe National Forest officials said that the forest has relied heavily on this exclusion because it does not have an acreage limitation.
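A rough back-of-the-envelope calculation, using the rounded figures above rather than project-level data, illustrates how much larger the average timber stand or wildlife habitat improvement project was than projects approved under the other four exclusions:

```python
# Back-of-the-envelope averages from rounded figures in the report;
# actual project sizes varied widely around these means.
ce_projects = 2200                     # all projects approved using categorical exclusions
habitat_projects = ce_projects // 2    # "half of the projects" -> about 1,100
habitat_acres = 2.4e6                  # acres under the timber stand/wildlife habitat exclusion
other_acres = 0.5e6                    # "about a half-million acres" under the other four

avg_habitat = habitat_acres / habitat_projects             # roughly 2,200 acres per project
avg_other = other_acres / (ce_projects - habitat_projects) # roughly 450 acres per project

print(round(avg_habitat), round(avg_other))
```

On these rounded figures, the average habitat improvement project covered several times the acreage of an average project under the size-limited exclusions, which is consistent with that exclusion's lack of an acreage limitation.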
Also, officials at the George Washington and Jefferson National Forests and the Monongahela National Forest said they relied on this categorical exclusion more than others because its use was consistent with their forest management plans, which dictate the types of activities that may take place on their forests. Further, Okanogan-Wenatchee National Forests officials said they relied primarily on the timber stand or wildlife habitat improvement categorical exclusion because of its long-standing use and the beneficial nature of the projects undertaken, which enhances their public acceptability. We also analyzed the categorical exclusion for timber stand or wildlife habitat improvement to determine whether its lack of a size limitation resulted in larger projects than those undertaken using the other four exclusions. As can be seen in table 4, we found that almost 92 percent of the projects approved using the categorical exclusion for timber stand or wildlife habitat improvement were smaller than 5,000 acres—which approximates the size limitation of the categorical exclusion for hazardous fuels reduction, the largest limitation among the four more recent categorical exclusions. Appendixes V and VI provide detailed information on the number of vegetation management projects and acres approved using different categorical exclusions for calendar years 2003 through 2005. Of the 509 ranger districts, 11 percent had not used any of the five vegetation management categorical exclusions during the 3-year period. As can be seen in table 5, the percentage of ranger districts that did not use specific categorical exclusions ranged widely, from 91 percent not using the category for the removal of trees to control the spread of insects or disease to 32 percent not using the category for timber stand or wildlife habitat improvement.
Reasons cited by the ranger districts also varied: The primary reasons cited for not using the category for the removal of trees to control the spread of insects or disease were the lack of insect- or disease-infested trees and the fact that projects that could have fit the category had already been, or were to be, included in an EA or EIS. Similarly, the primary reasons cited for not using the category for timber stand or wildlife habitat improvement were that projects that could have fit the category had already been, or were to be, included in an EA or EIS and that no projects were undertaken to improve timber stands or wildlife habitat. Ranger district officials we interviewed offered some reasons why vegetation management categorical exclusions may not be used: The Laurentian Ranger District, located in northern Minnesota in the Superior National Forest, did not use the categorical exclusion for the removal of trees to control the spread of insects or disease because, according to district officials, it had no insect- or disease-infested trees suitable for harvest. The Tonasket Ranger District, located in north-central Washington in the Okanogan-Wenatchee National Forests, had not used that categorical exclusion because, according to district officials, its 250-acre size limitation constrains its use: the district has huge areas infested with beetles and mistletoe, and to be effective, any salvage would have to cover a much larger area. The Canyon Lakes Ranger District, located in north-central Colorado in the Arapaho-Roosevelt National Forests, had not used the categorical exclusion for timber stand or wildlife habitat improvement. According to ranger district officials, this categorical exclusion was not used because project planning typically consists of conducting an EA or EIS.
These types of environmental analyses allow the district to better evaluate multiple activities over larger geographic areas in a single analysis—which is more efficient than approving different projects using several vegetation management categorical exclusions. The Mountainair Ranger District, located in central New Mexico in the Cibola National Forest, had not used the categorical exclusion for limited timber harvests of live trees primarily because, according to district officials, the state lacked a commercial timber industry capable of harvesting and milling timber. District officials also said that timber harvests would have to be much larger than 70 acres and include much larger-diameter trees to be profitable and attract timber companies from out of state. Appendix VII provides more detailed information on the primary reasons cited by the ranger districts for not using vegetation management categorical exclusions for calendar years 2003 through 2005. Because four of the five categorical exclusions have been available only for the past 3 years, it is premature to draw any conclusions about trends in their use. More information, collected over a longer period of time, is needed to better determine how the agency is using categorical exclusions, what types of projects are being approved, and which forests are using them. More important, such information will be useful in addressing some of the controversial issues surrounding the use of categorical exclusions, such as whether the approved projects, individually or cumulatively, have any significant effect on the environment and whether their use is enabling more timely Forest Service vegetation management. We received written comments on a draft of this report from the Forest Service. The Forest Service generally agreed with our findings and observations—specifically, that it is premature to extrapolate trends given the studied categorical exclusions' limited period of use.
The agency also provided technical comments, which we incorporated as appropriate. The Forest Service's letter is reprinted in appendix VIII. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Secretary of Agriculture, the Chief of the Forest Service, and other interested parties. We will also make copies available to others upon request. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. We were asked to determine how many vegetation management projects the Forest Service approved for calendar years 2003 through 2005, including those approved using categorical exclusions, and the number of associated acres proposed for treatment. To obtain this information, we developed a questionnaire addressed to forest supervisors. We used a questionnaire because the Forest Service has no centralized data on (1) the number of vegetation management projects undertaken for calendar years 2003 through 2005 or the acres proposed for treatment under those projects; (2) which projects were approved using categorical exclusions, which categorical exclusions were used, and the associated acres to be treated; or (3) the reasons why categorical exclusions were not used.
While the Forest Service has a national database—the Planning, Appeals, and Litigation System—that provides some information on projects and the types of environmental analysis used in approving them, the system generally does not include data prior to January 2005 or the number of treatment acres. Because information about individual vegetation management projects is located primarily at the district offices, we asked forest supervisors to coordinate the completion of the questionnaire through each forest's environmental planning coordinator, who is familiar with National Environmental Policy Act (NEPA) requirements. We also asked that the environmental planning coordinator work with other forest officials, such as ranger district officials, to respond to the questionnaire. After developing the questionnaire, we pretested it at the Cibola and Santa Fe National Forests in New Mexico, the Humboldt-Toiyabe National Forests in Nevada, the George Washington and Jefferson National Forests in Virginia, the Monongahela National Forest in West Virginia, and the Ouachita National Forest in Arkansas. For this report, we defined a vegetation management project as any project that may include, but is not limited to, activities such as timber sales; salvage sales; and the lopping, dropping, chipping, shredding, burning, masticating, or other thinning of trees, scrub, shrub, grass, weeds, other understory, or brush for multiple purposes. We defined activities as discrete actions or tasks intended to accomplish decision objectives. Activities included, for example, stream improvements, precommercial thinning, commercial thinning, slash piling and burning of harvest units, timber harvests, and underburning outside harvest units. For each vegetation management project approved, we asked forest officials to identify what type of decision document was used.
Decision documents include records of decision for environmental impact statements, decision notices for environmental assessments, and decision memos for categorical exclusions. We also asked Forest Service officials to provide data on the total number of acres proposed for vegetation management or to indicate if the acreage was unknown. We asked forest officials not to double-count acreage when multiple treatments were to occur on the same acreage. In reporting acreage data, the number of acres proposed for treatment may not necessarily correspond to the number of acres actually treated. For projects approved using categorical exclusions, we asked Forest Service officials to identify the associated acreage proposed for treatment and which of the following five Forest Service Handbook categorical exclusions were used: (1) timber stand or wildlife habitat improvement activities; (2) hazardous fuels reduction activities using prescribed fire, not to exceed 4,500 acres, and mechanical treatments, not to exceed 1,000 acres; (3) harvest of live trees, not to exceed 70 acres; (4) salvage of dead or dying trees, not to exceed 250 acres; or (5) commercial and noncommercial harvest of trees to control insects or disease, not to exceed 250 acres. We asked only for information on the use of these five categorical exclusions. Thus, our evaluation does not include projects that the Forest Service approved using other categorical exclusions that may have included vegetation management activities. For example, the Forest Service has a categorical exclusion for the repair and maintenance of roads, trails, and land-line boundaries that could include vegetation management activities, but the primary purpose of such projects is not vegetation management. We also did not include the categorical exclusions for regeneration and postfire rehabilitation—both of which could include vegetation management in the form of planting seedlings or trees.
These types of activities, however, may have been included in projects that were approved using the five categorical exclusions we evaluated. To determine which Forest Service ranger districts were not using categorical exclusions for managing vegetation, and the primary reasons for not doing so, we asked Forest Service officials whether ranger districts within national forests had used any of the five categorical exclusions for calendar years 2003 through 2005. If a ranger district had not used one of the five exclusions, we asked the forests to select a primary reason from a list of reasons that we provided. (If the primary reason was not included on our list, we asked the forests to provide other reasons.) In developing our list of reasons, we reviewed the conditions established by the Forest Service that prevent the use of the categorical exclusions. We also pretested the list with Forest Service officials at six national forest units and at ranger districts in those locations. While some ranger districts may have had multiple reasons for not using a particular categorical exclusion, we asked Forest Service officials to select the primary one. We verified the accuracy of the survey responses by randomly selecting about 3 percent of the projects identified by the Forest Service on completed questionnaires. After selecting a project, we requested supporting decision documents—for example, the record of decision for an environmental impact statement, the decision notice for an environmental assessment, or the decision memo for a project approved using a categorical exclusion—and verified the information submitted on the questionnaire against those documents. In total, we verified information for 84 projects and determined that the data submitted were sufficiently reliable for our purposes. We also examined the data for aberrations such as blank entries and inconsistent responses and, as necessary, contacted the appropriate forest officials for clarification.
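The verification step described above amounts to a simple random sample drawn without replacement. The sketch below is a hypothetical illustration, not GAO's actual selection procedure; the project identifiers and seed are invented:

```python
import random

# Hypothetical sketch of drawing the ~3 percent verification sample.
# Identifiers are invented for illustration only.
project_ids = [f"P{i:04d}" for i in range(1, 3019)]  # 3,018 reported projects

rng = random.Random(7)                 # fixed seed so the sketch is repeatable
sample = rng.sample(project_ids, 84)   # the 84 projects GAO verified

assert len(set(sample)) == 84          # without replacement: no duplicates
print(f"{len(sample) / len(project_ids):.1%}")  # about 2.8 percent
```

Sampling without replacement ensures each reported project can be selected at most once, so the 84 verified projects are distinct.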
The data we gathered have some limitations. The information obtained from the national forests was self-reported, and we were unable to independently verify that all vegetation management projects approved during the 3-year period were reported. To gauge the accuracy of the number of projects reported, we compared information on Forest Service Schedule of Proposed Actions reports with information submitted on our questionnaire. The action reports, which are prepared quarterly by each of the national forests, summarize activities the forests plan to undertake during the quarter, including proposed activities that have approved decision documents, such as records of decision, decision notices, or decision memos. These reports are available on individual national forest Web sites and generally span at least two quarters. We identified 113 projects that were listed on available quarterly proposed action reports as projects the Forest Service approved using an environmental assessment, environmental impact statement, or categorical exclusion and that appeared to be for vegetation management but that were not included on the questionnaires. We randomly selected 12 of these projects and followed up with Forest Service officials to determine why they had not been reported. We found that (1) six of the projects were not for vegetation management and thus were correctly excluded from the questionnaires and our data, (2) three of the projects were initially excluded but were subsequently added to revised questionnaires and our data as a result of our earlier follow-up to clarify other issues, and (3) three of the projects were erroneously omitted from the questionnaires and should have been included in our data but were not. Forest Service officials said the three projects were erroneously omitted because paperwork was misfiled due to an administrative oversight or a district office consolidation, or because of confusion over whether the project had been approved.
Based on this analysis, we found that the data are sufficiently reliable for our reporting purposes. Table 6 lists the 12 national forest units and 23 ranger districts we selected for interviews, using a nonprobability sample, to better determine why categorical exclusions may or may not have been used in approving vegetation management projects. The table also lists the Forest Service regions in which the forests and ranger districts are located, and their geographic locations. We conducted our work from September 2005 through August 2006 in accordance with generally accepted government auditing standards. As shown in table 7, the Forest Service has two types of categorical exclusions: those that require the agency to prepare a decision memo for each project approved using a categorical exclusion, and those that do not require such documentation. The Forest Service Handbook provides information on these categorical exclusions, as well as guidelines for preparing decision memos. 1. We have added language further explaining the Council on Environmental Quality's role in overseeing agencies' actions to implement the National Environmental Policy Act. 2. We have added language clarifying that, while decision memos may not include all of the analyses performed by the Forest Service in support of its decisions to use categorical exclusions, agency project files are to include such information. 3. We have added language expanding on the court's ruling on the nature of projects subject to public notice, comment, and appeal under the Appeals Reform Act. 4. We have added language further clarifying when the Forest Service can use the hazardous fuels reduction categorical exclusion. 5. We have added language specifically identifying the categorical exclusions for regeneration and postfire rehabilitation as ones that were not included in our scope and methodology.
In addition to the contact named above, David Bixler (Assistant Director), Nancy Bowser, Rich Johnson, Marcia Brouns McWreath, Matthew Reinhart, Jerry Sandau, Carol Herrnstadt Shulman, and Walter Vance made major contributions to this report.
The Forest Service manages over 192 million acres of land, in part through vegetation management projects such as thinning trees. The National Environmental Policy Act (NEPA) requires the Forest Service to prepare either an environmental assessment (EA) or an environmental impact statement (EIS) before approving a project that may significantly affect the environment. The agency generally does not need to prepare such environmental analyses, however, if the project involves a category of activities that it previously found to have no significant environmental effects—a category known as a categorical exclusion. As of 2003, the Forest Service had one categorical exclusion—for activities to improve timber stands or wildlife habitat. It has since added four new exclusions, but little is known about their use. GAO was asked to determine, for calendar years 2003 through 2005, (1) how many vegetation management projects the Forest Service approved, including those approved using categorical exclusions; (2) which categorical exclusions the agency used in approving projects; and (3) if field offices were not using categorical exclusions, why not. To answer these objectives, GAO surveyed Forest Service officials from all 155 national forests. In commenting on a draft of this report, the Forest Service generally agreed with GAO's findings and observations. For calendar years 2003 through 2005, the Forest Service approved 3,018 vegetation management projects to treat about 6.3 million acres. Of these projects, the Forest Service approved about 28 percent using an EA or EIS, to treat about 3.4 million acres, and the remainder using categorical exclusions. Although a majority of the projects were approved using categorical exclusions, these projects accounted for less than half of the total treatment acres.
The number and size of projects and types of environmental analysis used during the 3-year period varied, depending upon forest size, ecology, and location, according to Forest Service officials. Of nearly 2,200 vegetation management projects approved using categorical exclusions, the Forest Service approved half of them using the categorical exclusion for improving timber stands or wildlife habitat. In approving the remaining projects, the agency primarily used the categorical exclusion for reducing hazardous fuels, followed by those for salvaging dead or dying trees, conducting limited harvests of live trees, and removing trees to control the spread of insects or disease. The projects approved using the categorical exclusion to improve timber stands or wildlife habitat accounted for about 2.4 million of the 2.9 million acres to be treated under projects approved using categorical exclusions. About 11 percent of the Forest Service's 509 field offices had not used any of the five vegetation management categorical exclusions during the 3-year period. The reasons why they had not used specific categorical exclusions varied by office and categorical exclusion. For example, about 91 percent of the field offices had not used the categorical exclusion for the removal of trees to control the spread of insects or disease primarily because they did not have a sufficient number of insect- or disease-infested trees. Similarly, 32 percent of the field offices had not used the categorical exclusion to improve timber stands or wildlife habitat, primarily because no projects of this type had been proposed during the 3-year period.
The rising costs of natural hazard events have led many to recognize the benefits of hazard mitigation. Obligations from FEMA’s disaster relief fund grew from $2.8 billion in 1992 to $34.4 billion in 2005 as a result of a series of unusually large events and the increasing federal role in assisting communities and individuals affected by disasters. Given these increasing costs, Congress passed the Disaster Mitigation Act of 2000 (DMA 2000) to establish a national hazard mitigation program to (1) reduce the loss of life and property, human suffering, economic disruption, and resulting disaster assistance costs from natural hazard events and (2) provide a source of predisaster mitigation funding that would assist states and local governments in implementing effective hazard mitigation measures. It also established several initiatives designed to improve state and local hazard mitigation planning—the process these governments use to identify risks and vulnerabilities associated with natural hazards and to develop long-term strategies for protecting people and property in future hazard events. FEMA, within the Department of Homeland Security, is responsible for leading the country’s efforts to prepare for, prevent, respond to, and recover from disasters. In recent years, FEMA has made hazard mitigation a primary goal in its efforts to reduce the long-term effects of natural hazards. For example, FEMA provides guidance for state and local governments to use in developing their hazard mitigation plans, reviews and approves these plans, and administers a number of hazard mitigation grant programs to provide funds to state and local governments to undertake mitigation activities. Table 1 describes FEMA’s hazard mitigation grant programs and their fiscal year 2006 funding levels. FEMA also manages the National Flood Insurance Program (NFIP), which was established by the National Flood Insurance Act of 1968. 
The NFIP enables property owners in participating communities to purchase flood insurance as protection against flood losses. When a community chooses to join the NFIP, it must adopt and enforce the minimum floodplain management regulations established by the program, which are designed to reduce future flood damages. Currently, over 20,300 communities participate in the NFIP. According to FEMA, an estimated $1.2 billion in flood losses is avoided annually because communities implement the NFIP's floodplain management requirements. In addition to providing flood insurance and helping to reduce flood damages through floodplain management regulations, the NFIP identifies and maps the nation's floodplains. These maps help communities identify their flood risks and are used in implementing floodplain management regulations. While FEMA's hazard mitigation responsibilities span all natural hazards, other federal agencies that participate in hazard mitigation primarily focus their efforts on particular hazards. Hazard mitigation activities conducted by other federal agencies include providing training, disseminating information, and conducting regional assessments. Many federal agencies have responsibilities related to natural hazard mitigation, including the following: USGS, within the Department of the Interior, is responsible for helping to reduce losses from hazards such as earthquakes, landslides, and volcanic eruptions. USGS provides scientific information that communities can use when developing plans for reducing losses associated with these hazards. Other agencies also rely on USGS information to help them fulfill their responsibilities regarding natural hazards. For example, NOAA's National Weather Service relies on USGS real-time streamflow information for developing flood forecasts and on data from USGS-supported seismic networks as a primary input for tsunami warnings.
Five federal agencies—the Forest Service within the Department of Agriculture and the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and the National Park Service within the Department of the Interior—all work to minimize losses resulting from wildland fires. For example, these five agencies work to restore the health of the nation’s forests and grasslands to increase resilience to the effects of wildland fires. NOAA, within the Department of Commerce, focuses on the condition of the oceans and the atmosphere and conducts activities to reduce losses associated with natural hazards such as hurricanes, tornadoes, coastal flooding, and tsunamis. For instance, NOAA’s National Weather Service routinely uses outreach, education, and planning to help communities mitigate these natural hazards. NOAA also works with coastal communities to provide financial, technical, and training support to develop more robust hazard mitigation and land-use plans and improve building code and design standards. The Corps of Engineers builds flood damage reduction projects throughout the country. Typically these projects include levees, flood walls, channels, and small dams that help reduce losses associated with floods. Generally, communities fund a portion of the construction costs of the projects and agree to operate and maintain them. Although FEMA provides leadership for reducing the country’s losses caused by natural hazards, it routinely collaborates with other federal agencies as well as state and local governments, among others. Collaboration is a tool that federal agencies use to work with one another and with various stakeholders, generally through partnerships with state and local governments and communities. In previous work, we identified key practices that could help enhance and sustain federal agency collaboration. 
These practices include (1) defining and articulating a common outcome; (2) establishing mutually reinforcing or joint strategies; (3) identifying and addressing needs by leveraging resources; (4) agreeing on roles and responsibilities; (5) establishing compatible policies, procedures, and other means of operating across agency boundaries; (6) developing mechanisms to monitor, evaluate, and report on results; (7) reinforcing agency accountability for collaborative efforts; and (8) reinforcing individual accountability for collaborative efforts. Flooding is the most common and destructive hazard facing the nation, but earthquakes, hurricanes, wildland fires, tornadoes, and landslides are also significant risks in certain regions. For example, while floods are potential hazards in most parts of the country, hurricanes are most likely to occur on the Atlantic and Gulf Coasts, and large wildland fires have mostly affected the western United States. The risks posed by natural hazards are exacerbated by the fact that one natural hazard can lead to another. Earthquakes, for instance, can cause tsunamis, landslides, and flooding due to levee failures. In recent years, the risk posed by natural hazards has been increasing, fueled by factors that include population trends and the potential effects of climate change. Many hazard-prone regions are experiencing significant population growth, among them the coast of Florida—the most hurricane-prone state in the country—where the population increased by 75 percent between 1980 and 2003. In addition, climate change is potentially increasing the risks faced by some areas by altering the frequency and severity of hurricanes, tornadoes, severe thunderstorms, wildland fires, and other weather-related events. Several natural hazards, such as hurricanes, earthquakes, and wildland fires, pose risks to certain areas of the United States.
Floods, however, are the most common and destructive hazard in the United States, and all states are likely to experience some degree of flooding. There are many different kinds of floods, including regional floods, flash floods, floods resulting from dam and levee failures, and storm surge floods. Floods can result in the loss of lives, extensive damage to property and agriculture, and large-scale disruptions to business and infrastructure, such as transportation and water and sewer services. According to our analysis of FEMA data, counties in the Gulf Coast states experienced the greatest concentration of major flood disaster declarations from 1980 through 2005 (fig. 1). Additionally, because flooding is so widespread, it presents risks to a large segment of the population. For example, we found that between 1980 and 2005, approximately 97 percent of the U.S. population lived in a county that experienced at least one declared flood disaster; about 93 percent lived in counties that had experienced two or more flood disaster declarations; and 45 percent lived in counties that had experienced six or more flood disaster declarations. NOAA estimates that floods cause about 140 deaths each year, and the Corps of Engineers estimates floods cost $6 billion in average annual losses. Economic losses continue to rise, in part, due to increased urbanization and coastal development. Hurricanes typically produce violent winds, heavy rains, and storm surges and can result in flooding, coastal erosion, and ecological damage. While Florida has the greatest chance of experiencing a major hurricane (category 3 or higher), our analysis of NOAA data shows that states along the entire Atlantic coast, particularly North Carolina, the Gulf Coast states, and occasionally Hawaii, are also at significant risk for hurricanes. Additionally, we found that approximately 29 percent of the U.S. population lived in a county that experienced at least one hurricane from 1980 through 2005.
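The county-level exposure figures above (e.g., the share of the population in counties with at least one, two, or six declared flood disasters) reflect a straightforward aggregation of county populations against disaster-declaration counts. The sketch below illustrates that calculation; the county names, populations, and declaration counts are entirely hypothetical, and an actual analysis would join FEMA disaster-declaration records to Census county population data.

```python
# Illustrative sketch of the county-level exposure analysis described in the
# text: what share of the population lives in counties with at least k
# declared flood disasters? All data below are hypothetical.
counties = [
    # (county, population, declared flood disasters, 1980-2005)
    ("County A", 900_000, 7),
    ("County B", 500_000, 3),
    ("County C", 350_000, 1),
    ("County D", 250_000, 0),
]

total_pop = sum(pop for _, pop, _ in counties)

def share_with_at_least(k):
    """Fraction of total population in counties with >= k declarations."""
    exposed = sum(pop for _, pop, n in counties if n >= k)
    return exposed / total_pop

for k in (1, 2, 6):
    print(f">= {k} declarations: {share_with_at_least(k):.0%}")
```

The same aggregation, with the threshold redefined (hurricanes experienced, large wildland fires, seismic-risk tier), yields the analogous exposure figures reported for the other hazards.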
During this same time, counties in eight states—Alabama, Florida, Louisiana, Mississippi, North Carolina, South Carolina, Texas, and Virginia—experienced five or more hurricanes (fig. 2). Before 2005, Hurricane Andrew, which occurred in 1992, was the single most costly hurricane in terms of private insurer losses, causing $22.3 billion in losses (in 2006 dollars). Comparatively, Hurricane Katrina caused $39.3 billion in private insurer losses (in 2006 dollars). Earthquakes are a sudden slipping or movement of a portion of the earth’s crust that releases energy in the form of seismic waves, which can cause shaking and damage over large distances. USGS has estimated that 39 states face significant earthquake risk. Our analysis showed that approximately 41 percent of the U.S. population resided in counties that face medium to high seismic risk. While the risk is concentrated on the West Coast, USGS states that Alaska is the most earthquake-prone state and one of the most seismically active regions in the world, experiencing a magnitude 7 earthquake almost every year and a magnitude 8 or greater earthquake every 14 years (on average). In addition to these areas, the New Madrid seismic zone (which is located in parts of Arkansas, Illinois, Kentucky, Missouri, and Tennessee) also faces medium to high seismic risk (fig. 3). Historically, some of the largest earthquakes in the United States have been recorded along the New Madrid fault, and USGS predicts that the region has a 25 to 40 percent chance of experiencing a magnitude 6 or greater earthquake in the next 50 years. Although earthquakes occur with less frequency in the eastern and central United States, according to USGS, a smaller magnitude earthquake in these regions would be just as damaging as a higher magnitude earthquake in the western United States.
For example, according to USGS, because of geologic conditions, an earthquake in the east or central part of the country would be felt over a much larger area, and infrastructure in these regions is older and has not been built to withstand earthquake shaking. Similar to a hurricane, a single earthquake can cause great losses. For example, the 1994 earthquake in Northridge, California, caused approximately $59.8 billion in direct losses (in 2006 dollars). FEMA estimates future average annual earthquake losses in the United States at $5.6 billion a year. Wildland fires, which can be triggered by lightning strikes or human activity, play an important ecological role in wildland areas. On average, 100,000 wildland fires are reported each year, but 95 percent are quickly extinguished. Fires that escape initial suppression can grow into large, high-intensity fires that burn quickly and can threaten structures in the wildland-urban interface—the area where structures and other development meet or intermingle with wildlands. According to our analysis, nearly 24 percent of the U.S. population lived in a county where a wildland fire burned over 1,000 acres from 1980 through 2005. Figure 4, which shows the number of these large wildland fires, also shows that they are most likely to occur in western states and Florida. In eight western states—Arizona, California, Idaho, Montana, Nevada, New Mexico, Utah, and Wyoming—over 80 percent of the population lived in a county that experienced a wildland fire of over 1,000 acres during the 25-year period we analyzed. According to Forest Service officials, fires of less than 1,000 acres can be equally damaging to structures in other parts of the United States, especially in the eastern and southern regions of the country.
Additionally, the officials noted that in some western regions of the country, some of the large wildland fires that occur play an important ecological role and may pose less of a threat to life and property because they occur in less populated areas. As we previously reported, wildland fires burned an average of 6.1 million acres per year between 2000 and 2004 and burned an average of about 850 homes each year since 1984. A tornado is a violently rotating column of air extending from a thunderstorm to the ground. The most violent tornadoes are capable of tremendous destruction, with damage paths as wide as a mile and as long as 50 miles. In an average year, about 1,000 tornadoes are reported across the United States. While tornadoes have been documented in every state, NOAA data show that the central states are most likely to experience the most severe tornadoes—those with wind speeds of 158 miles per hour or greater. “Tornado Alley,” an area covering a stretch of land from central Texas to northern Iowa and from central Kansas and Nebraska to western Ohio, has the highest tornado activity in the nation (fig. 5). Another significant zone of tornado frequency is the central southeast United States, including Louisiana, Mississippi, Alabama, and Tennessee. From 1980 through 2004, five states—Alabama, Arkansas, Kansas, Oklahoma, and Texas—each had one county that experienced five or more severe tornadoes. Tornadoes pose a significant risk to life, causing an average of 80 deaths and over 1,500 injuries a year. Tornadoes can also be costly. For example, NOAA estimates that approximately once per decade, a devastating tornado in the United States has caused $1 billion or more in damages. Landslides are the movement of a mass of rock, debris, or earth down a slope and can range from a rapidly moving rock avalanche to a more slowly moving earth slide and ground failure. 
The greatest landslide damage occurs in the Appalachian and Rocky Mountains, as well as the Pacific Coast regions, but USGS data show that all 50 states can experience landslides and other ground-failure problems (fig. 6). We found that from 1980 through 2005, approximately 66 percent of the U.S. population lived in an area where the landslide risk was moderate to high. Landslides can have a significant adverse effect on infrastructure and threaten transportation corridors, fuel and energy conduits, and communications linkages. USGS estimates that landslides cause, on average, $3.5 billion in damage repair and between 25 and 50 deaths a year. Other hazards also present risk to portions of the United States. Some of these hazards, including thunderstorms, extreme heat, and winter storms, can occur in most areas of the country. Tsunamis—a series of long waves generated by any large-scale disturbance of the sea—can occur in all U.S. coastal regions, but according to NOAA, the West Coast, Alaska, and Hawaii are the most vulnerable. Although less frequent than other hazards in the United States, tsunamis are a significant natural hazard with great destructive potential. For example, a 1964 Alaska tsunami led to 110 deaths, some as far away as Crescent City, California. In addition, according to USGS, in the past few hundred years volcanoes have erupted in Alaska, California, Hawaii, Oregon, and Washington. Since 1980, 45 eruptions and 15 cases of notable volcanic unrest have occurred at 33 U.S. volcanoes. In addition to the risk that an individual hazard poses, some hazards present multiple risks because they can cause another hazard to occur. For example, hurricanes often produce torrential rain that, in addition to causing floods, can trigger landslides or breach levees. Hurricanes can also damage trees in wildland areas, increasing wildland fire risk in these areas by creating fuel accumulation.
Earthquakes can cause tsunamis, landslides, and flooding (e.g., due to levee failures). For example, the devastating December 2004 Indian Ocean tsunami was triggered by an earthquake. In addition, drought can contribute to wildland fires, which can induce other hazards, including floods and landslides. The degradation of soil in an area burned by a wildland fire prevents vegetation from growing back, including features that would hold the soil in place during heavy rains. Consequently, landslides are more likely to occur in burned areas. Population growth in hazard-prone areas and the resulting increase in development in these areas are increasing the vulnerability of the nation to losses resulting from natural hazards. According to a study conducted by NOAA, coastal areas are among the most rapidly growing and developed areas in the nation, with a large percentage of the U.S. population living in coastal counties. These areas are susceptible to hurricanes, earthquakes, flooding, and other natural hazards. For example, the coastal population in Florida grew by 7.1 million people, a 75 percent increase, from 1980 to 2003. According to the study, Florida led all coastal states in issuing building permits for single- and multifamily housing units in coastal counties from 1999 to 2003. Additionally, the number of people living on the California coast grew by almost 10 million between 1980 and 2003, putting more people and property at risk from earthquake damage. Los Angeles County experienced the greatest increase in population of all coastal counties from 1980 to 2003. A study on the potential damage that an earthquake could cause in downtown Los Angeles found that damages from such an event would likely fall between $82 billion and $252 billion. Other areas prone to natural hazards are also experiencing significant population growth and development. 
For example, many of the fastest-growing areas in the United States are in the wildland-urban interface, and development in these areas increases the threat of wildland fires. Experts estimate that between 1990 and 2000, 60 percent of all new housing units in the United States were built in the wildland-urban interface, and that by 2000 about 38 percent of housing units overall were located in these areas. Additionally, urban growth in tornado-prone areas, which in many cases were previously sparsely populated, is increasing the chances that a tornado will hit a heavily developed area. For example, in February 2007, a series of tornadoes damaged over 1,500 homes in 4 central Florida counties, 2 of which have been among the 100 fastest-growing counties in the nation in recent years. Further, as we have previously reported, key scientific assessments indicate that climate change is expected to alter the frequency or severity of weather-related natural hazards themselves, increasing the nation’s vulnerability to such hazards. Global temperatures have increased in the last 100 years and are projected to continue to rise over the next century. Scientific assessments suggest that the potential effects of climate change on weather-related events could be significant. For example, increasing temperatures may affect communities by altering the frequency or severity of hurricanes, tornadoes, severe thunderstorms, and wildland fires. In particular, Forest Service officials told us that effects of climate change, such as drought, can increase the risk of wildland fires, especially east of the Mississippi River because of the high density of vegetation and population. We also reported that experts found that global sea levels rose several centimeters during the past century, potentially increasing the magnitude of hurricane storm surges in some areas. Rising sea levels can also increase coastal inundation and erosion in low-lying areas, resulting in property losses.
Hazard mitigation planning activities help communities identify risks from natural hazards and develop mitigation strategies to reduce these risks. The strategies can be implemented through land-use planning tools such as the acquisition of hazard-prone land and development regulations that provide a way to reduce vulnerability over the long term. Building codes and design standards also can be used to help reduce losses from natural hazards by creating structures that are better able to withstand a hazard event. State and local building codes can be designed to reflect communities’ hazard risks and can specify more rigorous requirements to address these hazards. Additionally, design, construction, and landscaping features can be included in structures built in hazard-prone areas. For example, construction features such as hurricane straps, which provide extra support in connecting the roof to a building, can help reduce damages during hurricanes. Finally, hazard control structures such as levees, dams, and floodwalls can help protect existing at-risk developments from flood losses. The best time for communities to take steps to address their natural hazard risks is before a disaster occurs. Hazard mitigation planning, which occurs at the state and local level, helps communities assess their natural hazard risks and develop mitigation strategies. The process typically involves a range of stakeholders, including neighborhood and environmental groups, local businesses, and others. The involvement of stakeholders is an important component of the planning process because it assists in identifying the most vulnerable populations and facilities in the community and in creating community support to implement the plan. The assessment can include gathering information on the types, locations, and potential extent of natural hazards and the types and numbers of buildings, infrastructure, and critical facilities located in hazard areas.
Finally, based on a community’s assessment of its risks, stakeholders can identify mitigation goals and objectives. As a condition for receiving hazard mitigation assistance, states and local communities must develop hazard mitigation plans and have FEMA approve them. According to FEMA, all 50 states have approved plans, and approximately 60 percent of the U.S. population lives in communities with approved local mitigation plans. One county emergency management official with whom we spoke said that a local mitigation plan is an important component of a community’s mitigation program. He noted that developing such a plan requires examining other local plans (e.g., community development and capital improvement plans) to ensure that mitigation goals and objectives are consistent with other community goals. Incorporating elements of communities’ hazard mitigation plans into community development plans can facilitate the implementation of hazard mitigation goals. A land-use planning expert told us that incorporating mitigation plans into other long-term strategies not only helps with implementation but also can prevent long-term mitigation objectives from being overlooked when communities develop other short-term objectives. Additionally, a state emergency management department official told us that local mitigation plans are particularly important because they establish a consistent long-term hazard mitigation approach that survives the high staff turnover rates local governments often face. Communities’ development and other plans can be implemented through land-use planning tools and development regulations that provide a way to reduce vulnerability to natural hazards over the long term. For example, communities can acquire hazard-prone land and retain it as open space in order to limit development in the most at-risk areas, particularly in floodplains and coastal zones.
Acquiring flood-prone properties permanently eliminates losses from properties that flood repeatedly. Communities can also use zoning to designate how land will be used, control such features as building density and lot sizes, and restrict building in hazardous areas through the use of setbacks—minimum distances between development and hazardous areas. For example, coastal zone management regulations can impose setbacks to control construction near the coast. Another method of limiting development in hazard-prone areas involves subdivision, the process of dividing a large lot into smaller lots to facilitate development. “Clustering,” for instance, allows developers to build the same number of units on their land by placing more buildings on the less hazardous areas and limiting development in the more hazardous areas. Communities also use other types of planning, such as capital improvement planning, which guides decisions on investing in new infrastructure and repairing and replacing existing infrastructure. Capital improvement planning can prevent damage to infrastructure by making sure it is not built in hazard-prone areas and requiring that existing infrastructure located in such areas be strengthened to provide additional resilience during natural hazards. Capital improvement plans can include activities such as raising bridge heights in flood-prone areas and improving the seismic strength of buildings at risk from earthquakes. Additionally, these plans can be used to guide development away from hazard-prone areas by, for example, not extending water and sewer lines and other utility services into these areas. California’s history of earthquakes has focused attention on the need to strengthen the state’s infrastructure against seismic risks. A seismic safety expert estimated that between 1989 and 2006, approximately $15 billion was spent on seismic improvements for utilities and transportation systems in the San Francisco Bay area.
Examples of these capital improvements include the following: the California Department of Transportation has rebuilt or retrofitted most of the major roadway bridges in the San Francisco Bay area; the Bay Area Rapid Transit system is currently undergoing a major seismic retrofit of its entire system; and seismic improvements have also been made to gas, electric, and water systems. Building codes, the minimum acceptable standards that are used to regulate the design and construction of the built environment, play an important role in improving the resilience of structures to natural hazards. Because states and localities have the authority to adopt building codes, these codes vary throughout the country. Some states choose to adopt statewide building codes that can help ensure a minimum level of building quality. However, statewide building codes do not necessarily apply to all structures—for example, they may apply only to state-owned buildings, schools, or other public buildings. In Iowa, statewide building codes apply only to structures built with state funds or owned or leased by the state. Additionally, states may give local communities the right to opt out of a statewide code and adopt a local building code. Many states and localities base their codes on model building codes that are developed on a national level by groups made up of building industry and other professionals. These codes reflect a consensus among building experts on the appropriate level of protection that codes should provide. Model codes incorporate disaster-resistant standards for hazards such as wind, earthquakes, floods, and wildland fires and are specific to the type of structure being built (e.g., new commercial and residential buildings, existing buildings that undergo renovation or alteration, and structures built in wildland-urban interface areas). As of January 2007, the majority of states had adopted some version of a model building code for commercial and residential structures.
Additionally, some local jurisdictions within states that have not adopted a statewide model code have adopted model codes on their own. However, according to an insurance services company that assesses the effectiveness of communities’ building code enforcement throughout the country, about 5,000 communities in the United States have not adopted building codes. Model building codes can be modified by state and local authorities to reflect local hazard risks and can specify more rigorous requirements to address these hazards. For example, in the hurricane-prone state of Florida, the Florida Building Code requires that structures built in areas vulnerable to high winds have windows and glass doors designed to withstand the impact of wind-borne debris, or that they use shatter-resistant glass or shutters. The California Building Code incorporates, among other things, specific seismic requirements to make structures more resilient to earthquakes and requirements for fire-resistant roofing, windows, and building exteriors for structures in wildland-urban interface areas. Building officials, mitigation experts, and industry groups all commented that enforcing building codes is critical in order to effectively mitigate natural hazard losses. Studies revealed that damage from the 1994 Northridge earthquake would have been reduced if the seismic provisions of building codes had been properly enforced. Reports following Hurricane Andrew in 1992 also found that inadequate code enforcement resulted in significant losses from the hurricane. Enforcement of building codes generally occurs at the local building department level and ensures that builders comply with the standards specified in the codes so that structures provide the level of protection for which they were designed.
Enforcement includes activities such as approving permits for new structures or structures undergoing renovation, reviewing construction plans for compliance with the building code, and inspecting construction sites to ensure that construction is proceeding according to the reviewed plan. When a community adopts and enforces revised building codes designed to improve structural integrity, losses from natural hazard events can be reduced. State and local building code and other local government officials told us that structures built to newer building code standards performed better during natural hazard events than those built to earlier standards. For example, building code officials in California explained that when reviewing the damage from the Northridge earthquake, they found that older buildings suffered substantially more damage than newer buildings built using seismic mitigation measures. Figure 7 shows the damage resulting from Hurricane Charley in 2004 to two structures in Florida that are located across the street from one another. The structure on the left, an older building, was completely destroyed, while the structure on the right, whose construction was subject to a recent building code, performed well during the storm. Specific construction, design, and landscaping features can be incorporated into structures built in hazard-prone areas to improve their ability to withstand a natural hazard event. For example, in areas subject to hurricane-level winds, construction features such as hurricane straps, which provide extra support in connecting the roof to a building, can help reduce damages during hurricanes (fig. 8). For homes built in wildland-urban interface areas, landscaping techniques can be applied around the perimeter of a structure.
By managing the vegetation and reducing or eliminating flammable materials within 30 to 100 feet of a structure, property owners and developers can create a defensible space that substantially reduces the likelihood that a wildland fire will damage or destroy the structure (fig. 9). Existing structures can also be made more resistant to natural hazards through retrofitting, or modifying a structure to improve its resistance to hazards. While retrofitting may not bring a structure up to the most recent building code standards, it will help existing structures better withstand natural hazard events. Retrofitting techniques exist for a number of natural hazards, such as hurricanes, earthquakes, floods, and wildland fires. For example, garage doors are vulnerable to hurricane winds because of their large size and the relatively weak materials used to construct them. If a garage door fails during a storm, it can lead to more severe damages to a home, especially to the roof. However, these doors can be reinforced with horizontal or vertical bracing. Additionally, homes can be retrofitted by anchoring the structure to its foundation, reducing the possibility that the house will move off its foundation during an earthquake or hurricane. Hazard control structures such as levees, dams, and floodwalls provide protection in flood-prone areas and can reduce associated losses. These structures are typically used to protect existing at-risk developments, such as buildings located in floodplains, and provide a certain level of flood protection. They may not provide absolute flood protection, however, because a flood could exceed the intended level of protection, as Hurricane Katrina’s storm surge did, allowing floodwater to breach the levees and floodwalls in New Orleans. Nevertheless, flood control structures can prevent extensive damage in many cases.
For example, the city of Napa developed a flood protection project that incorporates several flood mitigation activities and a combination of hazard control structures, including levees, floodwalls, and other structures, to achieve a 100-year flood protection level. The project is expected to save $26 million annually in flood damage costs when it is completed. According to a city of Napa official, had the project been completed, it would have prevented all of the damage caused by the New Year’s Eve flood in 2005. Protecting, restoring, and enhancing natural protective features such as floodplains, wetlands, beaches, dunes, and natural drainage ways can also help mitigate a community’s vulnerability to damage from storms and associated flooding. Floodplains and wetlands, for instance, serve as natural buffers, absorbing excess rainfall and limiting the effects of floods on the built environment. Coastal wetlands can absorb storm surge, while beaches and dunes provide physical protection from storm surge. Over time, some of these natural storm protection features have suffered damages and losses as a result of development pressures. A number of communities have adopted policies designed to protect these natural protective features. For example, federal, state, and local government resources have been spent in Florida to restore and enhance these natural protective features, including beach and dune restoration.
However, despite these methods of encouraging mitigation, several impediments exist to implementing mitigation activities. For example, mitigation efforts are often constrained by conflicting local interests, cost concerns, and a lack of public awareness of the risks of natural hazards and the importance of mitigation. Federal, state, and local agencies are taking steps to provide direct assistance to some communities to reduce losses from natural hazards, although not all communities have the means to take full advantage of this assistance. This assistance can help communities overcome some of the impediments they face in undertaking mitigation activities by, for example, providing funding to assist in implementing mitigation activities and offering incentives to encourage mitigation activities. At the federal level, FEMA provides funding and technical assistance to help communities reduce losses from natural hazards. To provide states with an incentive to undertake more proactive mitigation activities, DMA 2000 authorized additional HMGP funds for states with a declared disaster area if the state has prepared a more advanced hazard mitigation plan. States that demonstrate that they have integrated their hazard mitigation plans with other state or regional planning (e.g., comprehensive and capital improvement plans); effectively administer, implement, and assess existing mitigation programs; and are committed to a comprehensive state mitigation program receive additional funding to conduct mitigation activities. According to FEMA officials, as of May 2007, only 11 states had completed advanced mitigation plans and were eligible for this additional funding. With the exception of the flood mitigation grant programs, FEMA’s grant programs generally do not specify the hazards that communities must mitigate or the types of activities they must undertake but instead leave these decisions to local communities.
For example, in Oklahoma, state officials decided to focus their attention on saving lives during tornado events and developed the Safe Room Program. Using FEMA HMGP funds from a tornado event in 1999, the state offered refunds of up to $2,000 for home owners who built safe rooms in their homes. Some local community hazard mitigation officials with whom we met, however, said that the HMGP application process is complex and time- and resource-intensive, and that long delays can occur in receiving mitigation funds. Delays in receiving grant funds can lead to additional obstacles for local communities. One local mitigation official told us that delays in receiving grant funds prevent the city from being more cost-effective in terms of mitigation. She stated that it would be most effective to conduct mitigation activities immediately after a storm event, when damages are being repaired, rather than waiting for HMGP funds to become available. According to FEMA, while states have up to 1 year from the date of a disaster declaration to apply for HMGP funds, the approval process can begin much earlier following a disaster if state and local officials have previously identified viable mitigation projects that are consistent with state and local mitigation plans. Although mitigation grant funds may be available to communities, not all communities are able to capitalize on these opportunities. For example, most of FEMA’s grant programs fund up to 75 percent of the mitigation project costs and require local communities to provide the remainder of the funds needed for mitigation projects. Oklahoma state emergency management officials with whom we met noted that although local communities might have several mitigation programs available to them, often communities do not have the resources needed to provide their share of the cost.
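The cost-share arithmetic behind the local-match burden noted above is simple but worth making explicit: with FEMA covering up to 75 percent of project costs, a community must still produce the remaining 25 percent. The sketch below illustrates this; the $200,000 project cost is a hypothetical figure, not drawn from the report.

```python
# Illustrative sketch of the FEMA mitigation grant cost share described in
# the text: most FEMA grant programs fund up to 75 percent of project costs,
# and the community provides the remainder.
FEDERAL_SHARE = 0.75  # maximum federal share under most FEMA grant programs

def cost_split(project_cost, federal_share=FEDERAL_SHARE):
    """Return (federal_funds, local_match) for a mitigation project."""
    federal = project_cost * federal_share
    return federal, project_cost - federal

# Hypothetical $200,000 safe-room or retrofit project:
fed, local = cost_split(200_000)
print(f"federal: ${fed:,.0f}, local match: ${local:,.0f}")
```

Even at this ratio, the local match can be out of reach for smaller communities, which is the resource constraint the Oklahoma officials describe.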
The officials further commented that this problem tends to affect many of the smaller communities in the state and that these communities should be careful not to commit themselves to too many mitigation projects. FEMA also offers support to communities by providing technical assistance on hazard mitigation, offering guidance on how communities can develop hazard mitigation plans and identify the areas most at risk from hazards. For example, FEMA developed and provides training on a loss estimation software program (i.e., HAZUS-MH) that analyzes potential losses caused by floods, hurricanes, and earthquakes, which communities use to determine where to focus their mitigation efforts. FEMA also provides information directly to help residents and business owners choose the type of flood insurance policy that best suits their needs through its FloodSmart Web site and a marketing program aimed at increasing flood insurance coverage nationwide. In addition, FEMA provides multihazard design, construction, and retrofit guidance at no cost for various stakeholders, including design professionals, local officials, homebuilders, home owners, and other building owners. A number of other federal agencies assist communities in reducing their risk from natural hazards. These agencies generally focus their programs on a specific hazard or hazardous area and work with communities to reduce their natural hazard risks. For example, at the federal level, five wildland fire management agencies work to manage losses resulting from wildland fires by providing grants or other kinds of assistance to help reduce fuels on private land. Through grant programs, these agencies provide funding to state forestry agencies and local fire departments for equipment, training, risk assessment, fire prevention work, and public information and education activities. Similarly, NOAA assists U.S. coastal states through financial and other types of assistance to protect the nation's coastal communities. 
By partnering with states and local authorities, NOAA helps communities conduct coastal hazards planning and administer state or local land-use programs that guide more prudent development in hazardous coastal areas. Other federal agencies offer a number of programs that can be used to address communities' natural hazard mitigation needs. For example, the Secretary of the Department of Housing and Urban Development has flexibility to use Community Development Block Grant program funds when available to assist communities recovering in presidentially declared disaster areas. These activities can include the acquisition and reconstruction of properties damaged by a natural hazard event. State and local governments often have their own programs to promote mitigation that can operate alongside federal programs, including direct subsidies for mitigation activities and services that promote mitigation. Because state and local governments determine the types of programs they implement, the programs can be tailored to focus on a specific local hazard. Examples from communities we visited include the following: The Florida Department of Financial Services operates the My Safe Florida Home Program to help Florida residents identify ways to strengthen their homes to reduce damages from hurricanes. The program offers a free home inspection to home owners who meet income and other eligibility requirements to help them identify appropriate mitigation techniques and provides matching grants of up to $5,000 to make the recommended mitigation improvements. The city of Berkeley, California, encourages private property owners to conduct seismic retrofit activities by allowing property owners to use a portion of the transfer tax on the sale of a property to fund seismic retrofit work. If owners choose not to use this portion of the tax to fund retrofit activities for their property, it goes to the city instead. 
The city also subsidizes mitigation by waiving building permit fees on seismic retrofit projects. The Boulder County Land Use Department assists home owners’ associations by providing grants to conduct fuel management in neighborhoods that are at high risk from wildland fires. The grant recipients reduce their wildland fire risk by cutting tree limbs and clearing other debris from their properties, and the waste is chipped and used to heat county office buildings. Insurance premium discounts can promote mitigation by rewarding property owners for actions they take to reduce the effects of natural hazards. At the federal level, the NFIP Community Rating System (CRS) encourages communities to reduce their flood risks by engaging in floodplain management activities. CRS provides discounts on flood insurance for individuals in communities that establish floodplain management programs that go beyond the minimum requirements of NFIP. Depending on the level of activities that communities undertake in four areas—public information, mapping and regulatory activities, flood damage reduction, and flood preparedness—communities are categorized into 1 of 10 CRS classes. A Class 1 rating provides the largest flood insurance premium reduction (45 percent) to communities, while a community with a Class 10 rating receives no insurance premium reduction. Mitigation officials with whom we spoke said they believe that the CRS insurance discounts are an effective means of encouraging communities that participate in NFIP to undertake more aggressive flood mitigation. For example, an official from the Palm Beach County Division of Emergency Management noted that the county’s CRS rating of 6 entitles flood insurance policyholders in all 37 jurisdictions in the county to a 25 percent reduction in their flood insurance premiums. 
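The class-to-discount relationship just described can be sketched with a simple function. This is an illustrative assumption rather than FEMA's official schedule: it assumes a uniform 5-percentage-point step per class, consistent with the Class 1 (45 percent) and Class 10 (no discount) endpoints above, while actual CRS discounts also vary with factors such as whether a property lies in a special flood hazard area. The function name is ours.

```python
def crs_discount(crs_class):
    """Approximate flood insurance premium discount (in percent) for a CRS class.

    Assumes a uniform 5-percentage-point step per class, consistent with the
    Class 1 (45 percent) and Class 10 (0 percent) endpoints described above.
    FEMA's actual discount schedule also varies with other factors, such as
    whether a property lies in a special flood hazard area.
    """
    if not 1 <= crs_class <= 10:
        raise ValueError("CRS classes run from 1 (best) to 10 (no discount)")
    return (10 - crs_class) * 5

# Under this assumed schedule, improving from Class 7 to Class 5 raises a
# policyholder's discount by 10 percentage points (from 15 to 25 percent).
```

Under this assumed schedule, each two-class improvement is worth 10 percentage points of premium discount.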
A city of Napa official said that one of the goals of the Napa River Flood Protection Project is to improve the city of Napa's CRS rating from a Class 7 to a Class 5—a change that would increase the flood insurance policyholder discount by an additional 10 percent. Although these discounts are available, less than 5 percent of the communities participating in NFIP participate in the CRS program. Furthermore, CRS classes 1 through 4 each contain only one community. Of these four communities, Roseville, California, has a Class 1 rating and is the only community in the United States eligible for the maximum flood insurance premium discount of 45 percent. According to FEMA, approximately 1,055 communities will have flood insurance discounts beginning October 1, 2007, which represents about two-thirds of NFIP flood insurance policies. States and communities can also provide opportunities for property owners to receive insurance premium discounts by participating in the Building Code Effectiveness Grading Schedule (BCEGS™) program, which was developed by ISO. Through the program, communities are assessed on the building codes they have adopted, amendments to those codes, and how well the codes are enforced. The BCEGS™ program places particular emphasis on reducing losses caused by natural hazards, especially losses caused by hurricanes, tornadoes, and earthquakes. Once assessed, communities receive a BCEGS™ classification, which is provided to insurers to use as an underwriting tool. Insurance companies can voluntarily opt to use this information to offer rate discounts to property owners who live in these communities. According to the officials who developed the program, however, data are not available on the extent to which it is being used as an underwriting tool. The officials also commented that they do not believe many insurance companies are using it for this purpose. Some states also use insurance discounts to promote mitigation. 
In Florida, private insurance companies are required by law to offer a discount for structures that incorporate wind mitigation components. In California, state law requires the California Earthquake Authority (CEA)—a privately financed but publicly managed state agency—to offer a 5 percent discount on retrofitted homes that were built before 1979 and that meet other specifications. However, according to information provided by CEA, only about 12 percent of California residents have earthquake insurance. In addition, the CEA Mitigation Program Coordinator stated that it is unclear to what extent insurance premium discounts are an incentive to encourage individual home owners to undertake earthquake mitigation activities. Also, city officials with whom we met in Florida said that discounts are not very effective for creating incentives for home owners because of the increasing insurance premiums in that state. For example, according to the Florida Financial Services Commission, the largest private insurer in Florida increased its rates by 66 percent in 2006. Individuals and communities must understand the hazards that pose a risk to them and the options for reducing those risks in order to make informed decisions not only about mitigation but also about where to live, purchase property, or locate a business or critical facility. Several state and local officials told us that individuals are often unaware of the risks they face. For example, one county mitigation official in Florida explained that the state's population continues to grow and that most of the new residents were unfamiliar with the state's hazard risks and mitigation options because they come from out of state. Public education and training campaigns help to ensure that communities and individuals receive adequate information on the hazards they face as well as the options for reducing their risk. 
Education and outreach programs are valuable components of mitigation programs and can take many forms, including distributing educational materials to individuals, organizing community events that discuss mitigation options, and incorporating hazard information into school curriculums. A number of entities conduct education campaigns on natural hazards for a variety of audiences—the public, home owners, business owners, builders, and developers. For example, the Firewise Communities program, which is made up of nongovernmental organizations and federal agencies, educates home owners about steps they can take to protect their homes from wildland fires and state and local officials about how they can help educate home owners. The program is also used to educate developers who are building homes in the wildland-urban interface about the various landscaping and other mitigation features that can be incorporated into developments to help reduce the risk of damage due to wildland fires. In addition to large national programs, we observed a variety of different public education campaigns at the state and local level during our field work. For example: The city of Deerfield Beach, Florida, created a nonprofit organization to educate city residents on how to mitigate hurricane risks. The nonprofit is based in the Disaster Survival House, a home that was built by a major insurance company and donated to the city to show how a house can be built to withstand a catastrophic hurricane. The house serves as an educational center for schoolchildren and the public and as a showcase of building techniques and mitigation measures for builders and home owners. Tulsa, Oklahoma, conducts an annual public outreach campaign using information displays and brochures that are placed throughout the area in fast food restaurants. 
The brochures outline hazards that pose a risk to the community, such as tornadoes, floods, and wildland fires, and provide information on how individuals can protect themselves and their property. When communities take actions to increase public awareness of the hazards citizens face and the options available to reduce them, communities may be more likely to take progressive actions to solve hazard problems. For example, when citizens in Napa, California, were educated about the flood hazard in the community and the options being proposed to address the risk, the community voted to increase the sales tax to fund the local portion of a flood mitigation project. The city of Berkeley, California—another community that has undertaken considerable public education and outreach efforts—has the highest percentage of seismically retrofitted buildings in the San Francisco Bay area. The city has also passed a number of bond initiatives to fund mitigation activities and has been successful in recruiting residents to assist in promoting mitigation activities. However, public awareness alone cannot always overcome some of the difficulties communities face in promoting mitigation, such as a lack of the funding needed to undertake mitigation activities and individuals' perception that a disaster will not happen in their community. Hazard mitigation goals and local economic interests often conflict, and the resulting tension can often have a profound effect on mitigation efforts. As we have previously reported, local governments may be reluctant to take actions to mitigate natural hazards for a number of reasons, such as local sensitivity to such measures as building code enforcement and land-use planning and the conflict between hazard mitigation and development goals. 
For example, community goals such as building housing and promoting economic development may be higher priorities than formulating mitigation regulations that may include restrictive development regulations and more stringent building codes. In particular, local government officials we contacted commented that developers often want to increase growth in hazard-prone areas (e.g., along the coast or in floodplains) to support economic development. These areas are often desirable for residences and businesses, and such development increases local tax revenues but is generally in conflict with mitigation goals. For instance, during our visit to Tulsa, Oklahoma—a community that has repeatedly experienced dangerous floods—local officials expressed their opposition to a project proposed by developers to construct an island in the Arkansas River. The proposed project would create a 40-acre man-made island with residential and commercial development in the river. According to city officials, this development would be downstream from the Keystone Dam, which in the past has had to release water that has resulted in flooding downstream, and the proposed project would be located in an area that is vulnerable to such flooding. The Tulsa officials said that this project highlights the conflict between economic development and mitigation efforts, as developers are promoting the project as economic development for the city, while emergency management officials are not in favor of the project due to the potential for damage to the proposed islands and other properties downstream. Land-use planning experts told us that the short-term perspective of some local elected officials can conflict with long-term community efforts such as limiting growth in hazard-prone areas or adopting strong building codes. Political pressures can also play a large role in communities’ choice of mitigation activities. 
National building code officials stated that in some communities, exemptions and variances to existing building codes are made because of political pressure. For example, mitigation experts commented that because of political pressures in Florida, counties located in the Panhandle were originally exempt from stricter statewide building codes for hurricane protection. The exemption was removed from law at the end of the 2006 Florida Legislative session, and buildings in these counties now have to comply with the more stringent hurricane protection requirements of the Florida Building Code. Additionally, in some communities political support for implementing mitigation activities is lacking. For example, during our field work in Colorado, officials told us that while some communities in the state have adopted model building codes, many jurisdictions are "home rule" communities that often resist federal and state regulations, which local citizens view as government intervention. Federal, state, and local officials all cited the importance of political support in implementing mitigation actions and said that without political support, the amount of mitigation activities that occur would be limited. Local communities may encounter difficulties in implementing and maintaining mitigation-related policies due to cost concerns. Local communities can incur large expenses in implementing certain mandatory mitigation requirements, such as hazard mapping, land-use planning, and local ordinances to address natural hazard risks. For example, the California Seismic Hazards Mapping Act requires cities and counties to use seismic hazard zone maps in their land-use and building permit process. 
However, according to a 2005 American Planning Association report on landslide hazards and planning, local planning and building officials have been apprehensive about the financial costs of compliance, which requires the use of hazard maps, regional and site-specific hazard assessments, and amendments to local regulations. Additionally, maintaining mitigation-related policies can be difficult for communities because of the costs and resources involved. For example, the process of updating local building codes is resource intensive, and although newer codes may provide better protection from natural hazards, local communities may choose not to adopt them because of the associated expenses (i.e., the adoption and implementation process and the training of building code officials and inspectors on the updated code). Further, information on local natural hazard risks may need to be updated periodically, a process that can be time consuming and expensive. The Oklahoma Water Resources Board floodplain manager told us that updating floodplain maps to reflect changes in local development is expensive because it could require hiring outside engineering contractors. Financial constraints may also limit communities' decisions to eliminate or limit development in hazard-prone areas. For example, an effective way for communities to eliminate development in high-risk areas is to acquire land and retain it for open space. However, property acquisition is expensive and can require long-range planning, multiple funding sources, and political support. Communities, particularly those dependent on new development for economic growth, can also face resistance to limiting the amount of development that is allowed to occur in hazard-prone areas and may be hesitant to impose strong mitigation requirements. 
For example, implementing density restrictions that reduce the amount of development that can occur in a hazard-prone area can result in a perceived or real decrease in the value of land and make the area less attractive for development. Private property owners are also influenced by cost considerations when deciding whether to implement hazard mitigation. For example, many home owners may be reluctant to pay for the additional costs of features that exceed local building codes, such as reinforced concrete walls, fire-resistant building materials, and flood-proofing features, all of which add to the cost of building a home. According to building experts, for most home owners and potential home buyers cost is the primary factor in deciding whether to include mitigation features in new or existing homes. Officials from the National Association of Homebuilders told us that the economic cost of mitigation measures should be considered, because every $1,000 increase in median home prices can price about 240,000 home buyers out of the market. During our field work in Lehigh Acres, Florida, officials from the Institute for Business & Home Safety (IBHS) told us that not all new home buyers were willing to pay the additional costs of incorporating mitigation measures, especially first-time buyers. IBHS has developed standards for building hurricane-resistant homes. According to IBHS officials, incorporating these standards can add about 10 to 15 percent to the total cost of building a home. The officials also added that the fact that appraisers often do not factor the added costs of mitigation features into the appraised home value is another impediment to mitigation that needs to be addressed. FEMA officials pointed out that, in addition to the cost of mitigation features, the benefits they provide should be communicated to individuals when they purchase a home. 
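As a rough illustration of the cost consideration IBHS officials described, the sketch below applies their cited 10 to 15 percent premium to a hypothetical home construction cost; the function name and the dollar figure are our assumptions, not IBHS data.

```python
def mitigation_cost_range(base_home_cost, low_pct=0.10, high_pct=0.15):
    """Estimate the added cost of building a new home to hurricane-resistant
    standards, using the 10 to 15 percent premium cited by IBHS officials.

    base_home_cost is a hypothetical construction cost; actual premiums vary
    by home design and location.
    """
    return base_home_cost * low_pct, base_home_cost * high_pct

# For a hypothetical $250,000 home, the standards would add roughly
# $25,000 to $37,500 to construction costs.
low, high = mitigation_cost_range(250_000)
```

A premium of this size also interacts with the affordability figure above: even modest mitigation packages can exceed the $1,000 increments that price buyers out of the market.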
For existing buildings, the high cost of retrofitting has also been cited as an impediment to implementing mitigation measures. In 1986, California enacted a law that required local governments in high seismic regions to inventory unreinforced masonry buildings, which were known to perform poorly during earthquakes, and to establish a program for reducing losses from these buildings. According to an estimate prepared by a California Seismic Safety Commission structural engineer, about two-thirds of over 25,000 unreinforced masonry buildings that have been inventoried in California have been retrofitted or demolished. However, about 8,000 buildings in high seismic regions have not been retrofitted, primarily because of the high cost of retrofitting. For example, the cost to retrofit an average-size 10,000-square-foot building is about $400,000. As a result, some buildings that do not generate sufficient income to pay for the cost of retrofitting have been left vacant. Further, a study that assessed the risks and losses of potential earthquakes in the New York, New Jersey, and Connecticut region determined that retrofitting thousands of buildings in New York would be "impractical and economically unrealistic." This conclusion was reached despite the fact that New York City faces moderate seismic risk and contains a large number of unreinforced masonry buildings used primarily as housing or for commercial purposes. In 1995, New York City passed its first seismic building code, which helps ensure that new construction meets seismic standards. However, because these standards do not apply to buildings built prior to 1995, even a moderate earthquake could cause much damage to the existing building stock. Building code officials and others with whom we spoke told us that improvements are needed to address the lack of rigorous enforcement of building codes in the United States. 
According to ISO officials, of approximately 19,000 communities assessed through the BCEGS™ program, only 5 communities have received the highest classification, which indicates exemplary commitment to building code enforcement. The ISO officials also commented that building departments in most of the communities they review conduct more inspections per day than is feasible for rigorous code enforcement. National building code officials told us that many local building departments do not have adequate funding and staffing levels to conduct proper code enforcement. Additionally, they commented that low funding levels can affect the amount of training local building inspectors receive and thereby reduce their ability to enforce the code. Efforts to adopt new mitigation activities and strategies have been constrained by the general public's lack of awareness and understanding of natural hazards and risk. Individuals also often have the misperception that natural hazard events will not occur in their community and are not interested in learning about the likelihood of an event occurring. For example, in California—where public perceptions of natural hazard risk are high—some mitigation measures have been implemented, such as strengthening transit systems, bridges, and highways. However, in other parts of the country, where seismic risk is high but damaging earthquakes occur less frequently (e.g., New Madrid seismic zone), public awareness of the risk is lower, and fewer mitigation measures are in place. Additionally, land-use experts and mitigation officials told us that it is often difficult for the public to perceive natural hazard risk or believe that a natural hazard event will occur in their community. 
However, public skepticism is significantly reduced immediately following natural hazard events, and mitigation activities are often conducted during such periods—for example, the adoption of more stringent building codes after Hurricane Katrina and the seismic retrofitting requirements approved after major earthquakes in California. Limited public awareness may also be a result of the complexity of the information that is needed for individuals to understand their hazard risks. Local community decision makers may not fully understand the science involved in predicting the probability of natural hazard events such as earthquakes, making it difficult for a community to develop appropriate mitigation plans. For example, USGS officials cited the complexity of geologic science as a challenge to communicating information on hazards. The officials also said that the ability of decision makers to develop mitigation strategies for their communities depended on the availability of appropriate and easily understandable information. As a result, programs to improve public awareness and education are long-term and require sustained effort. Collaboration among federal, state, and local agencies as well as nongovernmental stakeholders on natural hazard mitigation efforts tends to occur on a hazard-specific basis, typically after a disaster, or through informal methods. These efforts include developing national mitigation strategies or interagency programs dedicated to reducing losses from particular natural hazards. In addition, as a way to promote collaboration among all mitigation stakeholders, the federal government develops partnerships with state and local governments, professional associations, nongovernmental groups, businesses, academia, and individual community members—partnerships that are critical to the success of any mitigation program. 
Although the current approach includes some key practices on collaboration, it is fragmented and does not provide a comprehensive strategic framework that combines both pre- and postdisaster mitigation activities. Without such a framework, the federal government may not be effectively identifying and managing all natural hazard risks nationwide. Mitigation efforts often involve many federal agencies that have defined missions and different programs to achieve mitigation goals related to a specific hazard. Successful mitigation efforts require collaboration not only among federal agencies but also with state and local government agencies and a variety of nongovernmental entities, because natural hazard mitigation activities are primarily implemented at the state and local level. Accordingly, participation and, ultimately, buy-in from a broad range of stakeholders—including state and local agencies, businesses, professional associations, nonprofit organizations, academia, and members of the community—are vital to the success of any mitigation effort. We identified a variety of ways that federal agencies collaborate with each other and with nonfederal stakeholders. The collaboration efforts are often aimed at establishing approaches to working together; clarifying priorities, roles, and responsibilities; and aligning resources to accomplish common outcomes. First, consistent with key practices in collaboration, federal agencies involved in mitigation create hazard-specific strategies and programs for reducing losses from specific natural hazards. These strategies and programs detail the roles and responsibilities for the federal agencies involved in reducing hazard losses and show how the agencies will work together to achieve that goal. 
For example: The National Landslide Hazards Mitigation Strategy, which was developed in 2003 by USGS, recognized that while there are many stakeholders involved in landslide mitigation in the United States, there is little collaboration on mitigation activities. The strategy recommends that collaboration be improved among federal, state, and local agencies in order to (1) establish more effective partnerships with the academic and private sectors and (2) better leverage resources. To eliminate duplication of efforts, the strategy names the federal agencies responsible for leading each activity—a key practice in collaboration. The strategy addresses the need for increased public awareness and education about landslides and names FEMA and USGS as the agencies responsible for leading the development of information and education programs. The 10-year Comprehensive Strategy for reducing wildland fire risks to communities and the environment involves many federal and nonfederal stakeholders. This strategy provides a collaborative framework to assist communities in implementing mitigation measures. Both the Departments of Agriculture and the Interior worked with the other stakeholders to develop a plan to implement the strategy. The plan identifies tasks associated with reducing losses from wildland fires, including identifying the level at which collaboration should occur as well as the stakeholders responsible for leading the task. For example, one task was to compile examples of local zoning ordinances and state planning efforts that have successfully reduced risks associated with wildland fire. The plan also specifies that collaboration should occur at the national, state, and local levels and that the National Association of Counties and the National Association of State Foresters would have leadership roles. 
The National Earthquake Hazards Reduction Program (NEHRP) is an interagency program created to reduce risks to life and property in the United States that result from earthquakes. In 2004, Congress established the Interagency Coordinating Committee to plan, manage, and coordinate the NEHRP. This committee consists of FEMA, USGS, the National Science Foundation, the Office of Science and Technology Policy, the Office of Management and Budget, and, as the lead agency, the National Institute of Standards and Technology. The agencies are working together to develop a NEHRP strategic plan and a coordinated interagency budget. The program also seeks to improve earthquake hazards identification and risk assessment methods. Each agency's mission, although separate and distinct, has been integrated into a complementary program that seeks to promote earthquake mitigation. Second, federal agencies typically collaborate on mitigation activities after a disaster event in the areas that have been impacted. For example, the Department of Homeland Security issued the National Response Plan in December 2004, intending it to be an all-discipline, all-hazards plan establishing a single, comprehensive framework for the management of domestic incidents when federal involvement is necessary. The plan contains one component on postdisaster mitigation that addresses long-term community recovery and mitigation, but does not address predisaster mitigation efforts. Specifically, the plan provides a collaborative mechanism to assist communities that have been impacted by a disaster to (1) identify appropriate federal programs and agencies, (2) avoid duplication of assistance, and (3) ensure follow-through of hazard mitigation efforts. These efforts can include developing long-term recovery plans for communities impacted by a disaster that identify priorities in rebuilding and improving hazard resistance in new structures. 
FEMA is responsible for leading the effort to implement this component and is supported by six primary federal agencies as well as a number of other agencies that have a supportive role. Third, agency officials said that they also use a variety of informal mechanisms to collaborate on their mitigation activities. These officials described frequent, informal communication on specific projects or initiatives, such as e-mails, teleconferences, and discussions at regional or local conferences or workshops. For example, FEMA officials said that officials from other agencies such as the Departments of Transportation and Energy frequently consult with FEMA staff on flood mitigation in compliance with an executive order on floodplain management. NOAA agency officials also commented that collaboration occurs when agency officials assist in conducting training for other federal agencies. For example, the National Weather Service provides guest instructors for a week-long FEMA training course for emergency managers. Finally, federal agencies collaborate through partnerships with local government leaders, volunteer groups, the business community, and individual citizens to implement mitigation activities. Several state and local officials with whom we spoke cited Project Impact—one of FEMA's previous predisaster mitigation programs—as a model in helping to develop broad community support for predisaster mitigation activities. Local officials from each of the former Project Impact communities that we visited emphasized that the strength of the original public-private partnerships formed during Project Impact was a key reason their communities' mitigation efforts have been sustainable. The program provided small, one-time grants directly to communities and empowered leaders in those communities to build effective partnerships and encourage private sector financial participation before disasters occurred. 
For example, Deerfield Beach, Florida, the first Project Impact community, established a program that created partnerships with FEMA, IBHS, and four local lending institutions to provide interest-free loans from local banks to help community businesses conduct wind resistance mitigation activities, such as installing impact-resistant glass and shutters, to reduce the effects of high winds. While collaboration on specific hazard mitigation efforts occurs in a variety of ways, the current approach does not provide a strategic framework for coordinating nationwide pre- and postdisaster mitigation. In the past, such strategic frameworks were developed by FEMA and by the Subcommittee on Disaster Reduction, in which FEMA participates. These frameworks shifted the focus from reacting to natural disasters to proactive, coordinated pre- and postdisaster mitigation efforts. In 1998, we reported that FEMA had taken a strategic approach to mitigation, in part through the development of a National Mitigation Strategy. This strategy called for strengthening partnerships among all levels of government and the private sector and set forth major initiatives, along with timelines, in a number of areas, including leadership and coordination. For example, the strategy required that within 1 year mitigation considerations be integrated into the management and operation of all federal programs that affect the built environment and that a Federal Interagency Mitigation Task Force convene to more closely coordinate federal mitigation authorities, among other things. While these strategies helped to provide a strategic framework for natural hazard mitigation in the past, the current approach tends to occur on a hazard-specific basis, typically after a disaster event, or through informal methods and does not create a similar framework. Various provisions of federal laws have stressed the importance of national hazard mitigation.
For example, recognizing that expenditures for federal disaster assistance were increasing without the likelihood of corresponding reductions in losses from natural disasters, DMA 2000 provides, among other things, a framework for linking pre- and postdisaster mitigation initiatives with public and private interests to ensure an integrated, comprehensive approach to disaster loss reduction. It requires establishing a federal interagency task force for the purpose of “coordinating the implementation of predisaster hazard mitigation programs administered by the Federal Government.” DMA 2000 further requires that the Administrator of FEMA serve as the chairperson of the task force, indicating the leadership role FEMA is expected to provide nationwide for all hazards. DMA 2000 also recognizes the need for nonfederal stakeholder involvement by including state and local governments in the task force. While this task force has yet to be created, its stated purpose, “coordinating the implementation of predisaster hazard mitigation programs administered by the Federal Government,” is consistent with the need for the creation of a comprehensive national strategic framework for mitigation. The Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Reform Act) requires major changes to FEMA that are designed to increase the effectiveness of preparedness and response to catastrophic disasters.
The act defines emergency management as “the governmental function that coordinates and integrates all activities necessary to build, sustain, and improve the capability to prepare for, protect against, respond to, recover from, or mitigate against threatened or actual natural disasters, acts of terrorism, or other man-made disasters.” Moreover, the act defines FEMA’s primary mission as reducing the loss of life and property “by leading and supporting the nation in a risk-based comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation.” While the current approach to collaboration on natural hazard mitigation often includes some key practices for collaboration, it tends to occur on a hazard-specific basis, typically after a disaster event, or through informal methods. This fragmented approach does not provide a comprehensive strategic framework for federal agencies and other stakeholders to collectively work toward accomplishing common national hazard mitigation goals. For example, while federal agency officials with whom we spoke discussed a variety of specific mitigation activities, it was unclear how these efforts fit into a comprehensive strategic framework for mitigation. Similar to the framework provided by the National Response Plan for managing domestic incidents and the frameworks provided in past national mitigation strategies, a comprehensive national framework for pre- and postdisaster mitigation would, among other things, define common national goals, establish joint strategies, leverage resources, assign roles and responsibilities, and develop performance measures and reporting requirements. 
A comprehensive strategic framework focused on mitigation activities that occur both before and after natural hazard events could strengthen FEMA’s ability to assess whether all mitigation efforts are working together to accomplish national hazard mitigation goals that adequately prepare the nation for its natural hazard risks. Without such a framework, the federal government may not be effectively identifying and managing all natural hazard risks nationwide. Moreover, the current approach does not ensure that collective mitigation efforts are working together in a manner that leverages resources and develops synergies across various hazard-specific mitigation efforts. No state in the country is immune to the risk from a natural hazard, be it floods, hurricanes, earthquakes, tornadoes, or wildland fires, and large percentages of the U.S. population live in areas susceptible to more than one of these hazards. In particular, the coastal areas of the country, which contain a large portion of the nation’s population and have experienced substantial growth, are susceptible to many natural hazards. In addition, the implications of climate change, which may lead to more frequent storms and sea-level rise, increase the vulnerability and risks associated with hazard events. All of these factors present increasing risks to life and property throughout the United States and increasing expenditures by the federal government in the wake of a disaster. As seen in recent years, the level of destruction that a natural hazard event can cause can be devastating to those who experience it and pose major challenges to the federal government, which plays a key role in disaster recovery and assistance. As more people migrate to hazard-prone areas such as Florida and California, the need for a comprehensive strategic framework for natural hazard mitigation takes on new significance because these areas are subject to multiple hazards.
Additionally, according to the National Institute of Building Sciences, hazard mitigation activities have been found to be a sound investment: every $1 FEMA provides communities for mitigation activities results in an average of $4 in future benefits. While the federal government plays a key role in natural hazard mitigation efforts, measures such as hazard mitigation planning, development regulations, and the adoption and enforcement of strong building codes are ultimately the responsibility of local jurisdictions, which make decisions on the extent of development and on how and where new developments are built. Therefore, finding ways to effectively partner with and develop buy-in from state and local governments is critical to any federal mitigation effort. Federal agencies, particularly FEMA, play an important role in establishing and promoting collaboration on natural hazard mitigation and in developing a national mitigation framework that includes nonfederal stakeholders as active participants in efforts to reduce losses from natural hazards. While the current approach to collaboration on natural hazard mitigation involves a mix of methods and may be useful on a hazard-specific basis or for a particular hazard event, this fragmented approach does not take full advantage of synergies that may exist among the different mitigation stakeholders. For example, given that many natural hazards are related, such as hurricanes and flooding or wildland fires and landslides, there may be opportunities to leverage resources and for stakeholders responsible for specific hazards to collaborate with other stakeholders on related hazards, such as coordinating earthquake mitigation efforts with tsunami mitigation efforts.
The creation of a strategic framework for pre- and postdisaster mitigation among all stakeholders nationwide could help overcome some of the challenges faced in implementing mitigation efforts and could help define common national goals for mitigation, identify risks, establish joint strategies across federal and state programs, leverage resources across agencies, assign lead roles and responsibilities, and include mechanisms to monitor, evaluate, and report on results. FEMA’s new organizational changes and responsibilities under the Post-Katrina Reform Act call for the agency to provide federal leadership in promoting such a strategic framework for mitigation. The federal government could benefit from a comprehensive strategic framework, which could help to effectively identify national natural hazard risks, minimize the effects of hazards before they occur, and reduce overall future hazard losses to the nation. We recommend that the Administrator of FEMA, in consultation with other appropriate federal agencies, develop and maintain a national comprehensive strategic framework for mitigation that incorporates both pre- and postdisaster mitigation efforts. The framework should include items such as common mitigation goals; performance measures and reporting requirements; the role of specific activities in the overall framework; and the roles and responsibilities of federal, state, and local agencies and nongovernmental stakeholders. We provided a draft of this report to FEMA, NOAA, USGS, the Corps of Engineers, and the Forest Service for review and comment. The Department of Homeland Security and the Department of the Interior provided written comments on behalf of FEMA and USGS, respectively, that are discussed below and presented in appendixes II and III.
FEMA generally agreed with our conclusions and recommendation but noted that we did not adequately reflect the success of the floodplain management requirements associated with NFIP, including the community rating system. We added language that FEMA suggested on the estimated annual losses avoided because of NFIP floodplain management activities. However, analyzing the overall effectiveness of floodplain management activities was beyond the scope of this report. FEMA also noted that it supported a national comprehensive strategic framework and setting common mitigation goals. However, the agency disagreed with setting performance measures and reporting requirements for a process that takes place largely at the local level. The letter stated that it would be inappropriate for FEMA or any other federal agency to dictate mitigation activities, outside of ensuring that mitigation plans and grant applications met the eligibility requirements defined in authorizing statutes and regulations. We agree that local communities are responsible for identifying natural hazard risks and for setting mitigation priorities. But mitigation activities could benefit from having federal agencies set performance measures to ensure that crosscutting agency goals are consistent and that program efforts are mutually reinforcing. With such practices in place, FEMA, in consultation with other federal agencies, could partner with and develop buy-in from state and local agencies and nongovernmental stakeholders. Trend analysis and reporting requirements, both of which FEMA cited as more appropriate measures, would be consistent with our recommendation and could be effective in measuring the success of a comprehensive strategic mitigation framework. FEMA also commented that it participates on the Subcommittee on Disaster Reduction, which coordinates the scientific and technical aspects of risk identification and reduction across the federal government.
The letter states that the subcommittee accomplishes several of the objectives identified in our recommendation. We added language clarifying that FEMA participates on the subcommittee. We cite the subcommittee as an example of a governmentwide group that has shifted its focus from reacting to natural disasters to proactively coordinating pre- and postdisaster mitigation efforts using science and technology. We see the subcommittee as an important component of, but not a substitute for, a national comprehensive strategic framework for mitigation. USGS wrote that the agency agreed with the need for a comprehensive strategy and emphasized that USGS believed in the importance of developing such a strategy together with FEMA and in equal partnership with the other agencies. A jointly developed national strategy could play a clear role in preparing for and dealing with natural hazards. USGS added that it would be helpful if the report identified the challenges associated with developing such a national framework. While identifying all the challenges was beyond the scope of this report, we did illustrate several of the obstacles to implementing a comprehensive strategic framework for mitigation, including the fragmented federal approach to mitigation, which does not take full advantage of synergies that may exist among mitigation stakeholders. Additionally, USGS stated that it would be helpful if we identified those programs that have delivered the best value in mitigation and the areas in which mitigation practices would be most effective. We agree with USGS that a discussion of successful mitigation practices is important. In this report, for example, we describe a variety of mitigation activities that exist to reduce the risk of losses from natural hazards, including hazard mitigation planning, the adoption and enforcement of more rigorous building codes, and the use of hazard control structures.
The Corps of Engineers, Forest Service, and NOAA orally commented that they agreed with the report but did not comment specifically on the recommendation. Technical comments provided by the agencies have been incorporated in this report where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the Chairman and Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman of the House Committee on Financial Services; the Secretaries of Agriculture, Commerce, Defense, Homeland Security, and the Interior; and other interested parties. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or [email protected] if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to examine the (1) natural hazards that present a risk to life and property in the United States, areas that are most susceptible to them, and factors that may be increasing these risks; (2) mitigation activities that reduce losses from natural hazards; (3) impediments to implementing and methods for encouraging mitigation activities; and (4) collaborative efforts of federal agencies and other stakeholders to promote mitigation. To examine the natural hazards that present a risk to life and property in the United States, we used a comprehensive list of natural hazards compiled by the Federal Emergency Management Agency (FEMA) in a guidance document for individual and community preparedness.
The list includes floods, tornadoes, hurricanes, thunderstorms and lightning, winter storms and extreme cold, extreme heat, earthquakes, volcanoes, landslides and debris flow (mudslides), tsunamis, fires, and wildland fires. For the purposes of our analysis, we did not include fires because most home and other structure fires are human induced. To identify areas that are most susceptible to natural hazard risks, we created national, county-level maps that show the level of risk for each hazard. We limited our analysis to the 50 states and the District of Columbia. Additionally, we did not create maps for all natural hazards and narrowed the list of hazards we mapped to the following: floods, hurricanes, earthquakes, wildland fires, tornadoes, and landslides. These natural hazards were chosen based on the following criteria. First, we limited the scope of the word “property” in FEMA’s definition of hazard mitigation—actions taken to reduce or eliminate the long-term risks to life and property from the effects of hazards—to the built environment and, therefore, did not map hazards that result mainly in losses to agriculture. Next, we focused on hazards for which available mitigation activities are long-term loss reduction measures and not those that primarily focus on monitoring, warning systems, emergency response, and evacuations. Finally, we focused on the natural hazards that represent large annual losses in the United States (where data were available). To develop our natural hazard risk maps, we used data from a variety of sources. We used historical hazard data from 1980 to 2005 as a representation of current hazard risk for floods, hurricanes, and wildland fires. For tornadoes, we limited our analysis to historical data from 1980 to 2004. Earthquake and landslide risks were mapped based on the level of future risk of an event occurring. The data used for each of the maps are explained below.
We determined these data sources to be sufficiently reliable for our purposes.

Floods – As a proxy for flood risk, we used FEMA data on counties that experienced major disaster declarations for flooding.

Hurricanes – We obtained data on historical hurricane tracks from the National Oceanic and Atmospheric Administration’s (NOAA) Coastal Services Center, which show the track for the eye of a hurricane, to develop the hurricane hazard map. In order to identify counties affected by a hurricane, we used a buffer of 50 miles around the data representing the eye of a hurricane. The 50-mile estimate was based on 29 miles for the eye of the storm and an additional 21 miles for the outer area of high winds. This is roughly equivalent to NOAA’s terminology of a hurricane “strike” or “near strike.”

Earthquakes – We obtained data representing seismic risk from the U.S. Geological Survey’s (USGS) National Seismic Hazard Mapping Project. Risk is depicted as an acceleration value. Areas with a value of less than 5 were considered low risk, 5 to 15 medium risk, and over 15 high risk.

Wildland fires – We used the Federal Wildland Fire Occurrence Data maintained by the Desert Research Institute to represent wildland fire risk. These data are based on information compiled by the U.S. Forest Service (Forest Service), Bureau of Land Management, Bureau of Indian Affairs, National Park Service, and the Fish and Wildlife Service. Some records in the database were missing latitudinal and longitudinal information. Therefore, our map only includes fires for which this information was available.

Tornadoes – We obtained historical tornado data from NOAA’s National Weather Service that are available for download from www.NationalAtlas.gov. We limited our analysis to tornado events rated F3 (severe) or higher on the Fujita Scale, because F3 or higher tornadoes cause significant property damage.
Not all records in the database included latitude and longitude information; therefore, our map only includes those tornadoes for which latitude and longitude data were available. Additionally, data were only available through 2004.

Landslides – We obtained data representing the susceptibility and incidence of landslides from USGS. We used USGS’ classification of areas of high and moderate risk and overlaid it with county data. Counties that contained areas of both high and moderate risk were reclassified as “combination of high and moderate risk.”

We also reviewed the annual losses associated with some of these natural hazards, when data were available, and factors that may be increasing natural hazard risk. As there is no comprehensive source of loss information for natural hazards, we used estimates developed by the federal agencies responsible for overseeing each natural hazard. We adjusted the loss estimates from some historical hazard events to 2006 dollars using the Consumer Price Index for All Urban Consumers. To identify factors that may be increasing natural hazard risks, we reviewed 2000 U.S. Census data, population information, and studies on climatology. We also reviewed previous congressional reports and our reports and spoke with officials at several federal, state, and local agencies. To examine the mitigation activities that exist to reduce losses from natural hazards and the performance of these activities, we conducted site visits to four judgmentally selected states: California, Colorado, Florida, and Oklahoma.
We selected the locations based on the following criteria: (1) the locations represent a variety of natural hazard risks and geographic locations; (2) Florida and Oklahoma have Enhanced State Hazard Mitigation Plans and, therefore, comprehensive mitigation programs, and California has an Enhanced State Hazard Mitigation Plan that is pending FEMA approval; and (3) mitigation experts and federal agency officials recommended locations to visit within these states. In each of these states, we met with state and local officials to discuss mitigation activities that had been undertaken or are planned and examples of the performance of some of the activities. Many of the local communities we visited were part of FEMA’s former Project Impact program. We also visited the Natural Hazards Research Center in Boulder, Colorado, to review an extensive collection of research on natural hazards. Additionally, we reviewed FEMA’s Best Practices and Case Studies Portfolio; prior GAO reports; state and local hazard mitigation plans submitted to FEMA under DMA 2000; and numerous other reports, summaries, and studies on natural hazard mitigation activities. We also discussed the types of existing mitigation activities with officials from federal agencies that oversee natural hazard mitigation programs and with mitigation and planning experts. In addition, we met with industry, nonprofit, and professional organizations; model building code organizations; an insurance services company; and a risk modeling firm to discuss the variety of mitigation methods that exist. To examine impediments that exist to the implementation of mitigation activities and methods used to promote mitigation, we reviewed congressional reports, our previous reports and testimonies, and background documents related to each of the natural hazards within the scope of our review.
These included policy and research documents on floods, hurricanes, earthquakes, wildfires, tornadoes, and landslides, as well as documents on other natural hazards. We also gathered and analyzed information, documents, reports, and publications from each of the federal agencies we contacted, including FEMA, USGS, NOAA, the Corps of Engineers, and the Forest Service. In addition, we reviewed information provided by professional associations, advocacy groups, nonprofit organizations, and knowledgeable individuals from the academic and research communities, such as the American Society of Civil Engineers, the American Planning Association, Wildlife Federation, and the University of Colorado at Boulder. To examine the various approaches used to encourage mitigation, we conducted interviews, conference calls, and site visits with federal, state, and local officials and members of the academic community to obtain detailed information and specific examples of methods used to promote mitigation. To examine collaborative efforts of federal agencies and other stakeholders to promote mitigation, we conducted literature reviews of prior reports on natural hazard mitigation; land-use, research, and policy documents from federal, state, and local government agencies; and documentation from nongovernmental stakeholders. We also reviewed our previous reports on federal agency collaboration and summarized the results of these reports to identify elements for effective collaboration among federal agencies and between federal agencies and nonfederal participants. In addition, we consulted with individuals knowledgeable about natural hazards, mitigation, and the role of federal agencies in promoting collaboration on natural hazard mitigation activities. To examine ways federal agencies and nonfederal participants collaborate on mitigation, we interviewed federal officials involved in mitigation-related activities, state and local officials, and industry association representatives.
To identify the federal agencies that play key roles in natural hazard mitigation, we considered federal agencies that promote mitigation through (1) hazard mitigation grant programs; (2) technical assistance; (3) regional risk assessments, including mapping of hazard risk; (4) information dissemination; and (5) programs that specifically target the reduction of risks caused by natural hazards. We also determined that it was not feasible to include all federal agencies that play a role in mitigation within the scope of this review and excluded agencies that play supplementary, support, and/or secondary roles in natural hazard mitigation. Based on these considerations, we subsequently contacted five federal agencies as part of this review, including FEMA, NOAA, USGS, the Corps of Engineers, and the Forest Service. We conducted our work in Baltimore, Maryland; Berkeley, Napa, San Francisco, and Sacramento, California; Boston, Massachusetts; Boulder, Denver, Golden, and Fort Collins, Colorado; Deerfield Beach, Miami, Tampa, and West Palm Beach, Florida; Oklahoma City and Tulsa, Oklahoma; and Washington, D.C., between March 2006 and June 2007 in accordance with generally accepted government auditing standards. In addition to the person named above, Andy Finkel, Assistant Director; Nicholas Alexander; Emily Chalmers; Leo Chyi; Isidro Gomez; Eileen Harrity; Christine Houle; Kai-Yan Lee; John Mingus; Marc Molino; Omyra Ramsingh, and William Sparling made key contributions to this report.
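As an illustrative aside, the data-processing rules described in the mapping methodology above (the seismic acceleration thresholds, the 50-mile hurricane buffer, and the adjustment of historical losses to 2006 dollars) can be sketched in Python. This is only a conceptual sketch, not the analysis actually performed: all numeric inputs below are placeholders, and the buffer check tests individual track points rather than GIS-buffering a full hurricane track.

```python
import math

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius, for great-circle distance


def seismic_risk_category(acceleration):
    """Classify seismic risk from a USGS acceleration value per the
    report's thresholds: under 5 is low, 5 to 15 medium, over 15 high."""
    if acceleration < 5:
        return "low"
    if acceleration <= 15:
        return "medium"
    return "high"


def within_hurricane_buffer(county_point, eye_point, buffer_miles=50.0):
    """Haversine great-circle test for the 50-mile buffer around a
    hurricane-eye track point (29 miles for the eye plus 21 miles for
    the outer area of high winds). Points are (lat, lon) in degrees."""
    lat1, lon1 = map(math.radians, county_point)
    lat2, lon2 = map(math.radians, eye_point)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance = 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))
    return distance <= buffer_miles


def to_2006_dollars(loss, cpi_event_year, cpi_2006):
    """Adjust a historical loss estimate to 2006 dollars using the ratio
    of Consumer Price Index for All Urban Consumers (CPI-U) values.
    The CPI arguments are placeholders, not actual index figures."""
    return loss * (cpi_2006 / cpi_event_year)


# Placeholder inputs for illustration only.
print(seismic_risk_category(12.0))                            # medium
print(within_hurricane_buffer((29.0, -90.0), (29.3, -90.2)))  # True
print(to_2006_dollars(1_000_000, cpi_event_year=130.7, cpi_2006=201.6))
```

In the actual analysis these steps were performed with GIS tools over county polygons and full track geometries; the functions above simply make the stated thresholds and formulas concrete.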
The nation has experienced vast losses from natural hazards. The potential for future events, such as earthquakes and hurricanes, demonstrates the importance of hazard mitigation--actions that reduce the long-term risks to life and property from natural hazard events. GAO was asked to examine (1) natural hazards that present a risk to life and property in the United States, areas that are most susceptible to them, factors that may be increasing these risks, and mitigation activities that reduce losses; (2) methods for encouraging and impediments to implementing mitigation activities; and (3) collaborative efforts of federal agencies and other stakeholders to promote mitigation. To address these objectives, GAO collected and analyzed hazard data, reviewed population information, conducted site visits to locations with comprehensive mitigation programs, and collected information from relevant agencies and officials. Natural hazards present risks to life and property throughout the United States. Flooding is the most widespread and destructive of these, resulting in billions of dollars in property losses each year. Hurricanes, earthquakes, and wildland fires also pose significant risks in certain regions of the country. Tornadoes, landslides, tsunamis, and volcanic eruptions can also occur in some areas. Population growth in hazard-prone areas, especially coastal areas, is increasing the nation's vulnerability to losses because more people and property are at risk. Climate change may also affect the frequency and severity of future natural hazard events. A variety of natural hazard mitigation activities exist, which are primarily implemented at the state and local levels and include hazard mitigation planning; strong building codes and design standards; and hazard control structures (e.g., levees).
For example, strong building codes and design standards can make structures better able to withstand a hazard event, and hazard control structures help protect existing at-risk areas. Public education, financial assistance, and insurance discounts can help encourage mitigation. For example, federal, state, and local governments provide financial assistance to promote mitigation, and insurance discounts can encourage the use of mitigation measures. However, significant challenges exist to implementing natural hazard mitigation activities. These challenges include the desire for local economic development, often in hazard-prone areas, which may conflict with long-term mitigation goals, and the cost of mitigation, which may limit the activities that occur. FEMA, other federal agencies, and nonfederal stakeholders have collaborated on natural hazard mitigation, but the current approach is fragmented and does not provide a comprehensive national strategic framework for mitigation. Collaboration typically occurs on a hazard-specific basis, after a disaster, or through informal methods. A comprehensive framework would help define common national goals, establish joint strategies, leverage resources, and assign responsibilities among stakeholders.
Several actions—both by the Service and the Congress—led us to remove the Service’s transformation efforts and long-term outlook from our high-risk list. In 2001, we made this designation because the Service’s financial outlook had deteriorated significantly. The Service had a projected deficit of $2 billion to $3 billion, severe cash flow pressures, debt approaching the statutory borrowing limit, cost growth outpacing revenue increases, and limited productivity gains. Other challenges the Service faced included liabilities that exceeded assets by $3 billion at the end of fiscal year 2002, major liabilities and obligations estimated at close to $100 billion, a restructuring of the workforce due to impending retirements and operational changes, and long-standing labor-management relations problems. We raised concerns that the Service had no comprehensive plan to address its financial, operational, or human capital challenges, including its plans for reducing debt, and that it did not have adequate financial reporting and transparency that would allow the public to understand changes in its financial situation. Thus, we recommended that the Service develop a comprehensive plan, in conjunction with other stakeholders, that would identify the actions needed to address its challenges and provide publicly available quarterly financial reports with sufficient information to understand the Service’s current and projected financial condition. As the Service’s financial difficulties continued in 2002, we concluded that the need for a comprehensive transformation of the Service was more urgent than ever and called for Congress to act on comprehensive postal reform legislation. The Service’s basic business model, which assumed that rising mail volume would cover rising costs and mitigate rate increases, was outmoded as First-Class Mail volumes stagnated or deteriorated in an increasingly competitive environment.
Since 2001, the Service’s financial condition has improved and it has reported positive net incomes for each of the last 4 years (see fig. 1). The Service has made significant progress in addressing some of the challenges that led to its high-risk designation. For example, the Service’s management developed a Transformation Plan and has demonstrated a commitment to implementing this plan. Since our designation in 2001, the Service has:

Reduced workhours and improved productivity: The Service has reported productivity gains in each year. According to the Service, its productivity increased by a cumulative 8.3 percent over that period, which generated $5.4 billion in cost savings. The Service reported eliminating over 170 million workhours over this period, with a 4.5 million workhour reduction in fiscal year 2006.

Downsized its workforce: The Service has made progress in addressing some of the human capital challenges associated with its vast workforce by managing retirements, downsizing, and expanding the use of automation. At the end of fiscal year 2006, the Service reported that it had 696,138 career employees, the lowest count since fiscal year 1993. Attrition and automation have allowed the Service to downsize its workforce by more than 95,000, or about 10 percent, since fiscal year 2001.

Enhanced the reporting of its financial condition and outlook: In response to recommendations we made regarding the lack of sufficient, timely, and publicly available periodic information on its financial condition and outlook between publications of its audited year-end financial statements, the Service enhanced its financial reporting and now provides regular updates to the financial statements on its Web site. The Service instituted quarterly financial reports, expanded the discussion of financial matters in its annual report, and upgraded its Web site to include these and other reports in readily accessible file formats. 
The 2003 pension act provided another key reason why we removed the high-risk designation. Much of the Service’s recent financial improvement was due to a change made by this law that reduced the Service’s annual pension expenses. Between fiscal years 2003 and 2005, the Service saved a total of $9 billion in pension expenses compared with the annual expenses that would have been paid without the statutory change. This change enabled the Service to significantly cut its costs, achieve record net incomes, repay over $11 billion of outstanding debt, and delay rate increases until January 2006. The Service’s improved financial performance and condition during this time was also aided by increased revenue generated from growing volumes of Standard Mail (primarily advertising) and rate increases in June 2002 and January 2006. Standard Mail volumes grew by almost 14 percent from fiscal year 2001 to 2006, and Standard Mail revenues, when adjusted for inflation, increased by over 11 percent during the same time period. In June 2002, the Service implemented a rate increase (the price of a First-Class stamp increased from 34 cents to 37 cents) to offset rising costs. In January 2006, the Service implemented another rate increase (the price of a First-Class stamp increased from 37 cents to 39 cents) to generate the additional revenue needed to set aside $3.0 billion in an escrow account in fiscal year 2006, as required by the 2003 pension law. Revenues in fiscal year 2006 increased by about 4 percent from the previous year, due largely to the January 2006 rate increase. The passage of the recent postal reform legislation was another reason why we removed this high-risk designation. Although noticeable improvements were being made to the Service’s financial, operational, and human capital challenges, we had continued to advocate the need for comprehensive postal reform legislation. 
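The two stamp-price changes above imply modest percentage increases. As an illustrative check (the prices are from the testimony; the arithmetic below is ours):

```python
# Illustrative check of the two First-Class stamp increases cited above.
# Prices are in cents, as stated in the testimony.
def pct_increase(old_price: float, new_price: float) -> float:
    """Percentage increase from old_price to new_price, one decimal place."""
    return round((new_price - old_price) / old_price * 100, 1)

june_2002 = pct_increase(34, 37)  # June 2002: 34 -> 37 cents
jan_2006 = pct_increase(37, 39)   # January 2006: 37 -> 39 cents
print(june_2002, jan_2006)        # 8.8 5.4
```

So the June 2002 change was roughly an 8.8 percent rise in the stamp price, and the January 2006 change roughly 5.4 percent.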
After years of thorough discussion, Congress passed a comprehensive postal reform law in late December 2006 that provides tools and mechanisms that can be used to establish an efficient, flexible, fair, transparent, and financially sound Postal Service. Later in this statement, I will discuss how some specific tools and mechanisms can be used to address the continuing challenges facing the Service. The Service’s financial condition for fiscal year 2007 has been affected by the reform act, which, along with the May change in postal rates, will continue to affect its near- and long-term financial outlook. The Service will benefit financially from an increase in postal rates in May averaging 7.6 percent. Key steps in the rate process are provided in appendix I. The Service estimates that it will gain an additional $2.2 billion in net income in fiscal year 2007 as a result of the new rates. The recent rate case, in addition to generating additional revenues, took significant strides in aligning postal rates with the respective mail handling costs. Some rate increases are particularly large; for example, some catalog rates may increase by 20 to 40 percent. The new rate structure is aimed at providing the necessary incentives to encourage efficient mailing practices (e.g., shape, weight, handling, preparation, and transportation) and thereby encourage smaller rate increases and steady mail volumes in the longer run. At the beginning of fiscal year 2007 (before the enactment of the reform law), the Service expected to earn $1.7 billion in net income, which reflected the additional revenue the Service estimated it would receive from the May increase in postal rates. The Service, however, planned to increase its outstanding debt of $2.1 billion at the end of fiscal year 2006 by an additional $1.2 billion in fiscal year 2007 in order to help fund the expected $3.3 billion escrow requirement for 2007. 
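The pre-enactment fiscal year 2007 plan figures above can be tallied in a short illustrative calculation (amounts in billions of dollars; the figures are from the testimony, the arithmetic is ours):

```python
# Pre-enactment FY2007 plan, in billions of dollars (figures from the testimony).
debt_end_fy2006 = 2.1        # outstanding debt at the end of FY2006
planned_new_borrowing = 1.2  # additional FY2007 borrowing planned to help fund escrow
expected_net_income = 1.7    # net income the Service expected before enactment

planned_debt_end_fy2007 = round(debt_end_fy2006 + planned_new_borrowing, 1)
print(planned_debt_end_fy2007)  # 3.3
```

That is, the plan implied year-end debt of about $3.3 billion; as discussed below, the act’s payment changes later pushed projected borrowing higher.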
Since enactment of the reform law, the Service has updated its expense projections. While the Service’s total expenses for fiscal year 2007 have been affected by passage of the act, those expenses not directly related to the act and total revenues have tracked closely to plan. The Service currently estimates an overall fiscal year 2007 net loss of $5.2 billion, largely due to changes in either projected or actual Postal Service payments resulting from the act, including:

Accelerating funding of the Service’s retiree health benefit obligations: Beginning this fiscal year, the Postal Service must make the first of 10 annual payments into a newly created Postal Service Retiree Health Benefits Fund (PSRHBF) to help fund the Service’s significant unfunded retiree health obligations. The 2007 payment of $5.4 billion is due to be paid by September 30. The Service has accrued half of this expense—$2.7 billion—during the first 6 months of the fiscal year and will accrue $1.35 billion in each of the remaining 2 quarters.

One-time expensing of funds previously set aside in escrow and eliminating future escrow payments: The act requires the Service to transfer the $3.0 billion it escrowed in fiscal year 2006 to the PSRHBF, which the Service recognized as a one-time expense in the first quarter of fiscal year 2007. The reform act also eliminated future escrow payments required under the 2003 pension law, including the $3.3 billion payment scheduled for fiscal year 2007.

Transferring funding for selected military service benefits back to the Treasury: The act significantly reduced the Service’s civil service pension obligations by transferring responsibility for funding civilian pension benefits attributable to the military service of the Service’s retirees back to the Treasury Department, where it had been prior to enactment of the 2003 pension law. The reform act requires that any overfunding attributable to the military benefits as of September 30, 2006, be transferred to the PSRHBF by June 30, 2007.

Eliminating certain annual Civil Service Retirement System (CSRS) pension funding requirements: The act eliminated the requirement that the Service fund the annual normal cost of its civil service employees and the amortization of the unfunded pension obligation that existed prior to transferring the military service obligations to the Treasury Department. The Service estimates that it will save $1.5 billion in fiscal year 2007 from eliminating the annual pension funding requirements and amortization payments.

The result of these payments is a net increase in retirement-related expenses of $3.9 billion, which is $600 million higher than the expected $3.3 billion escrow payment for 2007 that was eliminated. Thus, the Service is planning to borrow $600 million more than initially budgeted to cover this shortfall. This increase is anticipated to result in the Service’s borrowing $1.8 billion in fiscal year 2007, which would bring its total outstanding debt to $3.9 billion by the end of the fiscal year. The Service has identified other factors and uncertainties that, depending on how results vary from budgeted estimates, could have a favorable or unfavorable impact on the Service’s projected net loss for fiscal year 2007. For example, volumes and revenues may be affected by a continued slowdown in the U.S. economy or unanticipated consequences of the recent rate decision. The Service anticipates that economic growth will pick up in the third and fourth quarters of this year, but a slowdown may depress volume growth below projected levels for the rest of the year. Furthermore, the unusual nature of the rate case creates uncertainties for the Service that may affect its financial results. 
These uncertainties include how the Service and its customers will respond to the:

limited implementation times—the 2-month implementation period (the Postal Service Board of Governors decision on March 19, 2007, stated that most new rates would become effective on May 14, 2007) leaves little time for the Service to educate the public and business mailers on the new rate changes and to allow mailers sufficient time to adjust their mailing practices and operations accordingly;

delayed implementation times—how mailers and the Service will be affected by the delay in implementing new Periodical rates until mid-July;

magnitude of certain restructured rates, particularly for those specific types of mail that will experience rather significant increases, and the related impact on volumes and revenues; and

unfamiliarity with restructured rates—the prices for many popular products, such as certain types of First-Class Mail, will experience significant shifts based on the shape of the mail. For example, figure 2 shows how the cost of First-Class Mail will differ based on its shape.

Moreover, the Service’s expense projections may be affected by rising fuel prices, given the Service’s vulnerability in this area, or by outcomes of the outstanding contract negotiations with two of its major labor unions that differ from projected levels. Although the extent to which these factors and uncertainties will affect the Service’s financial condition for fiscal year 2007 is not known, they may affect its subsequent financial outlook. For example, if the Service finds that its financial performance and condition are weakening—either through revenue shortfalls or expense increases—it may decide to file another rate increase later this year. The new postal reform law provides new opportunities to address challenges facing the Service as it continues its transformation in a more competitive environment with a variety of electronic alternatives for communications and payments. 
Specifically, it provides tools and mechanisms to address the challenges of generating sufficient revenues, controlling costs, maintaining service, providing reliable performance information, and managing its workforce. Effectively using these tools will be key to successfully implementing the act and addressing these challenges. The Service continues to face challenges in generating sufficient revenues as First-Class Mail volume continues to decline and the mail mix changes. First-Class Mail, historically the class of mail with the largest volumes and revenues, saw volume shrink by almost 6 percent from fiscal year 2001 to 2006. The trends for First-Class Mail and Standard Mail, which currently combine for about 95 percent of mail volumes and 80 percent of revenues, experienced a historical shift in fiscal year 2005. For the first time, Standard Mail volumes exceeded those for First-Class Mail (see fig. 3). This shift has major revenue implications because:

First-Class Mail generates the most revenue and is used to finance most of the Service’s institutional (overhead) costs (see fig. 4).

Standard Mail generates less revenue per piece compared to First-Class Mail, and it takes about two pieces of Standard Mail to make the same contribution to the Service’s overhead costs as one piece of First-Class Mail.

Standard Mail is a more price-sensitive product compared to First-Class Mail because it competes with other advertising media. Also, because advertising, including Standard Mail, tends to be affected by economic cycles to a greater extent than First-Class Mail, a larger portion of the Service’s mail volumes is more susceptible to economic fluctuations.

The act provides tools and mechanisms that can help address these revenue challenges by promoting revenue generation and retention of revenues. The act established flexible pricing mechanisms for the Service’s competitive and market-dominant products. 
For example, it allows the Service to raise rates for its market-dominant products, such as First-Class Mail letters, Standard Mail, and Periodicals, up to a defined price cap; exceed the price cap should extraordinary or exceptional circumstances arise; and use any unused rate authority within 5 years. For its competitive products, such as Priority Mail or Expedited Mail, the Service may raise rates as it sees fit, as long as each competitive product covers its costs and competitive products as a whole cover their attributable costs and make a contribution to overhead. The act also allows the Service to retain any earnings, which may promote increased financial stability. First, to the extent the Service can generate net income to retain earnings, this could enhance its ability to weather economic downturns. For example, a slow economic cycle or sudden increase in fuel prices might not necessitate an immediate rate increase if sufficient retained earnings exist to cover related shortfalls. Furthermore, to the extent the Service can retain earnings as liquid assets, it may reduce the Service’s reliance on borrowing to offset cash shortfalls. The Service has stated that it will take on debt to cover cash shortfalls in fiscal year 2007 and projects that this increase will result in $3.9 billion of outstanding debt at the end of the year (see fig. 5). Controlling debt will be important because the Service needs to operate within its statutorily set borrowing limits ($3 billion in new debt each year, and $15 billion in total debt outstanding). Reducing debt was one of the key factors we cited in removing the Service’s high-risk designation. Uncertainties related to the recent rate decision and reform law may affect the extent to which the Service is able to address its revenue-related challenges. The uncertainties include:

How will mailers and volume respond to the new rate decision’s pricing signals?

What types of innovative pricing methods will be allowed? 
How will the Service set rates under the new price cap system, and how will mailers respond to this additional flexibility?

How will the Service and mailers be able to modify their information systems to accommodate more frequent rate increases?

How will customer behavior change as prices change under the new system?

To what extent will customers’ desire for mail be affected by privacy concerns, environmental concerns, preference for electronic alternatives, or efforts to establish Do Not Mail lists?

How will the Service be able to enhance the value of its market-dominant and competitive products (e.g., predictable and consistent service, tracking and tracing capabilities, etc.)?

What will the Service do with any retained earnings (e.g., improve its capital program, save to weather downturns in the economy)?

The Service faces multiple cost pressures in the near and long term: the required multibillion-dollar payments into the PSRHBF, key cost categories experiencing above-inflation growth while the Service operates under an inflationary-based price cap, and other costs associated with providing universal postal services to a growing network—one now expanding by about 2 million new addresses each year. While the reform act takes actions that increase current costs by improving the balance of retiree health benefit cost burdens between current and future ratepayers, it also eliminates other payments and provides opportunities to offset some of these cost pressures through efficiency gains that could restrain future rate increases. It will be crucial for the Service, however, to take advantage of this opportunity and achieve sustainable, realizable cost reductions and productivity improvements throughout its networks. 
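The squeeze described above—an inflation-capped price path set against cost categories growing faster than inflation—can be sketched with hypothetical numbers. The 2.5 percent cap rate and 6 percent cost-growth rate below are assumptions chosen only to illustrate how the gap compounds; they are not figures from the testimony or the act:

```python
# Hypothetical sketch: an index of capped prices vs. a cost category growing
# faster than inflation. Both growth rates are assumed, not actual figures.
assumed_cap_rate = 0.025    # hypothetical annual CPI-based price cap
assumed_cost_growth = 0.06  # hypothetical above-inflation cost growth

price_index = 100.0
cost_index = 100.0
for year in range(5):  # project five years forward
    price_index *= 1 + assumed_cap_rate
    cost_index *= 1 + assumed_cost_growth

print(round(price_index, 1), round(cost_index, 1))  # 113.1 133.8
```

Under these assumptions the cost index pulls about 20 points ahead of the capped price index within five years, which is why the testimony stresses efficiency gains as the way to operate under the cap.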
Personnel expenses (which include wages, employee and retiree benefits, and workers’ compensation) have consistently accounted for nearly 80 percent of annual operating expenses, even though the Service has downsized its workforce by over 95,000 employees since fiscal year 2001. The Service’s personnel expenses have grown at rates exceeding inflation since fiscal year 2003 and are expected to continue dominating the Service’s financial condition. In particular, growth in retiree health benefit costs has, on average over the last 5 years, exceeded inflation by almost 13 percent each year. This growth is expected to continue because of (1) rising premiums, growth in the number of covered retirees and survivors, and increases in the Service’s share of the premiums; and (2) the Service’s continued payment of the employer’s share of the health insurance premiums of its retirees, along with the required payments ranging from $5.4 billion to $5.8 billion into the PSRHBF in each of the following 9 years. While we recognize the cost pressures that will be placed on the Service as it begins prefunding its retiree health benefits obligations, we continue to believe that such action is appropriate to improve the fairness and balance of cost burdens for current and future ratepayers. Furthermore, beginning in fiscal year 2017, the Service might enjoy a significant reduction in its retiree health costs if its obligations are fully funded. In addition to these personnel expenses, the Service has also experienced growth in its transportation costs that exceeded the rate of inflation in fiscal years 2005 and 2006. Transportation costs represent the second largest cost category behind compensation and benefits. These costs grew by about 11 percent from fiscal year 2005 to 2006, largely due to rising fuel costs. 
In a February 2007 report, we stated that the Service is vulnerable to fuel price fluctuations and will be challenged to control fuel costs due to its expanding delivery network and inability to use surcharges. The Service has made some progress in containing cost growth and has pledged to cut another $5 billion of costs out of its system between fiscal years 2006 and 2010 through productivity increases and operational improvements. The Service has reported productivity increases for the last 7 years, but the reported increase in fiscal year 2006 was its smallest during this period. The Service has recently had trouble absorbing gains in mail volumes while achieving targeted workhour reductions. Although the Service has reduced its workhours in 6 of the last 7 years, in fiscal year 2006 its goal was to reduce workhours by 42 million, but the Service reported a decrease of only 5 million workhours. While both the recent rate decision and reform act seek to improve efficiencies in the postal networks, these developments will pose challenges to the Service. In terms of the rate case, the Service will be challenged to modify its mail processing and transportation networks to respond to changes in mailer behaviors (e.g., in the quantity and types of mail sent and how mail is prepared) as mailers seek to minimize their rates. Furthermore, the reform act provides an opportunity to address the Service’s cost challenges because it requires the Service to develop a plan that, among other things, includes a strategy for how the Service intends to rationalize the postal facilities network and remove excess processing capacity and space from the network, as well as identifying the cost savings and other benefits associated with network rationalization alternatives discussed in the plan. 
This plan provides an opportunity to address some concerns we have raised in previous work, in which we stated that it was not clear how the Service intended to realign its processing and distribution network and workforce, and that its strategy lacked sufficient transparency and accountability, excluded stakeholder input, and lacked performance measures for results. We are currently conducting work on the Service’s progress in this area over the past 2 years and will be issuing a report this summer with updated findings. Taking advantage of the opportunities available will have a direct impact on the Service’s ability to operate under an inflationary-based rate cap, achieve positive results, and limit the growth in its debt. If the Service is unable to achieve significant cost savings, it may have to take other actions, such as borrowing an increasing amount each year to make year-end property and equipment purchases and fund its retiree health obligations. The following uncertainties may have a significant impact on the Service’s ability to achieve real cost savings and productivity gains in the future:

How will operating under a rate cap provide an incentive to control costs?

How will the Service operate under a rate cap if certain key costs continue to increase at levels above inflation (e.g., health benefit costs)?

How will the new rate designs and structure lead to efficiency improvements throughout the mail stream?

Will the Service’s implementation of its network realignment result in greater cost savings and improved efficiency?

Will the Service achieve its expected return on investment and operational performance when it deploys the next phase of automated flat sorting equipment?

How will the Service’s financial situation be impacted when the 10-year scheduled payments into the PSRHBF are completed? 
Will the balance of the PSRHBF—which is a function of the PSRHBF’s investment returns and the growth in the Service’s retiree health obligations—be sufficient to cover the Service’s retiree health obligation by the end of fiscal year 2016?

The Service will be challenged to continue carrying out its mission of providing high-quality delivery and retail services to the American people. Maintaining these services while establishing reliable mechanisms for measuring and reporting performance will be critical to the Service’s ability to effectively function in a competitive market and meet the needs of various postal stakeholders, including:

The Service—so that it can effectively manage its nationwide service and respond to changes and/or problems in its network.

The Service’s customers (who may choose other alternatives to the mail)—so that they are aware of the Service’s expected performance, can tailor their operations to those expectations, and understand the Service’s actual performance against those targets.

Oversight bodies—so that they are aware of the Service’s ability to carry out its mission while effectively balancing costs, service needs, and the rate cap; can hold the Service accountable for its performance; and understand service performance (whether reported problems are widespread or service is getting better or worse).

The Service’s delivery performance standards and results have been a long-standing concern for mailers and Congress. We found that inadequate information is collected and made available to both the Service and others to understand and address delivery service issues. 
Specifically, the Service does not measure and report its delivery performance for most types of mail (representative measures of delivery performance cover less than one-fifth of mail volume and do not include key types of mail such as Standard Mail, bulk First-Class Mail, Periodicals, and most Package Services); certain performance standards are outdated; and progress has been hindered by a lack of management commitment and collaboration by the Service and mailers. Based on these findings, we recommended the Service take actions to modernize its delivery service standards, develop a complete set of delivery service measures, more effectively collaborate with mailers, and improve transparency by publicly disclosing delivery performance information. The Service has recently reported positive delivery results for the limited segment of mail for which it does track performance. It has reported that on-time delivery performance improved in the first quarter of fiscal year 2007 for some single-piece First-Class Mail. However, issues such as late deliveries have been reported in places such as Chicago, Los Angeles, and El Paso, and for different types of mail such as Standard Mail and Periodicals. Figure 6 shows that delivery performance in Chicago for this type of mail was worse than the national average at the end of the first quarter of this fiscal year. The reform act provides an opportunity for the Service to address this challenge by establishing requirements for maintaining, measuring, and reporting on service performance. 
Specifically, the act identified four key objectives for modern service standards:

enhance the value of postal services to both senders and recipients;

preserve regular and effective access to postal services in all communities, including those in rural areas or where post offices are not self-sustaining;

reasonably assure Postal Service customers delivery reliability, speed, and frequency consistent with reasonable rates and best business practices; and

provide a system of objective external performance measurements for each market-dominant product as a basis for measurement of Postal Service performance.

The act also required the Service to implement modern delivery performance standards, set goals for meeting these standards, and annually report on its delivery speed and reliability for each market-dominant product. Key steps specified in the act include that within 12 months of enactment (by December 2007) the Service must issue modern service standards, and within 6 months of issuing service standards the Service must, in consultation with the PRC, develop and submit to Congress a plan for meeting those standards. Furthermore, within 90 days after the end of each fiscal year, the Service must report to the PRC on the quality of service for each market-dominant product in terms of speed of delivery and reliability, as well as the degree of customer satisfaction with the service provided. These requirements provide opportunities to resolve long-standing deficiencies in this area. As the Service transitions to the new law, the following uncertainties may affect its ability to address challenges in maintaining, measuring, and reporting service performance in the future:

How will the Service implement representative measures of delivery speed and reliability within the timeframes of the reform act, while taking cost and technological limitations into account? 
How much transparency will be provided to the PRC, Congress, mailers, and the American people, including the frequency, detail, and methods of reporting?

Another challenge facing the Service is to provide reliable data to management, regulators, and oversight entities to assess financial performance. Accurate and timely data on Service costs, revenues, and mail volumes help provide appropriate transparency and accountability, so that all postal stakeholders can have a comprehensive understanding of the Service’s financial condition and outlook and of how postal rates are aligned with costs. Earlier I discussed the past issues we have raised related to the Service’s financial reporting and the improvements that the Service has recently made. We have also reported on long-standing issues of ratemaking data quality that continue to persist. The act establishes new reporting and accounting requirements that should help to address this challenge. The major change is the establishment of, and the authority provided to, the new PRC to help enhance the collection and reporting of information on postal rates and financial performance (see table 2). Service officials have acknowledged the importance of financial reporting but stated that there are cost implications associated with these improvements. The Service has recognized that it will incur costs in complying with the Securities and Exchange Commission’s (SEC) internal control reporting rules and in making the changes needed to provide separate information for competitive and market-dominant products. We have reported that significant costs have been associated with complying with the SEC’s implementing regulations for section 404 of the Sarbanes-Oxley Act, but we have also reported that costs are expected to decline in subsequent years given the first-year investment in documenting internal controls. 
As the Service transitions to these new reporting and accounting requirements, its ability to address future challenges in this area will be affected by uncertainties including:

How will the PRC use its discretion to define and implement the new statutory structure?

What criteria will the PRC use for evaluating the quality, completeness, and accuracy of ratemaking data, including the underlying accounting data and additional data used to attribute costs and revenues to specific types of mail?

How will the PRC balance the need for high-quality ratemaking data with the time and expense involved in obtaining the data?

How will the PRC structure any proceedings to improve the quality of ratemaking data and enable the Service and others to participate in such proceedings?

What proceedings might the PRC initiate to address data quality deficiencies and issues that the PRC has raised in its recent decision on the rate case?

How will the Service be affected by the costs associated with complying with the SEC rules for implementing section 404 of the Sarbanes-Oxley Act, as well as with the requirement of separate information for competitive and market-dominant products?

The Service will be challenged to manage its workforce as it transitions to operating in a new postal environment. The Service is one of the nation’s largest employers, with almost 800,000 full- and part-time workers. Personnel-related costs, which include compensation and benefits, are the Service’s major cost category and are expected to increase due to the reform legislation’s requirement to begin prefunding retiree health benefit costs. We have reported on the human capital challenges facing the Service, but have found that the Service has made progress in addressing some of these challenges by managing retirements, downsizing, and expanding the use of automation. Provisions in the reform act related to workforce management can build on these successes. 
As part of the Postal Service Plan mandated by the act, the Service must describe its long-term vision for realigning its workforce and how it intends to implement that vision. This plan is to include a discussion of what impact any facility changes may have on the postal workforce and whether the Postal Service has sufficient flexibility to make needed workforce changes. The Service, however, faces human capital challenges that will continue to affect its financial condition and outlook:

Outstanding labor agreements: Labor agreements with the Service’s four major unions expired late in calendar year 2006. In January 2007, the Service reached agreements with two of these unions, including semiannual cost-of-living adjustments (COLA) and scheduled salary increases. Labor agreements, however, remain outstanding for the other two unions, which cover over 42 percent of its career employees.

Workforce realignment: As the Service continues to make significant changes to its operations (i.e., rationalize its facilities, increase automation, improve retail access, and streamline its transportation network), it will be challenged to realign its workforce based on these changes. This challenge may become more significant as mailers alter their behavior in response to the new rate structure. These actions will require a different mix in the number, skills, and deployment of its employees, and may involve repositioning, retraining, outsourcing, and/or reducing the workforce.

Retirements: The Service expects a significant portion of its career workforce—over 113,000 employees—to retire within the next 5 years. In particular, it expects nearly half of its executives to retire during this time. The Service’s decisions regarding these retirements (that is, whether or not to fill these positions, and if so, when, with whom, and where) may have significant financial and operational effects. 
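As a rough proportion (an illustrative calculation combining the retirement figure above with the fiscal year 2006 career-employee count cited earlier in this statement), the expected retirements represent about a sixth of the career workforce:

```python
# Illustrative share of expected retirements among career employees.
# Both figures are cited in this statement; the arithmetic is ours.
career_employees_fy2006 = 696_138  # career employees at the end of FY2006
expected_retirements = 113_000     # over 113,000 expected to retire within 5 years

share_pct = round(expected_retirements / career_employees_fy2006 * 100, 1)
print(share_pct)  # 16.2
```

That scale, roughly 16 percent of career employees in 5 years, is why the Service’s decisions about filling these positions carry significant financial and operational weight.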
The following uncertainties will affect the Service's ability to address workforce-related challenges in the future:
- How will the Service be able to respond to operational changes?
- How will the Service balance the varying needs of diverse customers when realigning its delivery and processing networks?
- How will employees and employee organizations be affected by, and informed of, network changes, and how will the Service monitor the workplace environment?
- How will the resolutions of the outstanding labor agreements affect the Service's financial condition?
- How will the Service take advantage of flexibilities, such as allowing more casual workers to deal with peak operating periods?

The Postal Service, the PRC, and mailers face a challenging environment with significant changes to make in the coming months related to implementing the recent rate decision and the new postal reform law. We have identified several major issues considered significant by various postal stakeholders, as well as areas related to implementation of the law that will warrant continued oversight. Specifically, focusing attention on these issues during this important transition period will help to ensure that the new statutory and regulatory requirements are carried out according to the intent of the reform act and that the Service's future financial condition is sound.
These key issues and areas for continued oversight include:
- the effect of the upcoming rate increases and statutory changes on the Postal Service's financial condition;
- the decision by the Service whether or not to submit a rate filing under the old rate structure;
- actions by the PRC to establish a new price-setting and regulatory framework;
- the Service's ability to operate under an inflationary price cap while some of its cost segments are increasing above the rate of inflation;
- actions by the Service, in consultation with the PRC, to establish modern service standards and performance measures, and the Postal Service's plan for meeting those standards;
- the Service's ability to maintain high-quality delivery service as it takes actions to reduce costs and realign its infrastructure and workforce; and
- the PRC's development of appropriate accounting and reporting requirements aimed at enhancing transparency and accountability of the Service's internal data and performance results.

One of the most important decisions to monitor in the short term is whether or not the Service decides to file another rate increase before the new rate structure takes effect. The trade-offs involved in the Service's decision on whether to file under the new or old system include weighing the respective costs, benefits, and possible unintended consequences of the Service's need for new rates, along with the time and resources required by the Service, the PRC, and the mailing industry to proceed under either system. For example, the Service may benefit from filing under the old system because doing so would allow it to further align costs with prices before moving into price-cap restrictions. Under the old rules, the Service would have to satisfy the "break-even" requirement that postal revenues equal, as nearly as practicable, total estimated costs.
Under the new rules, the Service would have to ensure that rate increases for its market-dominant products do not exceed a cap based on the growth in the Consumer Price Index. Filing under the old system, however, could put additional strain on mailers and the PRC. In particular, the PRC would be reviewing the Service's rate submission while transitioning to its new roles and responsibilities under the legislation—establishing a new organizational structure and a new regulatory framework with new rules and reporting requirements, which must include time for public input, as well as meeting a multitude of additional requirements. Recognizing these challenges, the Chairman of the PRC has suggested, and asked for public comments on, the possibility that, rather than expending resources on extending the application of the old system, the PRC would work with the Service and mailers to implement the new regulatory system even sooner than the 18 months allotted by the new law. This action could allow the Service to implement new rates sooner under the new regulatory system, depending upon when the PRC completes its work and when the Service chooses to file new rates. The Service's decision will affect not only its financial performance and condition but also the mailing industry and the focus of the PRC. Another key provision of the law that warrants close oversight is the requirement for the Service to develop modern service standards. We are encouraged by the Service's actions to date to establish a workgroup that includes participants from the mailing industry to review and provide recommendations on service standards and measures. This workgroup is expected to complete its work in September of this year, and the Service is to make its decisions on the new service standards by December 20, 2007. The Service then has 6 months to provide Congress with a plan on how it intends to meet these standards, as well as its strategy for realigning and removing excess capacity from the postal network.
We believe this plan presents an important opportunity to increase transparency in these areas, particularly given the changes to the Service's plans for network realignment and the limited information available to the public. We will be reporting this summer on the status and results of the Service's efforts to realign its mail processing network. Finally, the PRC's role in developing reporting requirements is critical to enhancing the Service's transparency regarding its performance results. Congress was particularly mindful in crafting the reform act to ensure that the provisions for additional pricing flexibility were balanced with increased transparency, oversight, and accountability. The new law provides the regulator with increased authority to establish reporting rules and monitor the Service's compliance with service standards on an annual basis. The successful transformation of the Postal Service will depend heavily upon innovative leadership by the Postmaster General and the Chairman of the PRC, and their ability to work effectively with their employees, employee organizations, the mailing industry, Congress, and the general public. It will be important for all postal stakeholders to take full advantage of the unique opportunities that are currently available by providing input and working together, particularly as challenges and uncertainties will continue to threaten the Service's financial condition and outlook. Chairman Davis, this concludes my prepared statement. I would be pleased to respond to any questions that you or the Members of the Subcommittee may have. For further information regarding this statement, please contact Katherine Siggerud, Director, Physical Infrastructure Issues, at (202) 512-2834 or at [email protected]. Individuals making key contributions to this statement included Teresa Anderson, Joshua Bartzen, Kenneth John, John Stradling, Jeanette Franzel, Shirley Abel, Scott McNulty, and Kathy Gilhooly.
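The CPI-based price cap on market-dominant products discussed earlier can be illustrated with simple arithmetic. The sketch below is a deliberate simplification, not the actual PRC rules (which involve class-level averaging and banking of unused cap authority); the function name, rates, and rounding are hypothetical.

```python
def capped_rate(current_rate_cents: float,
                cpi_growth_pct: float,
                proposed_increase_pct: float) -> float:
    """Limit a proposed percentage rate increase to CPI growth.

    Hypothetical simplification of a price-cap regime: the allowed
    increase is the lesser of the proposed increase and CPI growth.
    """
    allowed_pct = min(proposed_increase_pct, cpi_growth_pct)
    return round(current_rate_cents * (1 + allowed_pct / 100), 1)

# Illustrative figures only: a 41-cent rate with 3 percent CPI growth.
# A proposed 5 percent increase is held to the 3 percent cap,
# while a proposed 2 percent increase passes through unchanged.
print(capped_rate(41.0, 3.0, 5.0))
print(capped_rate(41.0, 3.0, 2.0))
```

Under such a cap, the regulator's choice of index and measurement period, rather than the Service's cost estimates alone, bounds how far rates can move in any year.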
Postal Service submits proposal to Postal Rate Commission (PRC)
- Requests rate increases effective May 2007.
- Establishes pricing structure based on mail weights and shapes:
  - Revises the old structure, which was primarily weight-based.
  - Recognizes that different mail shapes have different processing costs.
  - Gives mailers an opportunity to minimize their rates by altering the shape of their mail.

2/26/07: PRC issues recommended decision on Service's proposal
- Issued after a detailed administrative proceeding involving mailers, employee organizations, consumer representatives, and competitors.
- Recommends revisions to many of the rates and rate designs submitted by the Service.
- Increases rates substantially for some types of mail; revised rates are intended to more accurately reflect costs and send proper price signals.
- Concurs with the shape-based pricing structure and, according to the PRC, the change in rates will still meet the Service's revenue needs.
- Anticipated that this would be the last rate case initiated prior to implementation of the new rate structure established under the reform legislation and explained that its recommended rates are intended to provide a sound foundation for the future.

3/19/07: Postal Service's Board of Governors issues decision to implement PRC-recommended rates
- Implements most rates effective May 14, 2007.
- Asks the PRC to reconsider some rates, most notably those for flat-sized Standard Mail, which is generally advertising and direct mail solicitations (this could lead to further changes in these rates).
- Delays rate implementation for Periodicals for over 2 months, citing reactions of magazine mailers and the publishing industry's need to update software.
- The Forever Stamp will sell at the First-Class one-ounce letter rate and will continue to be worth the price of a First-Class one-ounce letter even if that price changes.

This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
When GAO originally placed the U.S. Postal Service's (the Service) transformation efforts and long-term outlook on its high-risk list in early 2001, it was to focus urgent attention on the Service's deteriorating financial situation. Aggressive action was needed, particularly in cutting costs, improving productivity, and enhancing financial transparency. GAO testified several times since 2001 that comprehensive postal reform legislation was needed to address the Service's unsustainable business model, which assumed that increasing mail volume would cover rising costs and mitigate rate increases. This outdated model limited the flexibility and incentives the Service needed to realize sufficient cost savings to offset rising costs, declining First-Class Mail volumes, unfunded obligations, and an expanding delivery network. This limitation threatened the Service's ability to achieve its mission of providing affordable, high-quality universal postal services on a self-financing basis. This testimony will focus on (1) why GAO recently removed the Service's transformation efforts and outlook from GAO's high-risk list, (2) the Service's financial condition in fiscal year 2007, (3) the opportunities and challenges facing the Service, and (4) major issues and areas for congressional oversight. This testimony is based on GAO's past work, review of the postal reform law, and updated information on the Service's financial condition. Key actions by both the Service and Congress led GAO to remove the Service's transformation efforts and long-term outlook from its high-risk list in January 2007. Specifically, the Service developed a Transformation Plan and achieved billions in cost savings, improved productivity, downsized its workforce, and improved its financial reporting. Congress enacted a law in 2003 that reduced the Service's annual pension expenses, which enabled it to achieve record net incomes, repay debt, and delay rate increases until January 2006.
Finally, the postal reform law enacted in December 2006 provides tools and mechanisms that can be used to address key challenges facing the Service as it moves into a new regulatory and increasingly competitive environment. The two key factors that will affect the Service's financial condition for this fiscal year are the new reform law and new postal rates that go into effect in May. The reform law increases the costs of funding retiree health benefits but provides opportunities to offset some of these cost pressures through efficiency gains and eliminating certain pension payments. For the rest of the year, Service officials do not expect significant changes from its projected expenses and revenues. Other factors, such as costs for fuel or labor resolutions varying from plan, could affect the Service's projected outcome for this fiscal year. Congress's continued oversight of the Service's transformation is critical at this time of significant changes for the Service, Postal Regulatory Commission (PRC), and mailing industry. Also, key to a successful transformation is innovative leadership by the Postmaster General and the PRC Chairman and their ability to work effectively with stakeholders to realize new opportunities provided under the postal reform law. GAO has identified key issues and areas for oversight related to implementing the reform law and new rate-setting structure, as well as other challenges to ensure the Service remains financially sound.
Internet-based services using Web 2.0 technology have become increasingly popular. Web 2.0 technologies are a second generation of the World Wide Web as an enabling platform for Web-based communities of interest, collaboration, and interactive services. These technologies include Web logs (known as “blogs”), which allow individuals to respond online to agency notices and other postings; “wikis,” which allow individual users to directly collaborate on the content of Web pages; “podcasting,” which allows users to download audio content; and “mashups,” which are Web sites that combine content from multiple sources. Web 2.0 technologies also include social media services, which allow individuals or groups of individuals to create, organize, edit, comment on, and share content. These include social networking sites (such as Facebook and Twitter) and video-sharing Web sites (such as YouTube). While in the past Internet usage concentrated on sites that provide online shopping opportunities and other services, according to the Nielsen Company, social media-related sites have moved to the forefront. In June 2010, it reported that, on average, Internet users worldwide spent one out of every 4 1/2 minutes of their time online on social media sites. The use of social networking services now reportedly exceeds Web-based e-mail usage, and the number of American users frequenting online video sites has more than tripled since 2003. The Nielsen Company reported that during the month of April 2010, the average user spent nearly 6 hours on social media-related sites. Facebook is a social networking site that lets users create personal profiles describing themselves and then locate and connect with friends, co-workers, and others who share similar interests or who have common backgrounds.
Individual profiles may contain—at the user’s discretion— detailed personal information, including birth date, home address, telephone number, employment history, educational background, and religious beliefs. Facebook also allows any user to establish a “page” to represent an organization (including federal agencies), business, or public figure in order to disseminate information to users who choose to connect with them. These users can leave comments in response to information posted on such a page. Profile information for these users may be made available to the administrators of these pages, depending on settings controlled by the user. According to the Facebook site, Facebook has over 500 million active users who spend more than 700 billion minutes per month on Facebook. Twitter is a social networking site that allows users to share and receive information through short messages that are also known as “tweets.” These messages are limited to 140 characters. Twitter users can establish accounts by providing a limited amount of PII but may elect to provide additional PII if they wish. Users can post messages to their profile pages and reply to other Twitter users’ tweets. Users can “follow” other users as well—i.e., subscribe to their tweets. In March 2011, Twitter reported adding an average of 460,000 new accounts and facilitating the delivery of 140 million tweets every day. YouTube is a video-sharing site that allows users to discover, watch, upload, comment on, and share originally created videos. Similar to Twitter, users can establish accounts on YouTube with only limited amounts of PII, although they may choose to provide more detailed information on their profile page. Users can comment on videos posted on a page either in written responses or by uploading their own videos. According to YouTube, during 2010 more than 13 million hours of video were uploaded.
Federal agencies are increasingly using these social media tools to enhance services and interactions with the public. As of April 2011, 23 of 24 major federal agencies had established accounts on Facebook, Twitter, and YouTube. Furthermore, the public increasingly follows the information provided by federal agencies on these services. For example, as of April 2011, the U.S. Department of State had over 72,000 users following its Facebook page; the National Aeronautics and Space Administration (NASA) had over 992,000 Twitter followers; and a video uploaded by NASA on YouTube in December 2010 had over 360,000 views as of April 2011. The Federal Records Act establishes requirements for records management programs in federal agencies. Each federal agency is required to make and preserve records that (1) document the organization, functions, policies, decisions, procedures, and essential transactions of the agency and (2) provide the information necessary to protect the legal and financial rights of the government and of persons directly affected by the agency’s activities. The Federal Records Act defines a federal record without respect to format. Records include all books, papers, maps, photographs, machine readable materials, or other documentary materials, regardless of physical form or characteristics, made or received by an agency of the government under federal law or in connection with the transaction of public business and preserved or appropriate for preservation by that agency as evidence of the organization, functions, policies, decisions, procedures, operations, or other activities of the government or because of the informational value of data in them. The agency responsible for providing guidance for adhering to the Federal Records Act is the National Archives and Records Administration (NARA). 
NARA is responsible for issuing records management guidance; working with agencies to implement effective controls over the creation, maintenance, and use of records in the conduct of agency business; providing oversight of agencies’ records management programs; approving the disposition (destruction or preservation) of records; and providing storage facilities for agency records. In October 2010, NARA issued a bulletin to provide guidance to federal agencies in managing records produced when federal agencies use social media platforms for federal business. The bulletin highlighted the requirement for agencies to decide how they will manage records created in social media environments in accordance with applicable federal laws and regulations. As part of this effort, the guidance emphasized the need for active participation of agency records management staff, Web managers, social media managers, information technology staff, privacy and information security staff, and other relevant stakeholders at each federal agency. The primary laws that provide privacy protections for personal information accessed or held by the federal government are the Privacy Act of 1974 and E-Government Act of 2002. These laws describe, among other things, agency responsibilities with regard to protecting PII. The Privacy Act places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. A system of records is a collection of information about individuals under control of an agency from which information is retrieved by the name of an individual or other identifier. The E-Government Act of 2002 requires agencies to assess the impact of federal information systems on individuals’ privacy. Specifically, the E-Government Act strives to enhance the protection of personal information in government information systems and information collections by requiring agencies to conduct privacy impact assessments (PIA). 
A PIA is an analysis of how personal information is collected, stored, shared, and managed in a federal system. Specifically, according to Office of Management and Budget (OMB) guidance, the purpose of a PIA is to (1) ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. In June 2010, OMB issued guidance to federal agencies for protecting privacy when using Web-based technologies (such as social media). The guidance built upon the protections and requirements outlined in the Privacy Act and E-Government Act and called for agencies to develop transparent privacy policies and notices to ensure that agencies provide adequate notice of their use of social media services to the public, and to analyze privacy implications whenever federal agencies choose to use such technologies to engage with the public. The Federal Information Security Management Act of 2002 (FISMA) established a framework designed to ensure the effectiveness of security controls over information resources that support federal operations and assets. According to FISMA, each agency is responsible for, among other things, providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of the agency and information systems used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency. 
Consistent with its statutory responsibilities under FISMA, in August 2009 the National Institute of Standards and Technology (NIST) issued an update to its guidance on recommended security controls for federal information systems and organizations. The NIST guidance directs agencies to select and specify security controls for information systems based on an assessment of the risk to organizational operations and assets, individuals, other organizations, and the nation associated with operation of those systems. According to the guidance, the use of a risk-based approach is applicable not just to the operation of the agency’s internal systems but is also important when an agency is using technology for which its ability to establish security controls may be limited, such as when using a third-party social media service. In July 2010, we testified that while the use of Web 2.0 technologies, including social media technologies, can transform how federal agencies engage the public by allowing citizens to be more involved in the governing process, agency use of such technologies can also present challenges related to records management, privacy, and security. Records Management: We reported that Web 2.0 technologies raised issues concerning the government’s ability to identify and preserve federal records. Agencies may face challenges in assessing whether the information they generate and receive by means of these technologies constitutes federal records. Furthermore, once the need to preserve information as federal records has been established, mechanisms need to be put in place to capture such records and preserve them properly. We stated that proper records retention management needs to take into account NARA record scheduling requirements and federal law, which require that the disposition of all federal records be planned according to an agency schedule or a general records schedule approved by NARA. 
We highlighted that these requirements may be challenging for agencies because the types of records involved when information is collected via Web 2.0 technologies may not be clear. As previously mentioned, in October 2010, NARA issued further guidance that clarified agency responsibilities in making records determinations. Privacy: We noted, among other things, that agencies faced challenges in ensuring that they are taking appropriate steps to limit the collection and use of personal information made available through social media. We stated that privacy could be compromised if clear limits were not set on how the government uses personal information to which it has access in social networking environments. Social networking sites, such as Facebook, encourage people to provide personal information that they intend to be used only for social purposes. Government agencies that participate in such sites may have access to this information and may need rules on how such information can be used. While such agencies cannot control what information may be captured by social networking sites, they can make determinations about what information they will collect and what to disclose. However, unless rules to guide their decisions are clear, agencies could handle information inconsistently. OMB’s subsequent release of guidance, as previously discussed, clarified agency requirements for such privacy protections. Security: We highlighted that federal government information systems have been targeted by persistent, pervasive, and aggressive threats and that, as a result, personal and agency information needs to be safeguarded from security threats, and that guidance may be needed for employees on how to use social media Web sites properly and how to handle information in the context of social media. Cyber attacks continue to pose a potentially devastating threat to the systems and operations of the federal government. 
In February 2011, the Director of National Intelligence testified that, in the previous year, there had been a dramatic increase in malicious cyber activity targeting U.S. computers and networks, including a more than tripling of the volume of malicious software since 2009. Further, in March 2011, the Federal Trade Commission (FTC) reached an agreement with Twitter to resolve charges that the company deceived consumers and put their privacy at risk by failing to safeguard their personal information. The FTC alleged that serious lapses in the company’s security allowed hackers to obtain unauthorized administrative control of Twitter and send unauthorized tweets from user accounts, including one tweet, purportedly from President Obama, that offered his more than 150,000 “followers” a chance to win $500 in free gasoline, in exchange for filling out a survey. To resolve the charges, Twitter agreed to establish and maintain a comprehensive information security program that would be assessed by an independent auditor every other year for 10 years. According to a Chief Information Officers (CIO) Council report released in September 2009, as the federal government begins to utilize public social media Web sites, advanced persistent threats may be targeted against these Web sites. In addition, attackers may use social media to collect information and launch attacks against federal information systems. Table 1 summarizes three types of security threats identified by the CIO Council that agencies may face when using commercially provided social media services. The rapid development of social media technologies makes it challenging to keep up with the constantly evolving threats deployed against them and raises the risks associated with government participation in such technologies. Federal agencies have been using social media services to support their individual missions. 
While Facebook, Twitter, and YouTube offer unique ways for agencies to interact with the public, we identified several distinct ways that federal agencies are using the three social media services. Despite varying features of the three platforms, agency interactions can be broadly categorized by the manner in which information is exchanged with the public, including reposting information already available on an agency Web site, posting original content not available on agency Web sites, soliciting feedback from the public, responding to comments, and linking to non-government Web sites. Figure 1 shows how the 23 agencies use each of these functions. All 23 agencies used social media to re-post information that is also available on an official agency Web site. This information typically included press releases that agencies issue on mission-related topics or posts to an agency’s blog. Each of the three services was used for reposting information by the agencies. Facebook was used to repost information and direct the public to an agency’s official Web site. For example, the Social Security Administration (SSA) posted a notice on its Facebook page that briefly discussed Social Security benefits and provided a link to SSA’s Web site. The same information was also posted on the SSA Web page. Twitter was used to repost information in an abbreviated format, accompanied by a link to an official agency Web page where the full content was available. For example, the Department of the Interior posted a message (or “tweet”) about an order that the Secretary of the Interior had issued and provided a link to the agency’s Web site where the full order was available. YouTube was generally used to provide an alternate means of accessing videos that were available on the agencies’ Web sites. For example, the Department of Defense (DOD) uploaded a video to its YouTube channel— the Pentagon Channel—that described what was going on at the Pentagon during a particular week. 
The video was also posted on a DOD Web site dedicated to broadcasting military news and information for members of the armed forces. In addition to reposting information, agencies also used social media to post original content that is not available on their Web sites. All 23 agencies used social media to post content not available on the agency’s Web site. Twitter was used most often for this purpose. Facebook was used to post content such as pictures and descriptions of officials on tours or inspections. For example, the Facebook page for the Department of Housing and Urban Development (HUD) featured a picture of the HUD Secretary with President Obama and others while visiting a renovated public housing development during a trip to New Orleans to observe efforts to rebuild the city following Hurricane Katrina. This picture and explanation were not posted to any of HUD’s Web sites. Twitter was often used by agencies to post ephemeral or time-sensitive information. For example, DOD used its Twitter account to encourage its subscribers to sign up to be extras in a movie filming in Washington, D.C. This information and encouragement were not posted on the department’s Web site. YouTube was often used to publish videos of officials discussing topics of interest to the public. An example of this is a video posted to the Department of Energy’s (DOE) YouTube channel on August 2, 2010, in which an official discussed a project for a battery-based energy storage system. Neither this video nor a transcript of the video was found on a DOE Web site. Agencies also used Facebook, Twitter, and YouTube to request comments from the public. This feedback may be received either through the social media service itself or through an agency Web site. Twenty-two of 23 agencies used social media to solicit comments from the public. Of the 22 agencies soliciting feedback, most used Twitter for this purpose. 
Facebook was generally used for feedback solicitation both when the agency wanted the public to provide comments directly via the social media site and when the agency wanted the public to provide comments through an agency Web site. For example, the Department of Veterans Affairs asked on its Facebook page if the readers liked the redesign of the agency’s main Web site. The post received over 50 comments. Twitter was generally used for feedback solicitation when the agency wanted the public to provide comments through an agency Web site. For example, the Department of Education posted a tweet that requested both teachers and parents to comment on their views of what an effective parent-teacher partnership looks like. The post included a link to the department’s blog on its Web site, where individuals could leave comments. YouTube was also used for feedback solicitation. For example, the Department of Transportation uploaded a video to its YouTube channel asking the public to create and upload videos describing how distracted driving has affected their lives. The video received multiple comments from the public expressing their views on driving and using their cell phones at the same time. Agencies also used social media to respond to comments from the public that were posted on the agencies’ social media sites, addressing both administrative and mission-related topics. In these instances, agency responses to public comments were posted to the same social media Web pages where the original comments appeared. Seventeen of the 23 agencies posted responses to public comments on their social media sites. These agencies generally used Facebook or Twitter the most for this activity, with few agencies responding to comments received on their YouTube channels. Agencies used Facebook to respond to comments received on their Facebook pages.
For example, HUD posted information on its Facebook page regarding the department’s allocation of funding for rental assistance for non-elderly persons with disabilities, along with a link to additional information located on the department’s Web site. In response, individuals posted questions and comments, and HUD responded. Twitter was also used by agencies to respond to comments. For example, in a tweet responding to a comment from a Twitter user, the Small Business Administration (SBA) stated that the agency was still tweaking the functionality of a system and, as a means to provide better customer service, asked what e-mail address the individual had used. The agencies we reviewed also used social media sites to post links to non-government Web sites (i.e., Web sites whose addresses do not end in .gov and that are not agency initiatives). For example, agencies often provided links to relevant articles located on news media Web sites. All 23 agencies used social media to post links to non-government Web sites. Of the three social media services, Twitter was used the most, while few agencies used YouTube for this purpose. Twitter lent itself to this use because many of the tweets that Twitter subscribers receive contain links to Web sites providing further information. For example, the Secretary of Transportation posted a Twitter message about a non-government organization’s Web site, along with a link to the site. Federal agencies have made mixed progress in developing records management guidance and assessing privacy and security risks associated with their use of commercially provided social media services. Specifically, 12 of the 23 major federal agencies that use Facebook, Twitter, and YouTube have developed and issued guidance to agency officials that outlines (1) processes and policies for how social media records are identified and managed and (2) record-keeping roles and responsibilities. 
Further, 12 agencies have updated their privacy policies to describe whether they use personal information made available through social media. In addition, eight agencies conducted privacy impact assessments to identify potential risks associated with agency use of the three services. Finally, seven agencies assessed and documented security risks associated with use of the three services and identified mitigating controls to address those risks. Table 2 outlines the extent to which each of the 23 major federal agencies has developed policies and procedures for use of social media. We previously reported that agencies faced challenges in assessing whether the information they generate and receive by means of these services constitutes federal records and in establishing mechanisms for capturing and preserving such records. NARA’s October 2010 bulletin on managing social media records highlighted, among other things, the need to ensure that social media policies and procedures articulate clear records management processes and policies and recordkeeping roles and responsibilities. Establishing such guidance can provide a basis for consistently and appropriately categorizing and preserving social media content as records. Twelve of the 23 major federal agencies have taken steps to include records management guidance in their social media policies and procedures. The scope and breadth of the guidance provided varied by agency. Specifically, eight of the agencies included general statements directing officials responsible for social media content to conform to agency records management policies when identifying and managing records. For example, the Department of Health and Human Services’ social media policy stated that “records management requirements for social media technologies are similar to any other information system and shall be in conformance with existing policy” and provided a Web link to the department’s records management policies. 
Four agencies provided more specific guidance to officials on what social media content constitutes a federal record at their respective agencies. For example, the Department of Justice issued a policy in August 2009 that included a set of questions department officials are to answer in determining the record status of content posted on agency social media pages. Officials were asked to assess, among other things, (a) whether the agency content was original and not published on other agency Web sites, (b) the duration of time the content would need to be retained, and (c) what agency entity would be responsible for preserving and monitoring the information posted on the social media site. Officials from 10 of the 11 agencies that have not yet documented social media guidance for records management reported taking actions to develop such guidance. Officials from 1 other agency (the National Science Foundation) stated that they intended to prepare guidance but did not report taking any actions to do so. However, agency officials are still likely to need clear direction on how to assess social media records when using new technology. NARA noted in a September 2010 study that records management staff in agencies have been overwhelmed by the speed at which agency employees are adopting new social media technologies and that social media adopters have sometimes ignored records management concerns. Until agencies ensure that records management processes and policies and recordkeeping roles and responsibilities are articulated within social media policies, officials responsible for creating and administering content on agency social media sites may not be making appropriate determinations about social media records. Once the need to preserve information as federal records has been established, mechanisms need to be put in place to capture such records and preserve them properly. 
We previously testified that establishing such mechanisms may be challenging for agencies because the types of records involved when information is collected via technologies like social media services may not be clear. Officials at agencies that issued records management guidance for social media generally agreed that determining how to preserve social media content as records remains an issue. For example, officials at the Department of the Interior stated that having information with federal record value on non-government systems—such as those of commercial providers of social media—can create challenges in determining who has control over the information and how and when content should be captured for record-keeping. Participants at a roundtable discussion hosted by the National Academy of Public Administration on our behalf also identified capturing records as a challenge. One participant suggested that further guidance from NARA that includes specific “use cases” as examples would help agencies understand what approaches can be taken to properly capture and preserve social media records. NARA recently identified the need for further study of potential mechanisms for capturing social media content as records. In its September 2010 study, NARA noted that an agency may not have sufficient control over its content to apply records management principles due to the nature of a third-party site. Furthermore, social media technology can change quickly, with functionality being added or changed in ways that could affect records management. As a result, NARA concluded that it should continue to work with other federal agencies to identify best practices for capturing and managing these records. Within its October 2010 bulletin, NARA presented a list of options for how to preserve social media records, such as Web capture tools to create local versions of sites and convert content to other formats. 
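In concept, the Web-capture option NARA describes could look something like the following sketch. This is an illustrative example only, not any agency's actual tool; the file layout, metadata fields, and function names are assumptions made for this sketch.

```python
# Illustrative only: one way a Web capture tool might save a local version
# of a public social media page as a record, together with a SHA-256
# fixity value and capture metadata. All names here are assumptions for
# the sketch, not any agency's actual practice.
import hashlib
import json
import time
import urllib.request
from pathlib import Path

def store_capture(body: bytes, url: str, archive_dir: str = "records_archive") -> Path:
    """Write captured page content plus provenance metadata to a dated folder."""
    captured_at = time.strftime("%Y%m%dT%H%M%SZ", time.gmtime())
    out = Path(archive_dir) / captured_at
    out.mkdir(parents=True, exist_ok=True)
    (out / "page.html").write_bytes(body)
    # The hash lets a records manager later verify the file has not changed.
    meta = {"url": url,
            "captured_at": captured_at,
            "sha256": hashlib.sha256(body).hexdigest()}
    (out / "capture.json").write_text(json.dumps(meta, indent=2))
    return out

def capture_page(url: str, archive_dir: str = "records_archive") -> Path:
    """Fetch a page over HTTP and archive it locally with metadata."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return store_capture(resp.read(), url, archive_dir)
```

A scheduler could call a routine like `capture_page` against each agency account's public page at a fixed interval, producing a dated series of local snapshots that records staff could then retain or dispose of under the applicable records schedule.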
NARA officials stated that activities are underway to provide further assistance to agencies in determining appropriate methods for capturing social media content as federal records. Specifically, in January 2011 NARA initiated a working group in partnership with the Federal Records Council to evaluate Web 2.0 issues regarding records management and develop strategies for capturing social media content as federal records. However, NARA has yet to establish a time frame for issuing new guidance as a result of these efforts. Until guidance is developed that identifies potential mechanisms for capturing social media content as records, potentially important records of government activity may not be appropriately preserved. Social media services often encourage people to provide extensive personal information that may be accessible to other users of those services. Government agencies that participate in such sites may have access to this information and may need to establish controls on how such information can be used. We previously reported that, while such agencies cannot control what information may be captured by social networking sites, they can make determinations about what information they will collect and how it will be used. In June 2010, OMB issued memorandum M-10-23, which specified a variety of actions agencies should take to protect individual privacy whenever they use third-party Web sites and applications to engage with the public. Two key requirements established by OMB were the need for each agency to (1) update its privacy policy in order to provide the public with information on whether the agency uses PII made available through its use of third-party Web sites for any purpose, and (2) conduct privacy impact assessments (PIA) whenever an agency’s use of a third-party Web site makes PII available to the agency. 
Assessing privacy risks is an important element of conducting a PIA because it helps agency officials determine appropriate privacy protection policies and techniques to implement those policies. A privacy risk analysis should be performed to determine the nature of privacy risks and the resulting impact if corrective actions are not implemented to mitigate those risks. Such analysis can be especially helpful in connection with the use of social media because there is a high likelihood that PII will be made available to the agency. Twelve out of 23 agencies updated their privacy policies to include discussion of the use of personal information made available through social media services. In general, agencies stated that while PII was made available to them through their use of social media services, they did not collect or use the PII. For example, HUD updated the privacy policy on its main Web site, www.hud.gov, to state that “no personally identifiable information (PII) may be requested or collected from social media sites.” As another example, the Department of Energy included a discussion of its policy of removing PII that may be posted on its social media page, noting that officials reserved the right to moderate or remove comments that include PII. Officials from 5 of the 11 agencies that have not updated their privacy policies reported taking actions to do so. Officials from 6 additional agencies (the Departments of Commerce, Health and Human Services, Labor, and Transportation; the National Aeronautics and Space Administration; and the Social Security Administration) stated that they intended to update their privacy policies but did not report taking any actions to do so. Eight agencies conducted PIAs to assess the privacy risks associated with their use of the three services. 
For example, the Department of Homeland Security (DHS) published a PIA that assessed the risks of the agency’s use of social networking tools, including the potential for agency access to the personal information of individuals interacting with the department on such sites. To mitigate this risk, the department established a policy prohibiting the collection of personal information by DHS officials using social media sites. Likewise, the Department of Transportation completed a PIA for the use of third-party Web sites and applications, including Facebook, Twitter, and YouTube. The PIA outlined, among other things, what types of PII may potentially be made available to the agency through its use of social media, including the name, current residence, and age of users who may friend, follow, subscribe to, or otherwise interact with an official department page on a third-party site. In these instances, the department’s PIA directed officials to avoid capturing and using the PII and to redact any PII contained in screenshots that may be saved for recordkeeping purposes. Officials from 13 agencies had not completed PIAs for their use of any of the social media services, while an additional 2 agencies performed assessments that only evaluated risks associated with using Facebook. Officials from 10 of these agencies reported taking actions to conduct the assessments. Officials from 2 other agencies (the Department of State and the Small Business Administration) stated that they intended to conduct assessments but did not report taking any actions to do so. Officials from the other 3 agencies (the Departments of Agriculture and the Treasury; and the General Services Administration) stated that they did not plan to conduct PIAs because they were not planning to collect personal information provided on their social media sites and, therefore, an assessment was unnecessary. 
However, OMB’s guidance states that when an agency takes action that causes PII to become accessible to agency officials—such as posting information on a Facebook page that allows the public to comment—PIAs are required. Given that agency officials have access to comments that may contain PII and could collect and use the information for another purpose, it is important that an assessment be conducted, even if there are no plans to save the information to an agency system. Without updating privacy policies and performing and publishing PIAs, agency officials and the public lack assurance that all potential privacy risks have been evaluated and that protections have been identified to mitigate them. Pervasive and sustained cyber attacks continue to pose a potentially devastating threat to the systems and operations of the federal government. As part of managing an effective agencywide information security program to mitigate such threats, FISMA requires that federal agencies conduct periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of agency information and information systems. To help agencies implement such statutory requirements, NIST developed a risk management framework for agencies to follow in developing information security programs. As part of this framework, federal agencies are to assess security risks associated with information systems that process federal agency information and identify security controls that can be used to mitigate the identified risks. In associated guidance, NIST highlighted that using such a risk-based approach is also important in circumstances where an organization is employing information technology beyond its ability to adequately protect essential missions and business functions, such as when using commercially provided social media services. 
By identifying the potential security threats associated with use of such third-party systems, agencies can establish proper controls and restrictions on agency use. Seven out of 23 agencies performed and documented security risk assessments concerning their use of the three social media services. For example, the Department of Labor outlined the agency’s use of the three tools within one risk assessment, evaluating potential threats and vulnerabilities and recommending controls to mitigate risks associated with those threats and vulnerabilities. The department identified, among other things, the potential risk of having unauthorized information posted to its social media page by agency officials with social media responsibilities and identified the need for such individuals to receive training on proper use of social media sites. Additionally, a Department of Health and Human Services security document stated that, due to risks associated with use of social media, including the potential for social media sites to be used as a vehicle for transmitting malicious software, the department would block use of social media sites—including Facebook, Twitter, and YouTube—by employees, with specific allowances made for those with documented business needs. According to officials, 16 agencies had not completed and documented assessments for their use of any of the social media services. Officials from 12 of these agencies reported that they were taking actions to conduct security risk assessments but had not yet completed them. Officials from 2 additional agencies (the Department of Commerce and the National Science Foundation) stated that they intended to conduct assessments but did not report taking any actions to do so. Officials at 1 other agency (the Department of State) reported that they did not plan to conduct assessments because their internal policies and procedures did not require them to perform risk assessments. 
As we previously stated, however, NIST guidance requires the application of the risk management process to social networking uses to establish proper controls and restrictions on agency use. Officials from 1 other agency (the Department of Transportation) reported that they had conducted a security risk assessment but did not document the results. Without such documentation, the agency may lack evidence of the justification and rationale for decisions made based on the risk assessment and, consequently, the assurance that security controls have been implemented to properly address identified security threats. Without conducting and documenting a risk assessment, agency officials cannot ensure that appropriate controls and mitigation measures are in place to address potentially heightened threats associated with social media, including spear phishing and social engineering. Federal agencies are increasingly making use of social media technologies, including Facebook, Twitter, and YouTube, to provide information about agency activities and interact with the public. While the purposes for which agencies use these tools vary, they have the potential to improve the government’s ability to disseminate information, interact with the public, and improve services to citizens. However, the widespread use of social media technologies also introduces risks, and agencies have made mixed progress in establishing appropriate policies and procedures for managing records, protecting the privacy of personal information, and ensuring the security of federal systems and information. Specifically, just over half of the major agencies using social media have established policies and procedures for identifying what content generated by social media is necessary to preserve in order to ensure compliance with the Federal Records Act, and they continue to face challenges in effectively capturing social media content as records. 
Without clear policies and procedures for properly identifying and managing social media records, potentially important records of government activity may not be appropriately preserved. In addition, most agencies have not updated their privacy policies or assessed the impact their use of social media may have on the protection of personal information from improper collection, disclosure, or use, as called for in recent OMB guidance. Performing PIAs and updating privacy policies can provide individuals with better assurance that all potential privacy risks associated with their personal information have been evaluated and that protections have been identified to mitigate them. Finally, most agencies did not have documented assessments of the security risks that social media can pose to federal information or systems in alignment with FISMA requirements, which could result in the loss of sensitive information or unauthorized access to critical systems supporting the operations of the federal government. Without conducting and documenting a risk assessment, agency officials cannot ensure that appropriate controls and mitigation measures are in place to address potentially heightened threats associated with social media, such as spear phishing and social engineering. To ensure that federal agencies have adequate guidance to determine the appropriate method for preserving federal records generated by content presented on agency social media sites, we recommend that the Archivist of the United States develop guidance on effectively capturing records from social media sites and that this guidance incorporate best practices. We are also making 32 recommendations to 21 of the 23 departments and agencies in our review to improve their development and implementation of policies and procedures for managing and protecting information associated with social media use. Appendix II contains these recommendations. 
We sent draft copies of this report to the 23 agencies covered by our review, as well as to the National Archives and Records Administration. We received written or e-mail responses from all the agencies. A summary of their comments, along with our responses where appropriate, is provided below. In providing written comments on a draft of this report, the Archivist of the United States stated that NARA concurred with the recommendation to develop guidance on effectively capturing records from social media sites and that the agency would incorporate best practices in this guidance. NARA’s comments are reprinted in appendix III. Of the 21 agencies to which we made recommendations, 12 (the Departments of Defense, Education, Energy, Homeland Security, Housing and Urban Development, and Veterans Affairs; the Environmental Protection Agency; the National Aeronautics and Space Administration; the National Science Foundation; the Office of Personnel Management; the Social Security Administration; and the U.S. Agency for International Development) agreed with our recommendations. Two of the 21 agencies (the Departments of Commerce and Health and Human Services) generally agreed with our recommendations but provided qualifying comments: In written comments on a draft of the report, the Secretary of Commerce concurred with our two recommendations but provided qualifying comments about the second. Regarding our recommendation that the department conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats, he stated that the department had a policy in place that requires risk-based assessments to be conducted of social media technologies used by the department in order to determine if mitigating strategies, such as access or usage limitation, are warranted. 
However, the department did not provide documentation demonstrating that it had completed and documented any of the required risk assessments. The department’s comments are reprinted in appendix IV. In an e-mail response on a draft of the report, a Department of Health and Human Services Senior Information Security Officer stated that the department agreed with our recommendation to update its privacy policy. However, the department disagreed with the perceived finding that it had not made progress in conducting a PIA and reported recent efforts to do so. We did not intend to suggest that the department had not taken any steps to develop a PIA, and we updated our report to clarify that the department has taken actions to develop PIAs for its social media use. However, the agency has not yet completed its PIA and thus may lack assurance that all potential privacy risks have been evaluated and that protections have been identified to mitigate them. Three of the 21 agencies (the Departments of Agriculture and State; and the General Services Administration) did not concur with all of the recommendations made to them: In written comments on a draft of the report, the Department of Agriculture’s CIO disagreed with our recommendation that the department conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. Specifically, the CIO stated that the department had completed a Privacy Threshold Analysis that indicated that a PIA was not required since the department did not solicit, collect, or retain PII through its social media sites. However, as indicated in our report, OMB’s guidance states that when an agency takes action that causes PII to become accessible to agency officials—such as posting information on a Facebook page that allows the public to comment—PIAs are required. 
Without a PIA, the department may lack assurance that all potential privacy risks have been evaluated and that protections have been identified to mitigate them. The Department of Agriculture’s comments are reprinted in appendix V. In written comments on a draft of the report, the Department of State’s Chief Financial Officer concurred with one of our two recommendations, but not the other. Specifically, regarding our recommendation that the department conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats, he stated that the department shared GAO’s concern regarding the security of information in commercially provided social media but that since the department had already determined that its use of social media sites would be limited to providing the public with “low-impact” information, no further risk assessment or certification and accreditation was required. He further stated that the impact on confidentiality, integrity, and availability of systems with such non-structured data could only be determined by policy, not by risk analysis and, therefore, a security risk assessment was not warranted. However, although limiting the type of information that is processed on third-party systems can be an effective mitigating security control, without conducting and documenting a risk assessment, agency officials cannot ensure that policies and mitigation measures effectively address potentially heightened threats associated with social media, including spear phishing and social engineering. The Department of State’s comments are reprinted in appendix VI. In written comments on a draft of the report, the Administrator of the General Services Administration partially agreed with our two recommendations. 
Regarding our recommendation that the agency update its privacy policies to describe whether PII made available through its use of social media services is collected and used, the Administrator noted that the agency was updating its privacy directive to describe the agency’s practices for handling PII made available through the use of social media. Accordingly, we have updated our report to indicate that the agency has taken actions to update its privacy policies for its use of social media. Regarding our recommendation that the agency conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them, the Administrator stated that no PII is sought by or provided to GSA as a result of the agency’s use of Facebook, YouTube, and Twitter and, therefore, the agency determined that conducting a PIA was unnecessary. However, as indicated in our report, OMB’s guidance states that when an agency takes action that causes PII to become accessible to agency officials—such as posting information on a Facebook page that allows the public to comment—PIAs are required. Without a PIA, the agency may lack assurance that all potential privacy risks have been evaluated and that protections have been identified to mitigate them. The General Services Administration’s comments are reprinted in appendix VII. Four of the 21 agencies did not comment on the recommendations addressed to them. Specifically, the Departments of Labor and Transportation reported that they did not have any comments, and the Department of the Treasury and the Small Business Administration only provided technical comments, which we addressed in the final report as appropriate. In cases where these 21 agencies also provided technical comments, we have addressed them in the final report as appropriate. 
With their comments, agencies also provided information regarding actions completed or underway to address our findings and recommendations, and we updated our report to recognize those efforts. Additional written comments are reprinted in appendices VIII through XVII. We also received e-mail responses from the 2 agencies to which we did not make recommendations. Specifically, the Department of the Interior provided technical comments via e-mail and the Department of Justice stated that it did not have comments on the draft of this report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies of this report to other interested congressional committees; the Secretaries of the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Labor, State, Transportation, the Treasury, and Veterans Affairs; the Attorney General; the Administrators of the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Small Business Administration, and U.S. Agency for International Development; the Commissioner of the Social Security Administration; the Directors of the National Science Foundation and Office of Personnel Management; and the Archivist of the United States. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6244 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XVIII. 
Our objectives were to (1) describe how agencies are currently using commercially provided social media services and (2) determine the extent to which federal agencies have developed and implemented policies and procedures for managing and protecting information associated with the use of commercially provided social media services. To address our first objective, we examined the headquarters-level Facebook pages, Twitter accounts, and YouTube channels associated with each of the 24 major federal agencies covered by the Chief Financial Officers Act to describe the types of information agencies disseminated via the services and the nature of their interactions with the public. We selected these three services because of their widespread use within the federal government (23 out of 24 major agencies use each of the services) as well as their broad popularity with the public. We reviewed content on the social media pages, including agency posts as well as comments provided by the public, from July 2010 through January 2011. We categorized agency use based on types of information found on their social media pages. These categories were (1) reposting information available on agency Web sites; (2) posting content not available on agency Web sites; (3) soliciting comments; (4) responding to comments on posted content; and (5) providing links to non-government Web sites. Each agency social media page was reviewed by an analyst to determine whether information had been posted that fell into one of the five categories. Each identified example was corroborated by a second analyst. In the event no examples were identified for an agency in a specific category by the first analyst, the second analyst conducted an additional independent review of agency posts to confirm that none existed. 
To address our second objective, we reviewed pertinent records management, privacy, and security policies, procedures, guidance, and risk assessments in place at each of the 23 federal agencies and compared them to relevant federal records management, privacy, and security laws, regulations, and guidance. These included the Federal Records Act, the Privacy Act of 1974, the E-Government Act of 2002, the Federal Information Security Management Act of 2002 (FISMA), as well as guidance from the National Archives and Records Administration (NARA), Office of Management and Budget (OMB), and National Institute of Standards and Technology (NIST). We interviewed officials at each of these agencies to discuss recent efforts to oversee the development of social media policies and procedures and assess risks. We also reviewed relevant reports and studies to identify records management, privacy, and security risks associated with social media use by federal agencies. We interviewed officials from OMB, NARA, and NIST, and members of the Chief Information Officer Council to develop further understanding of federal agency requirements for properly managing and protecting information associated with social media use. Further, we coordinated with the National Academy of Public Administration, which hosted a roundtable discussion on our behalf where views on these issues were solicited from federal agency officials involved in agency use of social media. Finally, we interviewed representatives of Facebook, Twitter, and YouTube to discuss records management, privacy, and security issues and their current and planned approaches regarding interactions with federal agencies. We conducted this performance audit from July 2010 to June 2011 in the Washington, D.C., area, in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To ensure that appropriate privacy measures are in place when commercially provided social media services are used, we recommend that the Secretary of Agriculture take the following action: Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. To ensure that appropriate privacy and security measures are in place when commercially provided social media services are used, we recommend that the Secretary of Commerce take the following two actions: Update privacy policies to describe whether PII made available through use of social media services is collected and used. Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate privacy and security measures are in place when commercially provided social media services are used, we recommend that the Secretary of Defense take the following action: Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. To ensure that appropriate privacy measures are in place when commercially provided social media services are used, we recommend that the Secretary of Education take the following action: Update privacy policies to describe whether PII made available through use of social media services is collected and used. 
To ensure that appropriate security measures are in place when commercially provided social media services are used, we recommend that the Secretary of Energy take the following action: Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate privacy measures are in place when commercially provided social media services are used, we recommend that the Secretary of Health and Human Services take the following action: Update privacy policies to describe whether PII made available through use of social media services is collected and used. To ensure that appropriate security measures are in place when commercially provided social media services are used, we recommend that the Secretary of Homeland Security take the following action: Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate security measures are in place when commercially provided social media services are used, we recommend that the Secretary of Housing and Urban Development take the following action: Conduct and document a security risk assessment to assess security threats associated with agency use of Twitter and YouTube and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate privacy measures are in place when commercially provided social media services are used, we recommend that the Secretary of Labor take the following action: Update privacy policies to describe whether PII made available through use of social media services is collected and used. 
To ensure that appropriate privacy and security measures are in place when commercially provided social media services are used, we recommend that the Secretary of State take the following two actions: Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of Twitter and YouTube and identifies protections to address them. Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate privacy and security measures are in place when commercially provided social media services are used, we recommend that the Secretary of Transportation take the following two actions: Update privacy policies to describe whether PII made available through use of social media services is collected and used. Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate privacy measures are in place when commercially provided social media services are used, we recommend that the Secretary of the Treasury take the following action: Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. To ensure that appropriate records management and privacy measures are in place when commercially provided social media services are used, we recommend that the Secretary of Veterans Affairs take the following two actions: Add records management guidance to agency social media policies that describes records management processes and policies and recordkeeping roles and responsibilities. 
Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. To ensure that appropriate privacy and security measures are in place when commercially provided social media services are used, we recommend that the Administrator of the Environmental Protection Agency take the following two actions: Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate privacy measures are in place when commercially provided social media services are used, we recommend that the Administrator of the General Services Administration take the following two actions: Update privacy policies to describe whether PII made available through use of social media services is collected and used. Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. To ensure that appropriate privacy and security measures are in place when commercially provided social media services are used, we recommend that the Administrator of the National Aeronautics and Space Administration take the following three actions: Update privacy policies to describe whether PII made available through use of social media services is collected and used. Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. 
Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate records management and security measures are in place when commercially provided social media services are used, we recommend that the Director of the National Science Foundation take the following two actions: Add records management guidance to agency social media policies that describes records management processes and policies and recordkeeping roles and responsibilities. Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate privacy and security measures are in place when commercially provided social media services are used, we recommend that the Director of the Office of Personnel Management take the following two actions: Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. To ensure that appropriate privacy measures are in place when commercially provided social media services are used, we recommend that the Administrator of the Small Business Administration take the following action: Conduct and document a privacy impact assessment that evaluates potential privacy risks associated with agency use of social media services and identifies protections to address them. 
To ensure that appropriate privacy measures are in place when commercially provided social media services are used, we recommend that the Commissioner of the Social Security Administration take the following action: Update privacy policies to describe whether PII made available through use of social media services is collected and used. To ensure that appropriate records management and security measures are in place when commercially provided social media services are used, we recommend that the Administrator of the U.S. Agency for International Development take the following two actions: Add records management guidance to agency social media policies that describes records management processes and policies and recordkeeping roles and responsibilities. Conduct and document a security risk assessment to assess security threats associated with agency use of commercially provided social media services and identify security controls that can be used to mitigate the identified threats. The following are GAO’s comments to the U.S. Department of Commerce’s letter dated May 27, 2011. 1. The department did not provide documentation demonstrating that it had completed and documented any of the required risk assessments. The following are GAO’s comments to the U.S. Department of Agriculture’s letter dated May 27, 2011. 1. After reviewing additional documentation and comments provided by department representatives, we updated our report to indicate that the department asserted that it is taking actions to develop records management guidance for social media use, although it has not yet been completed. We have not evaluated these actions. 2. After reviewing the updated privacy policy on the Department’s Web site, we agree that the agency has met the requirement, and we have modified table 2 in the final report to reflect that the department has updated its policy. 3. We believe that a PIA is required. 
As indicated in our report, OMB’s guidance states that when an agency takes action that causes PII to become accessible to agency officials—such as posting information on a Facebook page that allows the public to comment—PIAs are required. Without a PIA, the department may lack assurance that all potential privacy risks have been evaluated and that protections have been identified to mitigate them. The following are GAO’s comments to the Department of State’s letter dated May 31, 2011. 1. After reviewing additional comments provided by department representatives, we updated our report to indicate that the department has plans to develop a PIA for its use of YouTube and Twitter. 2. We believe that conducting and documenting a risk assessment is necessary. Although limiting the type of information that is processed on third-party systems can be an effective mitigating security control, without conducting and documenting a risk assessment, agency officials cannot ensure that appropriate controls and mitigation measures are in place to address potentially heightened threats associated with social media, including spear phishing and social engineering. The following are GAO’s comments to the General Services Administration’s letter dated June 3, 2011. The following are GAO’s comments to the Department of Defense’s letter dated May 27, 2011. 1. We updated our report to indicate that the department asserted that it is taking actions to develop a PIA for its social media use, although it has not yet been finalized. We have not evaluated these actions. 2. After reviewing the additional documentation provided, we agree that the department met the requirement of conducting and documenting a security risk assessment. We modified the report, as appropriate, and removed the recommendation. The following are GAO’s comments to the Department of Education’s letter dated May 25, 2011. 1. 
After reviewing the privacy policy on the department’s Web site, we updated our report to indicate that the department asserted that it is taking actions to develop privacy policies addressing the agency’s use of PII made available through social media services. We confirmed these actions. 2. After reviewing additional efforts stated by the department, we updated our report to indicate that the department asserted that it is taking actions to develop records management guidance for social media use, although such guidance has not yet been finalized. We have not evaluated these actions. 3. After reviewing additional efforts stated by the department, we updated our report to indicate that the department asserted that it is taking actions to conduct and document a PIA related to its use of social media, although it has not yet been finalized. We have not evaluated these actions. 4. After reviewing additional efforts stated by the department, we updated our report to indicate that the department asserted that it is taking actions to conduct and document a security risk assessment related to its use of social media, although the assessment has not yet been finalized. We have not evaluated these actions. The following are GAO’s comments to the Department of Homeland Security’s letter dated June 6, 2011. 1. After reviewing additional efforts stated by the department, we updated our report to indicate that the department asserted that it is taking actions to conduct and document a security risk assessment related to its use of social media, although the assessment has not yet been finalized. We have not evaluated these actions. The following are GAO’s comments to the Department of Housing and Urban Development’s letter dated May 27, 2011. 1. After reviewing the additional documentation provided, we updated our report to indicate that the department asserted that it is taking actions to conduct and document a security risk assessment related to its use of social media. 
We confirmed these actions. The following are GAO’s comments to the Department of Veterans Affairs’ letter dated May 31, 2011. 1. After reviewing additional comments provided by department representatives, we updated our report to indicate that the department asserted that it is taking actions to develop records management guidance for social media use, although the guidance has not yet been finalized. We have not evaluated these actions. 2. After reviewing additional comments provided by department representatives, we updated our report to indicate that the department asserted that it is taking actions to develop a PIA for its social media use, although the PIA has not yet been finalized. We have not evaluated these actions. The following are GAO’s comments to the Environmental Protection Agency’s letter dated May 25, 2011. 1. After reviewing additional comments provided by agency representatives, we updated our report to indicate that the agency asserted that it is taking actions to develop a PIA for its social media use, although the PIA has not yet been finalized. We have not evaluated these actions. 2. After reviewing additional comments provided by agency representatives, we updated our report to indicate that the agency asserted that it is taking actions to develop a security risk assessment for social media use, although the assessment has not yet been finalized. We have not evaluated these actions. The following are GAO’s comments to the National Aeronautics and Space Administration’s letter dated May 31, 2011. 1. After reviewing additional efforts stated by the agency, we updated our report to indicate that the agency has plans to develop privacy policies addressing the agency’s use of PII made available through social media services. 2. 
After reviewing additional comments provided by agency representatives, we updated our report to indicate that the agency asserted that it is taking actions to develop a PIA for its social media use, although the PIA has not yet been finalized. We have not evaluated these actions. 3. After reviewing additional comments provided by agency representatives, we updated our report to indicate that the agency asserted that it is taking actions to develop a security risk assessment for social media use, although the assessment has not yet been finalized. We have not evaluated these actions. The following are GAO’s comments to the Office of Personnel Management’s letter dated May 24, 2011. 1. After reviewing additional comments and materials provided by agency representatives, we updated our report to indicate that the agency asserted that it is taking actions to develop both a PIA and a security risk assessment for its social media use. We have not evaluated these actions. The following are GAO’s comments to the Social Security Administration’s letter dated May 26, 2011. 1. After reviewing additional comments stated by the agency, we updated our report to indicate that the agency has plans to develop privacy policies addressing the agency’s use of PII made available through social media services. The following are GAO’s comments to the U.S. Agency for International Development’s letter received on May 27, 2011. 1. After reviewing additional comments provided by agency representatives, we updated our report to indicate that the agency asserted that it is taking actions to develop records management guidance for social media use. We have not evaluated these actions. 2. After reviewing additional comments provided by agency representatives, we updated our report to indicate that the agency asserted that it is taking actions to develop a security risk assessment for social media use, although the assessment has not yet been finalized. We have not evaluated these actions. 
In addition to the contact above, John de Ferrari, Assistant Director; Sher'rie Bacon; Marisol Cruz; Jennifer Franks; Fatima Jahan; Nicole Jarvis; Nick Marinos; Lee McCracken; Thomas Murphy; Constantine Papanastasiou; David Plocher; Dana Pon; Matthew Strain; and Jeffrey Woodward made key contributions to this report.
Federal agencies increasingly use recently developed Internet technologies that allow individuals or groups to create, organize, comment on, and share online content. The use of these social media services--including popular Web sites like Facebook, Twitter, and YouTube--has been endorsed by President Obama and provides opportunities for agencies to more readily share information with and solicit feedback from the public. However, these services may also pose risks to the adequate protection of both personal and government information. GAO was asked to (1) describe how federal agencies are currently using commercially provided social media services and (2) determine the extent to which agencies have developed and implemented policies and procedures for managing and protecting information associated with this use. To do this, GAO examined the headquarters-level Facebook pages, Twitter accounts, and YouTube channels of 24 major federal agencies; reviewed pertinent policies, procedures, and guidance; and interviewed officials involved in agency use of social media. Federal agencies have been adapting commercially provided social media technologies to support their missions. Specifically, GAO identified several distinct ways that 23 of 24 major agencies are using Facebook, Twitter, and YouTube. These include reposting information available on official agency Web sites, posting information not otherwise available on agency Web sites, soliciting comments from the public, responding to comments on posted content, and providing links to non-government sites. For example, agencies used Facebook to post pictures or descriptions of the activities of agency officials and to interact with the public. Agencies used Twitter to provide information in an abbreviated format and to direct the public back to official agency sites. 
YouTube was used to provide alternate means of accessing videos available on official agency sites, to share videos of agency officials discussing topics of interest, or to solicit feedback from the public. The use of these services can pose challenges in managing and identifying records, protecting personal information, and ensuring the security of federal information and systems. However, the 23 major agencies that GAO identified as using social media have made mixed progress in developing and implementing policies and procedures to address these challenges: (1) Records management: 12 of the 23 agencies have developed and issued guidance that outlines processes and policies for identifying and managing records generated by their use of social media and record-keeping roles and responsibilities. (2) Privacy: 12 agencies have updated their privacy policies to describe whether they use personal information made available through social media, and 8 conducted and documented privacy impact assessments to identify potential privacy risks that may exist in using social media given the likelihood that personal information will be made available to the agency by the public. (3) Security: 7 agencies identified and documented security risks (such as the potential for an attacker to use social media to collect information and launch attacks against federal information systems) and mitigating controls associated with their use of social media. In several cases, agencies reported having policies in development to address these issues. In other cases, agencies reported that there was no need to have policies or procedures that specifically address the use of social media, since these are addressed in existing policies. 
However, social media technologies present unique challenges and risks, and without establishing guidance and assessing risks specific to social media, agencies cannot be assured that they are adequately meeting their responsibilities to manage and preserve federal records, protect the privacy of personal information, and secure federal systems and information against threats. GAO recommends that agencies ensure that appropriate records management, privacy, and security measures are in place. Most of the agencies agreed with GAO's recommendations. Three agencies did not agree with recommendations made to them; GAO maintains that the actions are necessary.
Symptoms of temporomandibular joint and muscle disorders vary but typically include pain of the jaw joint and surrounding muscles. Other symptoms include limited or no movement of the jaw joint, clicking or grating in the jaw joint when opening or closing the mouth, headaches, and shoulder or back pain. According to the National Institutes of Health, most patients’ symptoms improve significantly or disappear within weeks or months, while a smaller number of patients have significant long-term symptoms. Trauma to the jaw or jaw joint can contribute to temporomandibular joint and muscle disorders in some instances; however, the causes of most cases of temporomandibular joint and muscle disorders are unknown. There is a range of treatments available for temporomandibular joint and muscle disorders; some are conservative and temporary while others are irreversible. Experts recommend that the most conservative treatment be used to relieve symptoms before irreversible treatments are used. Conservative treatments can include taking pain medications, using a splint or bite guard, applying ice packs, or eating soft food. Irreversible treatments include grinding down the teeth to change a person’s bite or surgical procedures such as replacing all or a portion of the jaw joint with TMJ implants. Total TMJ implants replace both the upper (articular fossa) and lower (condyle) portions of the jaw joint, whereas partial TMJ implants replace only the upper portion. (See fig. 1.) TMJ implants may improve the function of the jaw joint; however, pain, which is a chief complaint of many who suffer from temporomandibular joint and muscle disorders, is not always relieved. Medical devices, including TMJ implants, are regulated by FDA, through its Center for Devices and Radiological Health. TMJ implants are classified as Class III devices. Class III devices include those that present a significant risk of illness or injury to the patient. 
Prior to the marketing of most Class III devices, FDA must approve a PMA application. The PMA review requires sufficient and valid scientific evidence to assure that a medical device is safe and effective for its intended use. In making this determination, FDA officials—including FDA staff known as the review team and two levels of FDA management—must consider if there is reasonable assurance that the probable benefits to health of the device outweigh any probable risks. They must also consider whether the device is effective by evaluating data provided by the sponsor for “clinically significant results.” The review team examines clinical studies of the device involving human subjects, engineering testing performed on the device, and other aspects of the PMA application such as device labeling. It may also obtain input from one of its external advisory boards—in the case of TMJ implants, its dental products panel—for its evaluation and recommendation regarding approval. If the review team has concerns about the PMA application, it contacts the sponsor for more information. In some cases the review team may determine that it needs significant additional information to complete the scientific review, in which case it issues a deficiency letter to the device sponsor indicating the information that is needed. The sponsor can respond by submitting an amendment to the original application. The review team can continue to issue deficiency letters and receive amendments from sponsors until it determines that it has the information needed to make a recommendation regarding approval. Once the PMA review is complete, the review team makes a recommendation regarding approval. This recommendation is subject to review by the two levels of FDA management. 
Along with the recommendation, the review team forwards the information provided by the sponsor and its assessment of the PMA application, including the individual reviews (such as engineering, clinical, and statistical reviews) and a team leader summary. The review team sends this package to the first level of management. If this level of management agrees with the review team’s recommendation, the review package is sent to the second level for final review. The second level of management may concur with or override the decision made at the previous management level. Management can make a recommendation regarding approval even if some concerns regarding the PMA remain unaddressed; however, a device can only be approved for marketing if FDA concludes that its benefits outweigh its risks. If a member of the review team or the first level of FDA management disagrees with the final decision, an internal “respectful disagreement memo” can be written indicating the reason for the disagreement. FDA decisions regarding approval of devices can take four forms: (1) issuing an order approving the application, which allows the sponsor to begin marketing the device; (2) sending the sponsor an “approvable” letter indicating that the sponsor needs to provide more information; (3) issuing a “not approvable” letter informing the sponsor of the application’s weaknesses; or (4) issuing an order denying approval of the application. Once a device has been approved, the sponsor must comply with postmarket regulations and restrictions that apply to the device. FDA may also impose postmarket approval or condition of approval requirements that apply specifically to the device that is the subject of the PMA. Conditions of approval can include requirements such as the continuation of a clinical study to collect additional data. Some conditions of approval do not expire, such as reporting adverse events and submitting annual reports, including a summary of all changes to the device. 
Others are time-limited, such as continuing a clinical study for a specified number of years after the approval of a device. In their review of the four PMA applications, FDA officials raised concerns that were similar for all four devices. FDA addressed many concerns raised in the approval process by obtaining additional information from sponsors to clarify and supplement data contained in their PMA applications. It also approved all four devices but required sponsors to comply with conditions of approval. However, some concerns were left unaddressed upon approval. In addition, the FDA review team and two levels of FDA management did not agree on the assessment of the safety and effectiveness of the two TMJ Implants, Inc., devices. Ultimately, according to FDA management, the primary justification for approving these devices was that the potential benefit to the patients outweighed the concerns raised and there did not appear to be a prohibitory risk associated with the devices. We grouped the concerns FDA raised during the PMA process into four main categories: study protocol, patient follow-up, engineering testing, and other concerns. These categories and types of concerns are shown in table 1. In FDA’s review of the PMA applications, we observed similar concerns across most of them. For example: All four PMA applications had incomplete or insufficient data to draw conclusions from the clinical studies. For example, FDA officials were concerned that because the Walter Lorenz clinical study was primarily conducted at one site, the physician at this site might have more expertise in implanting the device than a typical physician, potentially biasing the results. Officials were uncertain if equally favorable results would be obtained at other sites when the implant procedure was performed by less-experienced physicians. 
All four PMA applications had deficient patient follow-up information, which prevented a satisfactory evaluation of the study results, such as improvement in patient symptoms and survivability of the implant. In three of the four PMA applications, concerns were raised about the lack of information specifying the clinical diagnosis of the patients included in their clinical studies. This made it difficult for the review team to interpret the types of clinical conditions for which the devices are appropriate. In three of the four PMA applications, concerns existed regarding inaccurate measurement of data. For example, neither TMJ Concepts’ nor TMJ Implants, Inc.’s, total implant clinical data followed the same cohort of patients over time. This made it difficult for the review team to determine whether the device produced improvements in patients. The clinical data for TMJ Implants, Inc.’s, partial implant were compromised because medications used by patients were not documented in the study. Any use of medications could have affected patient outcomes. In three of the four PMA applications, the review team indicated that additional implant wear and fatigue testing needed to be conducted. For example, the team wanted TMJ Implants, Inc., (total implant) to conduct wear debris analysis. This analysis could help determine whether material wears off the implant over time and could be absorbed into the patient’s body. FDA addressed the concerns it raised in its review of the PMA applications in two ways: (1) by communicating with sponsors and collecting additional information from them and (2) by approving the devices with conditions. FDA addressed many of its concerns by collecting information from sponsors to clarify and supplement their PMA applications before approving the devices. 
For example, FDA officials met with representatives of TMJ Concepts and TMJ Implants, Inc., (partial implant) to discuss concerns, such as unsupported indications for use of the device and inconsistent patient follow-up in the clinical studies. In addition, in many instances throughout the review process, FDA officials wrote the sponsors—highlighting problems with the applications—and reviewed their written responses. For example, FDA sent e-mails to Walter Lorenz regarding concerns related to the microbiology, packaging, and shelf life of its device. Walter Lorenz replied to FDA’s questions and requests for information, and these concerns were addressed. Correspondence between FDA officials and sponsors often continued for at least 3 months, and in most cases longer, until concerns were addressed. The second manner in which FDA addressed concerns was by approving the four TMJ implants with certain conditions. A condition of approval common to all four TMJ implants included the requirement that a postmarket study be conducted, which would collect patient data for at least 3 years. This condition of approval addressed FDA’s concerns regarding study protocol and patient follow-up. Other conditions of approval addressed concerns related to a lack of patient history data and inadequate wear testing, among others. TMJ Concepts and TMJ Implants, Inc., (total implant) were required to include patient history data in their postmarket studies. Further, TMJ Concepts and TMJ Implants, Inc., (partial implant) were required to conduct wear analysis in order to address concerns related to inadequate wear testing. While FDA addressed the majority of concerns for each implant, we identified some concerns that remained unaddressed—concerns that were not offset or countered by a condition of approval or by FDA correspondence with the sponsor—upon approval. FDA officials examined these unaddressed concerns during the PMA process. 
However, they determined that the probable benefits of the devices outweighed the probable risks and therefore approved them. The unaddressed concerns for the devices were as follows and are expanded upon in appendix I: TMJ Concepts: The unaddressed concerns related to inadequate and inaccurate study results. For example, FDA officials indicated that data for implants on the right and left sides of the jaw should have been analyzed separately, but the data collected did not allow for this type of analysis. TMJ Implants, Inc. (total implant): The unaddressed concerns related to the category of other concerns—unaddressed microbiology, packaging, and shelf-life issues. For example, there was a concern regarding the procedures used for implants that will be shipped multiple times, which could occur if a physician shipped an unused implant back to the sponsor. TMJ Implants, Inc. (partial implant): The majority of the unaddressed concerns related to inadequate and inaccurate study results and lack of patient history data. For example, there were concerns that the indications for use the sponsor cited in the device labeling were not supported by the clinical study. In addition, information about patients’ treatment history was not included in the study, so it was unknown whether patients tried more conservative treatments before receiving the device. The remaining unaddressed concerns related to other topics—unaddressed microbiology, packaging, and shelf-life issues and outstanding manufacturing inspection matters. Walter Lorenz: The unaddressed concern related to lack of patient history data, specifically that the sponsor generalized the clinical study results to all patients, even though patients in the study had varying clinical histories. 
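The per-device groupings above lend themselves to a simple tally. The sketch below is our own illustration, not a structure FDA uses; the category labels paraphrase those of the report's table 1:

```python
from collections import Counter

# Hypothetical grouping of the unaddressed concerns described above,
# keyed by device. Category labels paraphrase the report's table 1.
unaddressed_concerns = {
    "TMJ Concepts": [
        "inadequate/inaccurate study results",
    ],
    "TMJ Implants, Inc. (total implant)": [
        "microbiology/packaging/shelf life",
    ],
    "TMJ Implants, Inc. (partial implant)": [
        "inadequate/inaccurate study results",
        "lack of patient history data",
        "microbiology/packaging/shelf life",
        "manufacturing inspection matters",
    ],
    "Walter Lorenz": [
        "lack of patient history data",
    ],
}

# Tally how often each category of concern was left unaddressed.
category_counts = Counter(
    concern
    for concerns in unaddressed_concerns.values()
    for concern in concerns
)
```

Under this grouping, three categories each recur for two of the four devices, which reflects the pattern described above: gaps in study results and patient history dominate the unaddressed concerns.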
Although FDA’s review team and FDA management agreed that the TMJ Concepts and Walter Lorenz implants should be approved with conditions, there was disagreement among the review team and the two levels of management related to the approval of both TMJ Implants, Inc., devices. The review team recommended that the TMJ Implants, Inc., (total implant) application be considered not approvable. The team had concerns because it felt that the enrollment in the sponsor’s clinical study was too small to draw significant conclusions related to the safety and effectiveness of the device. In addition, the review team believed the indications for use of the device were unsupported. However, the first level of FDA management recommended that the device be approved because it has a role in the treatment of TMJ and muscle disorders. The second level of management agreed with this recommendation. In its approval decision, FDA management acknowledged that there were concerns about the quality and quantity of clinical data provided by the sponsor. However, it stated that either good engineering data or good clinical data was acceptable to approve a device—not necessarily both—and that it deemed the engineering data for the TMJ Implants, Inc., total implant to be satisfactory. Further, FDA management indicated that the clinical data were not expected to be of high quality because the sponsor was a small manufacturer, the data available at the time of approval did not indicate an extraordinary problem with the implanted devices, and the data provided appeared consistent and favorable. The total implant was approved with conditions to address the FDA review team’s concerns mentioned above. There was also conflict regarding the decision to approve the TMJ Implants, Inc., (partial implant) application. 
Although the second level of management ultimately approved the device for marketing with conditions, both the FDA review team and the first level of management found that there were insufficient data to assure that the device was safe and effective. The review team recommended that the device be considered not approvable. The first level of management agreed with this recommendation for the following reasons: The data were limited due to lack of patient follow-up. For example, the group of patients with 2-year and 3-year follow-up data in the sponsor’s clinical study was too small to draw significant conclusions about the device. Of approximately 100 patients with implants, only 29 completed the 24-month follow-up. Only 11 patients completed the intended 36-month follow-up. Outstanding concerns existed related to (1) questionable conduct by the sponsor in classifying and reporting adverse events, (2) lack of engineering testing to determine the long-term effect of the partial TMJ implant on the natural condyle, (3) unsupported indications for use of the device, and (4) lack of data on patients’ clinical and treatment history. While the second level of management recognized and agreed with the scientific concerns that had been raised, the sponsor was sent an approvable letter requiring minor application changes, such as revised device labeling, and the device was eventually approved. An internal memo outlining the second level of management’s approval decision stated that there was a compelling argument in favor of approving the device. It argued that there appeared to be a small group of patients, although poorly defined, for whom the device seemed to provide an option for relief of chronic pain. In addition, it noted that there did not appear to be a prohibitory risk associated with the device in patients who are appropriately educated about all treatment alternatives, their disorder, and the device, information that is provided in the implant’s labeling. 
However, the approval memo also stated that the decision to approve the partial implant did not imply that the previous concerns raised by the review team and first level of management related to the inadequacy of the data were reversed. Of these concerns, those related to engineering testing on the device’s effect on the natural condyle were addressed through conditions of approval; the others remained unaddressed. Upon the approval of the partial implant, two individuals—a member of the review team and an official from the first level of FDA management—wrote “respectful disagreement memos.” Their memos indicated that they did not agree with the second level of management’s decision to approve the TMJ Implants, Inc., (partial implant) application for marketing. These memos outlined concerns raised during the PMA process related to the safety and effectiveness of the device. The concerns highlighted in these memos were that (1) lack of patient follow-up in the clinical study potentially biased the results, and consequently, the sponsor’s claim that the implant resulted in decreased patient pain was unsupported, (2) the clinical study protocol lacked scientific rigor, and (3) outstanding questions remained related to the indications for using the device. In addition, a member of the review team told us that the conditions of approval did not mitigate the concerns she highlighted in her respectful disagreement memo. In order to evaluate how the sponsors complied with the conditions of approval, FDA received and reviewed the majority of the required annual reports from TMJ implant sponsors. However, the review team had not received most of the required annual reports from one sponsor. Of the annual reports the review team evaluated, some were incomplete, and FDA required sponsors to take additional actions to ensure compliance with conditions of approval. 
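The annual-report bookkeeping described here can be sketched as a small reconciliation. The sponsor names and totals come from this report, but the exact per-year lists are our reconstruction and should be treated as illustrative:

```python
# Reconstruction of the annual-report reconciliation described in this
# report. Sponsor names are real; the per-year lists are our own
# illustration of the counts the report cites (e.g., TMJ Concepts
# submitted only its 2000 and 2004 reports of the seven due 2000-2006).
expected = {
    "TMJ Concepts": list(range(2000, 2007)),                       # 7 due
    "TMJ Implants, Inc. (total implant)": list(range(2002, 2007)),
    "TMJ Implants, Inc. (partial implant)": list(range(2002, 2007)),
    "Walter Lorenz": [2006],
}
received = {
    "TMJ Concepts": [2000, 2004],
    "TMJ Implants, Inc. (total implant)": list(range(2002, 2007)),
    "TMJ Implants, Inc. (partial implant)": list(range(2002, 2007)),
    "Walter Lorenz": [2006],
}

# Which required reports never arrived, per sponsor.
missing = {
    sponsor: sorted(set(due) - set(received[sponsor]))
    for sponsor, due in expected.items()
}

total_due = sum(len(years) for years in expected.values())
total_received = sum(len(years) for years in received.values())
```

Under this reconstruction, 13 of the 18 required reports were on hand, and all 5 missing reports belonged to a single sponsor, consistent with the figures reported here.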
In addition, the FDA review team had concerns about one sponsor’s—TMJ Implants, Inc.—annual reports. FDA found that these reports lacked sufficient information, which prevented the agency from monitoring the devices’ safety and effectiveness. This eventually led FDA to investigate the sponsor, resulting in the subsequent filing of an administrative complaint for civil monetary penalties for the company’s failure to file certain adverse event reports with FDA. FDA received and reviewed all required annual reports for TMJ Implants, Inc., total and partial implants between 2002 and 2006 and the Walter Lorenz implant in 2006. However, the review team was missing five of seven required annual reports between 2000 and 2006 from TMJ Concepts. It was not until we requested to review these reports that FDA contacted the sponsor to obtain the missing information. In addition, FDA officials told us that they are developing an improved postmarket surveillance effort to assist sponsors with annual report submission. As part of this effort, FDA recently issued draft guidance on October 26, 2006, which outlines FDA’s recommendations for submitting annual reports. Though many annual reports were missing from TMJ Concepts, FDA was able to review the two annual reports submitted by the sponsor in 2000 and 2004. For both reports, TMJ Concepts included information related to a number of conditions of approval, such as providing data on its postmarket study and including a patient quality of life question in that study. In 2000, the sponsor did not comply with the condition of approval to separate data by patients’ clinical histories, but did complete this in its 2004 annual report. Therefore, in 2004, TMJ Concepts addressed all conditions of approval except one—submitting annual reports each year. 
Although not all conditions of approval were met and FDA was not able to review 5 years of annual reports, FDA found that the 2000 and 2004 annual reports provided adequate data, and no additional information was required of the sponsor for those two reports. FDA evaluated information contained in the 13 annual reports it received and found that 7 reports—6 from TMJ Implants, Inc., (3 for the total joint implant and 3 for the partial joint implant) and 1 from Walter Lorenz—did not provide sufficient information to assess their compliance with conditions of approval. For 1 of the 7 annual reports, FDA directed TMJ Implants, Inc., to submit new information about changes to the approved labeling and to the manufacturing processes for its total implant. FDA sent deficiency letters to the sponsors regarding the other 6 annual reports. These deficiency letters required the sponsors to address questions regarding the lack of certain data that relate to the safety and effectiveness of the devices, including patient history, patient follow-up, and adverse events. For example, in its 2006 annual report, Walter Lorenz was required to submit data on its postmarket clinical study. During the review of these data, the FDA review team identified concerns about data that were included in the report and sent a deficiency letter to the sponsor to resolve this issue. FDA officials discussed the deficiency letter with the sponsor and are waiting for a response. After receiving the sponsor’s annual reports for its total and partial TMJ implants, FDA took further steps to obtain compliance from TMJ Implants, Inc., which had not responded adequately to FDA’s 2002 deficiency letter requesting additional information. Specifically, in 2002 FDA indicated that TMJ Implants, Inc., had not followed up with the required number of patients during its postmarket study. 
Also, the sponsor was not submitting adverse events, which it described in its annual reports, to FDA’s Manufacturer and User Facility Device Experience Database (MAUDE). The sponsor reported that the reason for the implant removals was not specifically due to the failure of the implant and therefore concluded that they did not need to be reported as adverse events. However, after reviewing the 2003 annual reports where there was still a lack of adverse event reporting, FDA issued a deficiency letter. This letter informed the sponsor that all removed implants should be reported to the MAUDE system. In addition, supplemental data were required to be submitted for the conditions of approval related to patient follow-up and adverse event reporting. After FDA’s review of the sponsor’s 2004 annual reports, the outstanding concerns from the 2002 and 2003 reports remained. For example, issues regarding lack of patient follow-up were unresolved. At the time of the 2004 annual reports, the sponsor submitted data for 75 out of a total of 183 patients for whom data should have been provided. The sponsor maintained that the events related to the removed devices were not caused by device failure or function and concluded that they did not require reporting to FDA. Subsequently, FDA took action on the 2004 annual reports by sending another deficiency letter to the sponsor. In addition, FDA required that the sponsor submit a complete account of all patients to clarify its analysis of patients who were lost to follow-up. According to FDA officials, the sponsor’s response to these deficiency letters did not resolve the outstanding concerns. As a result, the review team raised the concerns with FDA’s Office of Compliance and the sponsor was inspected from July 29 through August 11, 2003. During its inspection, FDA found that the sponsor’s devices may have malfunctioned or caused or contributed to serious injuries. 
The inspection results also showed that these adverse events had not been reported by the sponsor as required. In response to these findings, FDA issued a warning letter on February 24, 2004, requiring the sponsor to submit written medical device reports for specific adverse events detailed in the letter within 15 working days of receipt. When the sponsor did not adequately respond to the warning letter, FDA filed an administrative complaint on July 14, 2005, for civil monetary penalties, which resulted in a decision from an administrative law judge in favor of FDA on July 6, 2007. A separate decision is expected on the amount of the penalties to be assessed, after which either side may appeal. FDA’s Office of Regulatory Affairs instructed the review team not to pursue any deficiencies found in the sponsor’s annual reports until the matter is resolved. Therefore, the review team has reviewed TMJ Implants, Inc.’s, 2005 and 2006 annual reports, but decisions on the sponsor’s compliance with the conditions of approval are pending. In commenting on a draft of this report, HHS provided clarification on the postmarket requirements that apply to approved devices and updated information concerning the administrative complaint for civil monetary penalties. We revised our report to reflect these comments. It also provided technical comments, which we incorporated, as appropriate. HHS’s comments appear in appendix II. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. At that time we will send copies of this report to the Secretary of HHS, the Commissioner of the FDA, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staffs have any questions concerning this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. While the Food and Drug Administration (FDA) addressed most concerns for each of the four temporomandibular joint (TMJ) implants we reviewed, we identified a number of concerns that were left unaddressed—concerns that were not offset or countered by a condition of approval or by FDA correspondence with the sponsor—upon approval. These unaddressed concerns fell into two of the four categories of concerns we identified previously: study protocol and other concerns. Table 2 lists the unaddressed concerns using the categories we established in table 1. In addition to the contact named above, Geraldine Redican-Bigott, Assistant Director; Deirdre Brown; Cathy Hamann; Julian Klazkin; Michaela M. Monaghan; and Sari B. Shuman made key contributions to this report.
|
It is estimated that over 10 million people in the United States suffer from jaw joint and muscle disorders. Artificial temporomandibular joint (TMJ) implants have been used to replace the jaw joint in some patients in an effort to decrease pain and increase jaw function. The safety and effectiveness of these implants, like those of other medical devices, are overseen by the Food and Drug Administration (FDA), an agency within the Department of Health and Human Services (HHS). Two implants used in the 1970s and 1980s that were later removed from the market caused severe side effects for some patients. In 1998, FDA began to require certain TMJ implant manufacturers sponsoring these devices to demonstrate the implants' safety and effectiveness before receiving approval. Since 1998, four TMJ implants from three sponsors have been approved. In response to your request, GAO described (1) the types of concerns raised by FDA and how it addressed these concerns for the implants approved since 1998 and (2) how FDA has monitored sponsors' compliance with conditions of approval. GAO examined documentation related to the four TMJ implants approved by FDA since 1998 and sponsors' annual reports, which FDA uses to monitor compliance with conditions of approval. GAO also interviewed FDA officials, TMJ implant sponsors, and patient advocacy groups. FDA officials raised concerns during the approval process that were similar for all four TMJ implants. These concerns generally involved the adequacy of the sponsors' clinical study protocols, patient follow-up, engineering testing, and other matters, such as device labeling. FDA addressed many, but not all, concerns upon approval. Some concerns were addressed by obtaining additional information from sponsors to clarify and supplement data contained in their device applications before approval. 
Other concerns were addressed when FDA approved the implants but required sponsors to comply with certain conditions of approval, such as continuing clinical studies postmarket and collecting patient data. Because FDA staff, who review the device applications, and FDA management, who approve the devices for marketing, held differing views as to whether the implants' health benefits outweighed their risks, they did not agree on the approval decisions for two of the four TMJ implants. FDA management acknowledged that the concerns raised about the implants were legitimate. However, they ultimately concluded that the benefits provided by these two devices outweighed the concerns and approved both devices to help patients obtain relief from chronic pain. FDA monitored sponsors' compliance with conditions of approval by evaluating information contained in their annual reports. FDA often required additional actions by the sponsors to resolve questions that were raised through its review of these reports. However, GAO found that not all annual reports were received by FDA. At the time GAO conducted its work, FDA had only received 13 of 18 required reports. One implant sponsor did not submit 5 of 7 required annual reports. FDA has requested these reports and has issued draft guidance on annual report submissions to all medical device sponsors. In addition, when FDA reviewed the available annual reports to determine if sponsors were complying with conditions of approval, many of the submitted reports did not provide sufficient information to assess compliance. FDA required these TMJ implant sponsors to provide additional information to address this lack of sufficient information. In most instances, once FDA received additional information from the sponsors, the annual reports were considered adequate. 
However, one sponsor submitted several annual reports for both of its devices that FDA said lacked sufficient information regarding patient follow-up and also underreported problems experienced by patients associated with the devices. FDA notified the sponsor that it must address these concerns, but the sponsor repeatedly provided inadequate responses. This situation ultimately led FDA to inspect the sponsor's records and file an administrative complaint for civil monetary penalties against the sponsor for failure to file certain reports with FDA. On July 6, 2007, an administrative law judge ruled in favor of FDA. In commenting on a draft of this report, HHS provided clarification on postmarket requirements for approved devices and updated information on the administrative complaint for civil monetary penalties.
|
In fiscal year 1996, the Forest Service reported that, for the first time, the expenses associated with preparing and administering timber sales exceeded the receipts generated by the sale of the timber. This loss heightened the interest in the financial status and spending practices of the Forest Service. After we reported that indirect expenditures had nearly doubled in 5 years, legislation was introduced in the House of Representatives (H.R. 4149) to improve the fiscal accountability of the Forest Service through an improved financial accounting system. This legislation would first limit, and then eliminate, all indirect costs that could be charged to the funds. In addition, H.R. 4193, the appropriations bill for the Department of the Interior and related agencies for fiscal year 1999, contains provisions that would limit indirect costs to 25 percent of total costs for the Salvage Sale Fund and eliminate them entirely for the K-V Fund. The Forest Service separates indirect costs into three main categories: line management, common services, and program support (see fig. 1). Within each of these categories, its accounting system further divides such costs into two subcategories, differentiated on the basis of whether the cost can readily be identified with a specific project or function. For example, personnel support provided to the timber program can be readily identified with a single program, while a management position providing leadership for many programs (e.g., a forest supervisor) cannot. Costs that can be readily identified with a project or program are called “benefiting function” costs; costs that cannot be so identified are called “general administration” costs. Line management: Covers costs related to line officers and their identified staff. Line officers include district rangers, forest supervisors, regional foresters, and specifically named positions. 
Costs that can be assigned include salary, travel, training, vehicle use, and secretarial support costs. Common services: Covers nonpersonnel costs associated with providing space and a working environment for employees. It includes such costs as those for rent, utilities, communications, radio, office and computer equipment, mail and postage, office supplies, and forms. Program support: Covers costs to coordinate, manage, and execute business activities, community involvement, and common service activities. It includes such costs as those for salaries, travel, and vehicle use for employees involved with coordinating and managing program support. The Forest Service derives its funding from two main sources—congressional appropriations and trust and permanent funds such as the five we reviewed. Both sources of funding are used to pay for relevant indirect costs, but the funding mechanisms operate somewhat differently for each source. When indirect costs are charged to appropriations, benefiting function costs are charged to the appropriations made specifically for a program, while general administration costs are charged to a separate budget line item that covers general administration costs for all programs. When indirect costs are charged to a trust or permanent fund, both the general administration and benefiting function costs are paid for by the fund. Since 1995, the Forest Service’s guidance has called for offices to separate the accounting of these costs into the two subcategories of general administration and benefiting function, although doing so is not mandatory. The Forest Service identified four main factors that have contributed to the increase in indirect expenditures. However, year-to-year and office-to-office differences in the accounting system hamper any effort to determine the effect of any of these factors. 
Neither we nor Forest Service officials can isolate the effect of these factors from the effects of inconsistencies in the way the accounting system was implemented. According to the Forest Service, the four factors contributing most to indirect cost increases during the 5-year review period were the implementation of the emergency salvage timber sale program, employee buyouts, the late assignment of costs, and a new computer system. In July 1995, the Congress established the emergency salvage timber sale program, commonly called the salvage rider. It was intended to increase the amount of salvage timber offered and sold by instituting an expedited sale process. As a result, regions with a large need for salvage sales (among the regions we reviewed, the Pacific Southwest and Pacific Northwest regions) experienced a sharp increase in both direct and indirect expenditures to the Salvage Sale Fund. The rider ended on December 31, 1996, but indirect expenditures continued to increase in two of the four regions we reviewed through fiscal year 1997. A regional official attributed this continued increase to the time lag between when the direct “on-the-ground” work ended and when the work necessary to administer and close the contracts and finish other administrative tasks was completed. The Federal Workforce Restructuring Act of 1994 (Pub.L. 103-226) authorized executive agencies, including the Forest Service, to conduct a buyout of employees who met certain criteria and wanted to leave the agency. A buyout incentive payment of up to $25,000 per employee was to be paid from appropriations or funds available to pay the employee. The act also required agencies to pay the Office of Personnel Management: (1) for fiscal years 1994 and 1995, 9 percent of the basic pay for each employee that left and (2) for fiscal years 1995 through 1998, $80 for each remaining permanent employee (termed a “head tax”). 
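The act's fee structure reduces to simple arithmetic. The sketch below uses hypothetical staffing figures chosen only to show the mechanics, combining the two assessments as they would have overlapped in fiscal year 1995:

```python
# Buyout cost mechanics under the Federal Workforce Restructuring Act
# of 1994, as described above. All staffing figures are hypothetical.
BUYOUT_CAP = 25_000  # maximum incentive payment per departing employee
OPM_RATE = 0.09      # FY1994-95: 9 percent of each departing employee's basic pay
HEAD_TAX = 80        # FY1995-98: $80 per remaining permanent employee

def buyout_cost(departing_basic_pay, remaining_employees):
    """One-year cost charged to the paying appropriation or fund,
    assuming each departing employee receives the maximum incentive."""
    incentives = BUYOUT_CAP * len(departing_basic_pay)
    opm_payment = OPM_RATE * sum(departing_basic_pay)
    head_tax = HEAD_TAX * remaining_employees
    return incentives + opm_payment + head_tax

# Example: 10 employees at $50,000 basic pay leave; 500 remain.
# Incentives $250,000 + OPM payment $45,000 + head tax $40,000 = $335,000.
cost = buyout_cost([50_000] * 10, remaining_employees=500)
```

Because the head tax falls on every remaining permanent employee, offices with large staffs in indirect positions bear proportionally larger indirect charges regardless of how many employees actually left.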
Consequently, Forest Service regions with large staffs in positions classified as indirect have experienced increases in indirect expenditures. For example, for the 5 years we reviewed, the Pacific Northwest Region had more employees—some of them in indirect positions—than any other region and accounted for almost half of the $2.5 million charged to the funds since fiscal year 1994 for the head tax in the offices we reviewed. Similarly, indirect expenditures for the Salvage Sale Fund at the Washington Office increased almost $1.1 million between fiscal years 1994 and 1995. Of this amount, the Forest Service’s accounting records show that $211,423 was the result of the head tax. For fiscal year 1995, Forest Service officials stated that the National Finance Center requested that the agency account centrally for the head tax because the Center’s computer system could not appropriately account for it. As a result, the Washington Office funded the entire $2.6 million assessed to the agency in that year, which included the $211,423 indirect cost charged to the Salvage Sale Fund. The Pacific Southwest Region experienced an increase in indirect expenditures charged to the K-V, Brush Disposal, and Salvage Sale funds in fiscal year 1997. According to regional office officials, this increase occurred because the Washington Office billed the region for about $5 million in charges for rent, telephones, and unemployment and disability payments that the region had incurred in fiscal years 1992, 1993, and 1994. The region had not expected to be billed for these costs at such a late date, so it had dismissed the associated obligations for those years. Other offices were also affected by this late assignment of costs. The regions and most funds experienced a rise in expenditures in fiscal years 1995, 1996, and 1997 due to a modernization of the Forest Service’s computer system. 
Agency officials stated that the software license fee contract associated with this modernization is funded centrally through the Washington Office. For the funds reviewed, this license fee increased the Washington Office’s indirect expenditures by $762,000 in fiscal year 1996 and $885,000 in fiscal year 1997. Regions are assessed for their share of the hardware and related technical support costs. Between fiscal years 1993 and 1997, charges for computer-related indirect expenditures to the Salvage Sale Fund in the Pacific Northwest Region, for example, increased from $22,000 to $556,000. Although Forest Service officials could broadly quantify the rise in indirect expenditures associated with the four major factors just discussed, they could not separate what increases were specifically attributable to these factors from those caused by inconsistencies in the way indirect costs are recorded. During the 5-year review period, definitions of indirect costs changed, and offices often decided how, when, and whether to implement guidance issued by the Washington Office. Changing definitions and inconsistent implementation of accounting system guidance created data that were not comparable from year to year or office to office. In order to determine why costs increased, it is necessary to have data that are comparable from year to year. However, over our 5-year review period, the instructions explaining how to account for indirect costs changed several times. For example, agency officials stated that prior to fiscal year 1994, there was no central and specific policy on how rent, utilities, and communications costs were to be charged. While the majority of the cost for rent charged to the funds was classified as a direct cost, some offices classified rent as an indirect cost, and still others classified it as both. The Forest Service recognized that a better system was needed to track indirect costs. 
Toward this end, in fiscal year 1994 the Forest Service established the common services category for rent, utilities, and communications costs. However, the impact of this new account cannot be clearly measured. For example, in fiscal year 1993, rent charged to the Salvage Sale Fund in the Southwestern Region included $12,000 classified as direct and $1,000 classified as indirect. In fiscal year 1994, the Salvage Sale Fund had no rent classified as direct and $23,000 classified as indirect. Although rent costs charged to this fund nearly doubled, it is unclear how much of this change was attributable to an actual increase in rent and how much was attributable to the use of the new common services account. Other changes were brought about by policy decisions. For example, in fiscal year 1993, indirect expenditures as a percentage of total expenditures were less than 1 percent for the Reforestation Trust Fund. After determining that not all regions were assessing this fund for general administration costs, the Washington Office directed that the fund be assessed starting in fiscal year 1994. Following this directive, national indirect expenditures for this fund jumped to 13 percent of its total expenditures in fiscal year 1994. However, a Washington Office official stated that the Office inadvertently excluded this fund from its own general administration assessments until fiscal year 1996. Again, we cannot determine the extent to which the increase reflects a cost increase or simply the assessment of the fund for general administration costs. Even when provided with direction from the Washington Office, individual offices will often determine how, when, and whether to implement various aspects of the current accounting system for recording indirect expenditures. This independence adversely impacted the Forest Service’s ability to provide us with the specific amounts associated with the reasons given for cost increases. 
For example, although instructed to begin assessing the Reforestation Trust Fund for general administration costs in fiscal year 1994, the Rocky Mountain and Southwestern regions did not start doing so until fiscal year 1995. Such choices produce inconsistencies that affect the comparability of data and the ability to isolate specific reasons for expenditure increases. To similar effect, individual offices make many of the decisions regarding how to assess and allocate indirect costs to the funds and whether to classify certain costs as direct or indirect, as these examples show: The Pacific Southwest Region lets its forests decide how to allocate unemployment costs, according to an official there. Some forests consider these costs direct, and others consider them indirect. As a result, the expenditures reported by the region contain amounts that are classified differently from forest to forest. In the Rocky Mountain Region, almost no indirect expenditures are charged to the Cooperative Work—Other Fund because, according to a regional official, some managers are reluctant to burden the fund’s contributors—such as commercial users of forest roads—with indirect costs. The regional supplement to the Forest Service Manual supports this decision by saying that “Contributors do not need to be assessed for overhead charges if the contributors are unwilling to accept them.” Since fiscal year 1995, the Rocky Mountain Region has classified rent charged to the Brush Disposal Fund entirely as an indirect cost, whereas the Southwestern Region has classified rent charged to the same fund both as a direct and an indirect cost. In the Pacific Southwest Region, an official stated that expenses for timber resource clerks in some forests are classified as a direct cost to the funds but in other forests are classified as an indirect cost. 
At the Washington Office, officials told us that the majority of the increase in indirect expenditures charged to the Salvage Sale Fund in fiscal years 1996 and 1997 occurred in order to charge the correct amount and to compensate for what were determined to be underassessments for general administration during fiscal years 1993-95. Although the Forest Service can trace some of the increases in indirect costs to the four major factors discussed above, changing definitions and inconsistent implementation of policies hamper the agency’s efforts to explain all the increases. For example, in the Pacific Southwest Region, four indirect expenditure categories in the Salvage Sale Fund increased $2.5 million from fiscal year 1995 to 1996. Financial records show that indirect automated data processing (ADP) expenditures rose by $221,000; rent by $103,000; salaries by $251,000; and materials, supplies, and other services by $1.9 million. While the increase in ADP expenditures might be explained by the additional expenditures associated with the modernization of the computer system, regional officials cannot isolate how much of the rise in rent or salaries was attributable to factors such as the salvage sale rider rather than to policy changes. The explanation of why the region had such a sharp increase in materials, supplies, and other services illustrates another reason why indirect cost increases are so difficult to isolate. Agency officials explained that this increase occurred because it was the Salvage Sale Fund’s turn to pay for the “pooled” general administration assessment for the Forest Service’s contract with the National Finance Center. Forest Service regions often “pool” the assessment to simplify budgeting procedures. 
For example, instead of assessing each fund individually for its share of costs from the National Finance Center, each fund places its allotted share for general administration into a pool, with the entire cost then being shown as charged against one fund instead of five. In this case, it was the Salvage Sale Fund’s year to bear the pooled amount for the National Finance Center. While pooling may simplify budgeting procedures, it has hindered efforts to isolate and explain individual cost increases. Overall, the Forest Service reduced its permanent staff by 14 percent during the 5-year period of our review, and individual offices implemented additional measures designed to reduce costs. Most of these efforts have been aimed at reducing costs generally and have not been targeted specifically at indirect expenditures. The congressional appropriations committees also reduced the budget line item for general administration during the period, but one way the Forest Service responded to the decrease was by reclassifying some general administration activities as benefiting function activities. The regions actively participated in the Forest Service’s national downsizing effort. In the four regions we reviewed, the downsizing resulted in staff reductions ranging from 8 to 23 percent. However, during the 5-year period, indirect salary expenditures charged to the five funds dropped appreciably only in the Rocky Mountain Region. They decreased slightly in the Pacific Southwest and Pacific Northwest regions and rose slightly in the Southwestern Region. The Washington Office also saw an increase. Because of the combined effect of the other factors already discussed, we cannot isolate the extent of the impact that downsizing had on indirect expenditures charged to these funds. Regions and forests also pursued other measures designed to reduce both direct and indirect costs. These included closing offices, consolidating offices, and centralizing administrative functions. 
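The pooling mechanism described above can be illustrated with a minimal sketch; the fund shares and the $1 million National Finance Center (NFC) cost below are assumed numbers for illustration, not figures from our review:

```python
# Illustration of "pooled" general administration assessments. The fund
# shares and the $1 million NFC contract cost are assumed for this sketch.

FUNDS = ["K-V", "Brush Disposal", "Salvage Sale", "Reforestation", "CWO"]

def assess_individually(nfc_cost, shares):
    """Charge each fund its own share of the NFC contract cost."""
    return {fund: nfc_cost * shares[fund] for fund in FUNDS}

def assess_pooled(nfc_cost, payer):
    """Show the entire cost as charged against one fund (its 'turn' to pay)."""
    return {fund: nfc_cost if fund == payer else 0 for fund in FUNDS}

shares = {"K-V": 0.30, "Brush Disposal": 0.10, "Salvage Sale": 0.25,
          "Reforestation": 0.15, "CWO": 0.20}  # assumed allocation

individual = assess_individually(1_000_000, shares)
pooled = assess_pooled(1_000_000, payer="Salvage Sale")

# The total charged is the same either way, but under pooling the Salvage
# Sale Fund's recorded indirect expenditures quadruple with no change in
# underlying cost, which is why pooling hinders trend analysis.
print(individual["Salvage Sale"])  # 250000.0
print(pooled["Salvage Sale"])      # 1000000
```

The sketch shows why a fund's reported indirect expenditures can spike in the year it happens to bear the pooled charge even though total costs are unchanged.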
During our 5-year review period, a total of five district offices were closed in the four regions we visited. Estimates of cumulative savings from these five closures totaled about $1.6 million, but the savings were not identified as direct or indirect costs. Regional officials stated that closing offices is a very effective way to reduce costs, but they consider it a time-consuming and complicated process. A Washington Office official noted that for fiscal years 1996 and 1997, provisions in the appropriations law prohibited the agency from closing offices without specific congressional approval. He also stated that even before being submitted to the Congress for approval, the proposed closures must first be approved by the Washington Office. The whole approval process can take 2 years or more to complete. Compared with office closures, office consolidations and the centralization of certain administrative functions were more commonly used in an effort to reduce total costs. While the Washington Office must approve office consolidations, the process is less complicated than the one for closures. So far in fiscal year 1998, the Washington Office has approved 15 ranger district consolidations involving 31 district offices in the regions we reviewed. During our 5-year review period, examples of specific consolidations and efforts to centralize administrative functions included these: For the Black Hills National Forest, we were told that three district ranger positions were eliminated when six districts were consolidated. The remaining three district rangers oversee the six offices. According to a regional official, in fiscal year 1996 the Southwestern Region received approval to consolidate two districts in one forest. Both offices would remain open, and they would share a district ranger. According to a Rocky Mountain Regional official, in fiscal year 1996 the region organized its forests into three administrative zones. 
By combining 16 units into three zones and thereby centralizing such administrative processing functions as contracting and procurement, the region was able to reduce administrative costs. According to a regional official, in fiscal year 1994 the Pacific Southwest Region instituted its “Excellence in Administrative Organization” project in an effort to control indirect costs. The region was divided into five provinces, and certain types of administrative operations, such as accounting, budgeting, and contracting, were centralized. In the Southwestern Region, as a result of consolidation, we were told that three forests share a contracting officer and personnel staff. Also, the region no longer has its own aircraft safety officer; it now shares one with the Rocky Mountain Region. The Senate and House appropriations committees, in their committee and conference reports, recommended a specified amount each year for the general administration budget line item within the National Forest System appropriation. This budget line item applies only to general administration activities associated with appropriations and cannot be used to fund general administration activities applicable to the trust and permanent funds. Between fiscal year 1993 and fiscal year 1997, the committees reduced the recommended amount for general administration costs by about 14 percent. However, this reduction did not result in a corresponding decrease in indirect costs. One way that the Forest Service complied with the reduction was by reclassifying costs previously considered general administration costs to other indirect cost categories. In doing so, the agency also implemented recommendations made by the National Forest System General Administration Task Force in a 1992 report that was provided to the appropriations committees, and the Forest Service described the reclassifications in the explanatory notes of its budget. 
Reclassifications included these: In fiscal year 1993, for each forest supervisor’s office, the expenses for the forest supervisor, deputies, and their secretarial support were classified as general administration costs. By 1995, only the costs for the forest supervisor and one secretary could be charged as general administration. The other positions were reclassified and are now charged to other indirect cost categories. In fiscal year 1993, up to five district ranger positions were included in general administration. By fiscal year 1997, such district support could no longer be included in general administration and had been reclassified. The general administration budget line item cannot be used to fund general administration activities in the permanent and trust funds; therefore, the Forest Service has developed a method of assessing the funds for such charges. The method used results in a percentage reflecting the portion of the budget that general administration represents. If the general administration budget line item is 12 percent of the total budget to which it applies, then the Forest Service limits the general administration costs that may be assessed to the funds to 12 percent of each fund’s annual program level. Amounts that can be charged for other indirect cost categories are limited by budget constraints. We were told by agency officials that, in practice, many forests have chosen not to separately identify general administration costs from other indirect costs charged to the funds. Agency officials stated that forest officials found the distinction confusing and unnecessary because all the costs charged to a fund are paid for by that fund. In an effort to reduce overall costs, the Forest Service has closed and consolidated offices, downsized, and centralized certain administrative functions. However, these measures were not enough to keep indirect expenditures from almost doubling in 5 years. 
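The assessment method described above can be sketched as a simple calculation; only the 12-percent relationship comes from our description of the method, and the budget and program-level figures are hypothetical:

```python
# Sketch of the general administration (GA) assessment cap described above.
# The dollar figures are assumed for illustration.

def ga_assessment_cap(ga_line_item, applicable_budget, fund_program_level):
    """Maximum GA cost assessable to a permanent or trust fund.

    The cap applies the same percentage to the fund's annual program level
    as the GA budget line item represents of the budget it applies to.
    """
    ga_rate = ga_line_item / applicable_budget
    return ga_rate * fund_program_level

# If GA is 12 percent of the applicable budget ($120 of $1,000, in assumed
# units), a fund with a $5 million annual program level may be assessed at
# most $600,000 for general administration.
cap = ga_assessment_cap(120.0, 1_000.0, 5_000_000)
print(round(cap))  # 600000
```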
Because individual offices will often decide how to account for indirect costs, the accounting system will not yield the data necessary to measure the savings in indirect costs resulting from these actions. This condition is made worse by definitional and other shifts that allow costs simply to be reclassified. Only after consistent and reliable indirect cost data are produced can trends and comparisons be studied and informed decisions made. An essential first step for the Forest Service in controlling indirect costs is to know clearly what these costs are from year to year and office to office. A starting point for this effort involves establishing clear definitions for indirect costs and applying them consistently over time. In this regard, the Forest Service can be helped by a recent addition to the federal financial accounting standards. In July 1995, the Financial Accounting Standards Advisory Board, the group that recommends accounting principles for the federal government, released Statement of Federal Financial Accounting Standard (SFFAS) No. 4, Managerial Cost Accounting Concepts and Standards for the Federal Government. Effective for federal agencies starting with fiscal year 1998, this standard is “aimed at providing reliable and timely information on the full cost of federal programs, their activities, and outputs.” Although the Forest Service was required to use the principles set forth in SFFAS No. 4 on October 1, 1997, we were told by a Washington Office official that the agency currently has a team discussing the possibility of applying the principles to its existing accounting system. Properly implementing this standard will go a long way towards providing cost data upon which informed decisions about reducing costs can be based. Of necessity, this endeavor will mean some changes in the way the Forest Service classifies costs as direct or indirect, as the following examples show. Unemployment and Disability Costs. 
About 59 percent of the total unemployment and disability costs charged to the five funds we reviewed were indirect, totaling more than $16 million over the 5 years. If an employee normally charges his or her time directly, then we believe that SFFAS No. 4 requires that associated unemployment or disability costs should also be charged directly. Because 76 percent of all salary costs charged to the funds during the past 5 years were classified as direct, proper implementation of SFFAS No. 4 should result in a substantial lowering of the unemployment and disability costs classified as indirect. ADP Costs. In the regions we reviewed, 71 percent of the ADP costs charged to the five funds were classified as indirect—a total of almost $10 million in 5 years. Again, under SFFAS No. 4, we believe such costs would be classified as direct to the degree that the employees associated with the ADP costs normally charge their time that way. As with the assignment of unemployment and disability costs, we would expect ADP costs to mirror those of salaries and to be classified as direct whenever people to whom they are assigned charge their time directly. Classifying these types of costs as indirect overstates indirect costs overall and understates direct costs. Just as important as clarifying how costs should be classified, however, is ensuring that Forest Service offices apply these classifications consistently. If individual offices continue to vary in their decisions about how, when, and whether to implement accounting policies and definitions, the data produced will continue to have limited validity, and the Forest Service will have little reliable information upon which to judge whether indirect costs are truly rising or falling, let alone why. Because centralization represents such a change from the Forest Service’s approach of giving great latitude to local offices, oversight by Forest Service headquarters and regional officials will be crucial to this effort. 
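The classification principle we describe, in which unemployment, disability, and ADP costs mirror how the associated employee's salary is charged, can be sketched as follows; the Employee type and example roles are invented for illustration, not drawn from Forest Service records:

```python
# Minimal sketch of the cost-classification principle as we read SFFAS No. 4:
# costs associated with an employee (unemployment, disability, ADP support)
# are classified the same way the employee's time is normally charged.
from dataclasses import dataclass

@dataclass
class Employee:
    role: str
    time_charged: str  # how salary is normally charged: "direct" or "indirect"

def classify_associated_cost(employee: Employee) -> str:
    """An associated cost mirrors the employee's salary classification."""
    return employee.time_charged

crew = Employee("timber sale crew member", "direct")
analyst = Employee("regional budget analyst", "indirect")

print(classify_associated_cost(crew))     # direct
print(classify_associated_cost(analyst))  # indirect
```

Under this rule, the share of unemployment, disability, and ADP costs classified as direct would track the 76-percent direct share of salaries rather than being set independently by each office.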
Over the 5-year period we reviewed, the Forest Service took many actions to reduce costs, but indirect expenditures charged to the five funds reviewed increased nonetheless. Thus far, congressional attempts to affect indirect costs (through appropriations committees’ reducing the budget line item recommended for general administration) also appear to provide little assurance that such costs will actually be reduced. Instead, such costs have often been redefined into other indirect cost categories. However, incorporating the principles set forth in the Statement of Federal Financial Accounting Standard No. 4 would go a long way towards producing cost data that are consistent and reliable. But even the best guidance will not produce consistent and reliable data if it is not uniformly implemented by all offices. Solving these accounting system problems is an essential first step in controlling indirect expenditures. Once these problems are solved and indirect costs from year to year and office to office are clearly known, there is the opportunity for informed decisions about indirect costs and how to reduce them. At that point, approaches could include requiring the Forest Service to reduce indirect costs by a set amount and to report on what it has done or plans to do to achieve that reduction. To ensure that consistent and reliable cost data are available upon which to base management decisions and monitor trends, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take the following actions: Incorporate the Statement of Federal Financial Accounting Standards No. 4 into the Forest Service’s cost accounting system. Ensure that all offices consistently implement guidance with respect to accounting for indirect costs and hold the offices accountable by following up to make sure that the standards are being consistently used. We provided a draft of this report to the Forest Service for review and comment. 
The Forest Service’s letter commenting on the report (see app. IV) states that the agency concurs with our recommendations and that it is committed to developing definitions of indirect costs to be applied on a national basis. As we arranged with your offices, unless you publicly announce its contents earlier, we plan no further distributions of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of Agriculture, the Chief of the Forest Service, and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix V. If you or your staff have any questions or wish to discuss this material further, please call me at (206) 287-4810. A permanent appropriation that uses deposits from timber purchasers to dispose of brush and other debris resulting from timber harvesting. It was authorized by the Act of August 11, 1916, ch. 313, 39 Stat. 446, as amended. (16 U.S.C. 490) A trust fund that uses deposits from “cooperators”—commercial users of the forest road system—for the construction, reconstruction, and maintenance of roads, trails, and other improvements. It was authorized beginning with the Act of June 30, 1914, ch. 131, 38 Stat. 415, as amended. (16 U.S.C. 498) A trust fund that uses deposits made by timber purchasers to reforest timber sale areas. In addition to planting, these deposits may also be used for eliminating unwanted vegetation on lands cut over by the purchasers and for protecting and improving the future productivity of the renewable resources on forest land in the sale areas, including sale area improvement operations, maintenance, construction, reforestation, and wildlife habitat management. The fund was authorized by the Act of June 9, 1930, ch. 416, 46 Stat. 527, as amended. (16 U.S.C. 
576-576b) A trust fund that uses tariffs on imports of solid wood products to prevent a backlog in reforestation and timber stand improvement work. It was authorized by sec. 303 of the Recreational Boating Safety and Facilities Improvement Act of 1980, Pub.L. 96-451, 94 Stat. 1983, as amended. (16 U.S.C. 1606a) A permanent appropriation that uses receipts generated by the sale of salvage timber to prepare and administer future salvage sales. It was authorized by section 14(h) of the National Forest Management Act of 1976, Pub.L. 94-588, 90 Stat. 2949. (16 U.S.C. 472a (h)) Given the heightened interest in the financial status and spending habits of the Forest Service, you asked us to provide data on indirect expenditures charged to five Forest Service funds. We agreed to provide this information in two phases. In phase one, we provided information on the amount of indirect expenditures charged to these five funds between fiscal years 1993 and 1997. This second phase has the three objectives of identifying (1) the reasons why indirect costs rose, (2) actions taken by the Forest Service and others to control these expenditures, and (3) other actions that may help the Forest Service control such expenditures in the future. As agreed, we concentrated our detailed review on four regions and the Washington Office. Because the five funds are mainly timber-related, we chose the Pacific Southwest and Pacific Northwest regions because they have large timber programs. We chose the Rocky Mountain Region because it had lower indirect expenditures than any other region. We selected the Southwestern Region because it is similar in size to the Rocky Mountain Region yet had much higher indirect costs. We selected the Washington Office because its indirect costs fluctuated widely and increased significantly in some funds. Table II.1 provides each region’s location and the geographic area it covers. 
To obtain information on why indirect costs increased and what had been done to control them, we interviewed knowledgeable officials at each location visited. In addition, we reviewed pertinent files, financial records, studies, reports, and manuals and asked follow-up questions as dictated by the document reviews. After gathering information on what had caused costs to rise and fluctuate, we reviewed various financial standards, laws, legislation, studies, and manuals to determine how the costs might be controlled in the future. We also interviewed Forest Service officials to obtain their suggestions. Because of an ongoing lawsuit involving indirect expenditures charged to the Cooperative Work—Knutson-Vandenberg Fund, you agreed that we should not include this fund in our analysis of why indirect expenditures increased. We conducted our review from May through August 1998 in accordance with generally accepted government auditing standards. Doreen S. Feldman, Alysa Stiefel
Pursuant to a congressional request, GAO provided information on overhead costs for the Forest Service's Brush Disposal Fund, Salvage Sale Fund, Reforestation Trust Fund, Cooperative Work--Other Fund, and Cooperative Work--Knutson-Vandenberg Fund, focusing on: (1) the reasons why indirect costs rose; (2) actions taken by the Forest Service and others to control these costs; and (3) other actions that may help the Forest Service control these costs in the future. GAO noted that: (1) inconsistencies in the Forest Service's accounting system make it difficult to ascertain specifically why indirect costs rose for these five funds during fiscal years 1993-97; (2) according to the Forest Service, indirect costs rose for four main reasons: the implementation of a congressionally established program to increase the amount of salvage timber offered for sale, additional costs associated with downsizing, the allocation of costs incurred in previous years but not charged against the funds at the time, and computer modernization; (3) however, during this same time period, the Forest Service was changing its policies about how to account for indirect costs, and individual regions and forests were implementing these policies in markedly different ways; (4) as a result, the accounting system produced information that was not consistent from year to year or location to location; (5) neither GAO nor the Forest Service is able to say how much indirect costs increased as a result of the factors the Forest Service cites and how much they changed because of these accounting inconsistencies; (6) to control costs, the Forest Service took a number of actions, most of which were aimed at reducing costs generally and not targeted specifically at indirect costs; (7) in particular, the agency reduced its permanent staff by 14 percent, and individual regions used a variety of other measures, including closing some district offices, consolidating others, and centralizing certain administrative 
functions, such as contracting and procurement; (8) for their part, congressional appropriation committees reduced the budget line item for some indirect costs; (9) one way the Forest Service responded to the reductions was to reclassify some indirect costs to other accounts; (10) an essential step for controlling indirect costs is establishing clear definitions for them and applying the definitions consistently over time and across locations; (11) if implemented properly, a new accounting standard released by the Financial Accounting Standards Advisory Board, which recommends accounting principles for the federal government, will go a long way towards providing consistent and reliable data on the Forest Service's indirect costs; and (12) once the problems with the Forest Service's accounting system are solved and the agency's indirect costs are clearly known, there is the opportunity for informed decisions to be made on how to control them.
In fiscal year 2012, BOP had a budget of about $6.6 billion for salaries and expenses, and as of December 2011, BOP had a staff of about 38,000, which includes administrative, program, and support staff responsible for all of BOP’s activities nationwide. BOP houses inmates across six geographic regions in 117 federal institutions, 15 privately managed prisons, 185 RRCs (also known as halfway houses), and home detention. At the close of fiscal year 2011, about 94 percent of BOP’s inmate population was incarcerated in either federal institutions or privately managed prisons, operating at four different security level designations: minimum, low, medium, and high. The designations depend on the level of security and staff supervision the institution is able to provide, such as the presence of security towers; perimeter barriers; the type of inmate housing, including dormitory, cubicle, or cell-type housing; and the staff-to-inmate ratio. Some BOP institutions include multiple prison facilities with different security classifications under common management, in part to increase cost efficiencies. According to BOP, privately managed low-security facilities primarily house criminal aliens. Figure 1 shows the distribution of BOP institutions, privately managed prisons, and RRCs across BOP’s six geographic regions. To house inmates in community corrections locations, BOP contracts with private organizations to manage 185 RRCs around the country. These RRCs allow BOP to house inmates outside of a prison environment to either serve out their full sentence or their remaining sentence prior to release in the community. Inmates are authorized to leave for approved activities, such as seeking employment, working, counseling, visiting, or recreation, but are monitored 24 hours a day through sign-out procedures, regular head counts, staff visits to the approved locations, and random phone contacts. 
Inmates in RRCs are also required to work, or be actively seeking work, and to pay a percentage of their salaries as a subsistence fee to cover some of their expenses at the RRC. Some federal inmates are placed on home detention at the end of their prison term, either directly from an institution or following some time in an RRC. Home detention describes all circumstances under which an inmate is serving a portion of his or her sentence while residing in his or her home. Home detention inmates are held to strict schedules and curfews and are monitored by a nearby RRC or the U.S. Probation Office through random staff visits, phone contacts, and occasionally through the use of electronic monitoring. The remaining share of BOP’s inmate population, about 6 percent, was housed in RRCs or home detention. BOP has a number of discretionary authorities it can use to impact the period during which an inmate is incarcerated or remains in BOP custody. According to BOP officials, many of the programs that arise from these authorities are primarily intended to rehabilitate inmates and prepare them for reentry into society, as well as encourage good behavior while in BOP custody. The authorities can be classified into two main categories: (1) authorities that reduce the length of the inmate’s sentence, and (2) authorities that allow BOP to transfer an inmate out of prison to serve the remainder of his or her sentence in an RRC or home detention. Table 1 provides the statutory provisions allowing for BOP discretion to reduce a federal prisoner’s period of incarceration. BOP is required, subject to the availability of appropriations, to provide residential substance abuse treatment and make arrangements for appropriate aftercare for all eligible prisoners. 
Generally, the process to determine inmate eligibility for RDAP participation begins when inmates express interest in the program. In June 1995, BOP began offering nonviolent participants a sentence reduction incentive of up to 12 months for successful completion of the program. The amount of sentence reduction awarded upon completion is based on the length of an inmate’s sentence. Earned GCT credit does not vest until an inmate’s release date, meaning that all credit is vulnerable to forfeiture for disciplinary cause. Mitigating and aggravating circumstances are generally considered for departures. For example, DHOs cited first-time offenses and being taken advantage of as common mitigating circumstances. One DHO we spoke with reduced a newer inmate’s disallowance because the inmate was manipulated by a fellow inmate to make a prohibited phone call for him. DHOs cited repeated offenses, egregious acts, and violence against correctional officers as common aggravating circumstances. One DHO we spoke with recalled forfeiting over 300 days of an inmate’s earned GCT credit after the inmate was involved in a prison riot. The six DHOs we spoke with told us that the disallowance guidelines were clear, and that the discretion to depart from the guidelines offered them sufficient flexibility and latitude to successfully impact inmate behavior. Although most prisoners receive all of their potential GCT credit, BOP’s method of awarding GCT credit at the end of each year an inmate serves results in a maximum of 47 days of GCT credit earned per year of sentence imposed rather than the 54 days that inmates who have contested BOP’s method in court maintain was the original intent of the statute. The sentencing guidelines were developed with the understanding that inmates would receive GCT credit so that their actual time served would be 85 percent of the length of the sentence imposed by the judge, assuming good behavior. 
BOP’s method of awarding GCT, however, results in inmates serving more than 85 percent of their imposed sentences even after earning the maximum GCT credit, as can be seen in table 4 for a hypothetical sentence of 10 years imposed by the sentencing judge. As authorized in statute, 18 U.S.C. § 3624(b), BOP awards “up to 54 days at the end of each year of the prisoner’s term of imprisonment,” or 54 days per year of sentence served. As applied by BOP, this results in 47 days earned per year of sentence imposed, because inmates do not earn GCT credit for years they do not ultimately serve due to being released early.

Table 4: GCT Credit Earned on a Hypothetical 10-Year (3,650-Day) Sentence

Years 1 through 8 (365 days served per year):          54 GCT days per year (432 total)
Inmate released during 9th year, after completing
  8 years and 260 days:                                38 (GCT for the remaining 298 days is
                                                       prorated to conform to the ratio of
                                                       54 days per 365 served)
Total GCT days granted:                                470
Total GCT days granted per year of sentence imposed:   47
Total time served (days):                              3,180

The U.S. Supreme Court upheld BOP’s methodology against a challenge brought by inmate petitioners. However, BOP officials told us that the agency was supportive of amending the statute and had submitted a legislative proposal to Congress such that 54 days would be provided for each year of the term of imprisonment originally imposed by the judge, which would result in inmates serving 85 percent of their sentence. BOP provided us estimates in December 2011 showing that if the GCT credit allowance was increased by 7 days, as proposed, BOP could save over $40 million in the first fiscal year after the policy change from the early release of about 3,900 inmates. As of December 2011, the legislative proposal had not been introduced on the floors of the House or Senate.

Modification of an imposed sentence: BOP has authority to motion the court to reduce an inmate’s sentence in certain statutorily authorized circumstances, but that authority is implemented infrequently, if at all.
The court, upon motion of the Director of BOP, may reduce the term of imprisonment after considering certain statutory factors to the extent that they are applicable, if it finds that “extraordinary and compelling reasons warrant such a reduction” (also known as “compassionate release”) and the reduction is consistent with applicable policy statements issued by the USSC. According to BOP officials, the Director has motioned sentencing judges for inmates’ early releases in a limited number of cases. For instance, BOP has historically interpreted “extraordinary and compelling circumstances” as limited to cases where the inmate has a terminal illness with a life expectancy of 1 year or less or has a profoundly debilitating medical condition. The USSC issued guidance that listed a number of additional circumstances, such as the death or incapacitation of the inmate’s only family member capable of caring for the inmate’s minor child or children. As of December 2011, BOP had not revised its written policy to explicitly include all of the circumstances noted in the USSC guidance although, according to BOP officials, the agency is reviewing two cases that would fall within these circumstances. Where “extraordinary and compelling circumstances” may exist, inmates generally must submit a request explaining their circumstances and their plans for housing, financial support, and medical care if granted an early release. The request is to proceed through multiple layers of review, including the inmate’s warden, the Regional Director, BOP’s Office of General Counsel, and the BOP Director, who may ultimately motion the court. BOP officials recorded that from calendar years 2009 through 2011, 55 requests for early release were approved by the BOP Director and brought as motions to a sentencing judge out of 89 requests approved at lower levels and received at BOP headquarters. 
The court, upon motion of the Director of BOP, may reduce a prison term after considering certain statutory factors to the extent that they are applicable, if (1) the inmate is over 70 years old; (2) the inmate has served at least 30 years in prison pursuant to certain sentences imposed by statute; (3) the BOP Director has determined that the inmate is not a danger to the safety of any other person or the community as provided by statute; and (4) the reduction is consistent with applicable policy statements issued by the USSC. However, according to BOP officials, since the authority was enacted, BOP has had no inmates in its custody meeting these criteria and is considering how to implement this authority in the future should an inmate qualify. Generally, where a term of imprisonment is based upon a sentencing range that has subsequently been lowered by the USSC, upon motion of the BOP Director, the court may reduce the term of imprisonment. According to BOP officials, the BOP Director does not directly motion the sentencing judge because this is generally accomplished by the U.S. Attorney’s Office as the litigating body of DOJ. In addition, BOP officials also stated that it is not necessary for the BOP Director to motion the judge because inmates and their counsel generally initiate the process. BOP supports the process in other ways, including educating inmates about the relevant guidelines changes, notifying the U.S. Attorneys’ Offices if inmates who appear to be eligible are missed, and processing inmate sentence reductions if granted by a sentencing judge. BOP has estimated that the retroactive change to the sentencing guidelines for crack cocaine offenses that went into effect on November 1, 2011, will result in 2,391 additional inmates being released from BOP custody from fiscal years 2012 through 2014, yielding an estimated cost savings of $160 million.
Early release prior to a weekend or holiday: BOP releases inmates on the last weekday preceding a release date that falls on a Saturday, Sunday, or legal holiday. Shock Incarceration Program: Although BOP retains the authority to operate the shock incarceration program, also known as boot camps, it discontinued the program in 2005 due to its cost and research showing that it was not effective in reducing inmate recidivism. Nonviolent, volunteer, minimum-security inmates serving sentences of more than 12 months but not more than 30 months were eligible for the program, which combined features of military basic training with traditional BOP correctional values to promote personal development, self-control, and discipline. Throughout the typical 6-month program, inmate participants were required to adhere to a highly regimented schedule of strict discipline, physical training, hard labor, drill, job training, educational programs, and substance abuse counseling. BOP provided inmates who successfully completed the program and were serving sentences of 12 to 30 months with a sentence reduction of up to 6 months. All inmates who successfully completed the program were eligible to serve the remainder of their sentences in community corrections locations, such as RRCs or home detention. A study of one of BOP’s shock incarceration programs, published in September 1996, found that the program had no effect on participants’ recidivism rates. According to BOP officials, those and other evaluation findings and the cost of the program led BOP to discontinue its use in 2005. Elderly Offender Pilot Program: Authorization for BOP’s elderly offender home detention pilot program expired in September 2010.
Generally, the 2-year pilot program enabled BOP to transfer to home detention inmates who were at least 65 years old; had served at least 10 years and 75 percent of their non-life sentences; had no history of violence, sexual offenses, or escape or attempted escape from a BOP institution; whom BOP determined would pose no substantial risk of engaging in criminal conduct or endangering any person or the public if released; and with respect to whom BOP had determined that release to home detention would result in a substantial net reduction of costs to the federal government (42 U.S.C. § 17541(g)(5)(A)). During the program, 71 inmates were transferred to home detention. The statute requires the Attorney General to monitor and evaluate each eligible elderly offender placed on home detention and report to Congress concerning the experience with the program. According to BOP officials, this report has not been completed. We have ongoing work looking at the results and costs of the pilot in more detail, which we will report on later this year.

Concurrent service of state and federal sentences: Generally, state and federal sentences run consecutively unless the federal sentencing judge orders that they run concurrently. This includes cases when a federal judge has not stated whether a state and federal sentence should run concurrently or consecutively. However, BOP may review, or the inmates may petition BOP to review, their cases to determine a federal sentencing judge’s intent. BOP reviews the inmate’s sentencing documents and custody history, and may also contact the federal sentencing judge to determine whether the judge intended that the state and federal sentences should be served consecutively or concurrently. For example, of the 538 cases BOP reviewed in fiscal year 2011, 99 requests to serve sentences concurrently were granted, for a total of about 118,700 days of sentence credit; 386 were not granted; and 53 were still under review as of the end of fiscal year 2011.
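As a rough consistency check on these fiscal year 2011 review figures, the case counts and the implied average credit per granted request can be derived directly (the average is our derivation, not a BOP-reported figure):

```python
# Consistency check of the fiscal year 2011 concurrent-sentence review
# figures cited above; the per-request average is derived, not BOP-reported.

GRANTED, NOT_GRANTED, UNDER_REVIEW = 99, 386, 53
CREDIT_DAYS = 118_700  # total sentence credit from granted requests

total_reviewed = GRANTED + NOT_GRANTED + UNDER_REVIEW  # 538 cases
avg_credit_days = CREDIT_DAYS / GRANTED                # about 1,199 days,
                                                       # roughly 3.3 years
                                                       # per granted request
```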
Credit for criminal custody: BOP has the authority to grant credit for time served in criminal custody (such as time spent awaiting trial), and according to BOP policy, it considers detention by Immigration and Customs Enforcement (ICE) for the purposes of deportation to be administrative custody until criminal charges are brought against a detainee. According to BOP officials, BOP reviews inmate records for any criminal custody time that could be credited towards an inmate’s federal sentence. BOP reviewers may contact ICE for clarification of an inmate’s custody record, but, according to BOP officials, the various ICE districts keep records differently and a clear determination of when a federal charge was filed and an inmate’s criminal custody began may be difficult to achieve. BOP officials cited inmate ineligibility for placement in community corrections as the primary reason that not all inmates are released through RRCs and one of the chief reasons that some inmates are precluded from participating in RDAP. Specifically, BOP’s RRC program statement prohibits certain inmates from placement in an RRC. For example, inmates with detainers, with sentences of 6 months or less, who refuse to satisfy BOP’s Financial Responsibility Program, or who are in civil commitment status are all ineligible for RRC placement. According to BOP, inmates with detainers are deemed inappropriate for placement in community corrections due to the increased risk of escape and, for those with immigration detainers, the likelihood of deportation. Moreover, all inmates who have financial obligations, whether court-ordered restitution, court fees, or tax liabilities, must comply with the Financial Responsibility Program to participate in programming, including community corrections. This ineligibility for RRC placement also disqualifies an inmate from placement in home detention. Figure 4 shows the number of inmates ineligible for RRC placement from April 2008 to March 2011.
BOP officials stated that certain offenses committed by inmates may also make it difficult for BOP to place them in RRCs. For example, according to BOP officials, some RRCs are required to enter into agreements with communities regarding the type of inmates they will house and some communities have enacted local laws that prohibit the placement of certain inmates such as sex offenders and arsonists in a communal setting. Other reasons inmates may not be placed in RRCs include the inmate’s refusal to be placed or the inmate’s medical or mental health needs that could not be accommodated at the RRC. According to BOP officials, inmates may refuse RRC placement for a variety of reasons but the reasons for refusal cited most often by officials during our site visits to BOP facilities included: some RRC accommodations are perceived by some inmates to be subpar compared to prisons; some minimum-security and low-security inmates do not want to reside in RRCs with higher security inmates; and some inmates do not want to pay the 25 percent subsistence fee. To participate in RDAP, inmates must be able to complete both the institution and the RRC components of the program. As a result, inmates who are prohibited from transferring to RRCs are excluded from RDAP. For instance, BOP estimates that 2,500 criminal aliens would participate in RDAP each year, but are ineligible due to immigration detainers. Prior to a 1996 BOP policy change, inmates with detainers could complete the program by participating in transitional treatment within a BOP institution. However, according to BOP officials, transitional treatment within an institution is ineffective because the inmate remains sheltered from the partial freedoms and outside pressures experienced during an RRC placement. 
Realizing that potential cost savings could result from early releases of criminal aliens, among other reasons, BOP is considering changing its policy and allowing eligible nonviolent criminal aliens to complete the RDAP program without the RRC component and receive sentence reductions of up to 1 year for successful completion. According to BOP, this policy shift would require a rule change and the development of procedures to ensure that no U.S. citizen was displaced from participating in RDAP. BOP officials stated that decisions on this issue would not be made until expanded program capacity becomes available, which is currently uncertain. A lack of RRC beds limits BOP’s ability to further utilize RRC placements. Based on the most recently available data, in fiscal year 2010, about 29,000 inmates spent time in an RRC prior to release from BOP custody. Although BOP officials at institutions we visited stated that they assessed inmates on a case-by-case basis to determine the appropriate RRC placement length, the officials stated that referrals can be reduced due to RRC capacity constraints. According to BOP officials, in fiscal year 2010, about 2.7 percent of eligible inmates were denied placement due to a lack of bed space. BOP faces challenges in increasing its RRC bed space capacity, which limits its ability to increase the length of RRC placements. According to BOP community corrections officials, BOP has difficulty acquiring new RRC contracts and increasing its RRC capacity because of local zoning restrictions and the unwillingness of many communities to accept nearby RRCs. Although the Second Chance Act increased BOP’s flexibility to place inmates in RRCs for up to 12 months, as reported by BOP officials, challenges facing the expansion of its RRC capacity limit the impact of this increased flexibility. As of November 2011, BOP reported that available contracted RRC bed space was 8,859 estimated beds. 
For each available RRC bed, BOP can transfer one inmate to the RRC for a maximum of 12 months, or BOP could send multiple inmates for shorter placements (e.g., three inmates for 4 months each). As such, for this increased flexibility to have an impact on the average length of RRC placements, RRC capacity would need to increase. To provide all eligible inmates with the maximum allowable 12 months in an RRC, BOP would require about 29,000 available beds annually. Some inmates are more affected by capacity constraints than others, such as those with criminal records of sex offenses or those being released into urban areas with few RRCs. According to BOP, only a limited number of RRCs are able to accept sex offenders, and thus BOP, at the onset, has a limited number of RRC beds for sex offender placement. In addition, inmates releasing to urban areas may have their placement lengths reduced due to capacity constraints. For example, BOP staff we interviewed during our site visits identified shortages of RRC beds in Southern California, North Carolina, and the Washington, D.C. metropolitan area affecting the length of RRC placements. When referring inmates for RRC placements, BOP considers the inmate’s original sentencing location to facilitate transition and successful reentry. As such, BOP’s utilization of RRC placements is limited in geographical areas that do not have enough RRC beds to accommodate returning inmates. According to BOP officials, systemwide program capacity similarly constrains BOP’s utilization of RDAP sentence reductions—specifically, BOP’s ability to admit RDAP participants early enough to earn their maximum allowable sentence reductions. BOP officials stated that the RDAP sentence reduction incentive caused a backlog for entry into the program. Long wait lists resulted in inmates entering RDAP with insufficient time to complete the program in time to receive the maximum sentence reduction. 
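Returning to the RRC bed figures, the report's numbers can be cross-checked with a rough bed-month sketch; it assumes every contracted bed is occupied year-round, and the inputs are the figures cited above (8,859 beds as of November 2011, roughly 29,000 inmates released through RRCs in fiscal year 2010):

```python
# Rough bed-month cross-check of the RRC capacity figures cited above;
# assumes every contracted bed is occupied year-round.

BEDS = 8_859       # contracted RRC beds reported as of November 2011
RELEASES = 29_000  # approx. inmates released through RRCs in FY 2010

bed_months_per_year = BEDS * 12                        # 106,308 bed-months
avg_placement_months = bed_months_per_year / RELEASES  # about 3.7 months,
                                                       # consistent with the
                                                       # ~4-month average
                                                       # placement reported
# Giving all ~29,000 releasing inmates the full 12 months would require
# roughly one bed per inmate per year, i.e., about 29,000 beds.
```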
In fiscal years 2007 and 2008, BOP reported to Congress that long wait lists (over 7,600 systemwide) prevented some eligible inmates from participating in the program at all—20 percent and 7 percent unable to participate, respectively. RDAP capacity, as measured by the number of program slots open to inmates at one time throughout BOP (6,685 in fiscal year 2011), has grown at a relatively steady rate since the program began in fiscal year 1989, and increased by 400 slots from fiscal years 2009 to 2011. According to BOP officials, as program capacity has increased in recent years, wait lists have been reduced, even with continued growth in the inmate population. This has enabled inmates to enter the program sooner and resulted in an increase in the percentage of eligible inmates who complete RDAP and receive the maximum sentence reductions from 14 percent in fiscal year 2009 to 25 percent in fiscal year 2011. However, according to BOP officials, RDAP is still catching up to the increased demand and continues to have wait lists. According to BOP officials, wait lists for entry into RDAP are currently prioritized in accordance with statute based on inmates’ proximity to their projected release dates which include GCT credit expected to be earned, but do not include the potential RDAP sentence reduction that eligible participants may earn. Two subject matter experts who advocate for inmate interests whom we spoke with stated that BOP could consider including the potential RDAP sentence reduction in inmates’ projected release date calculations. This could ensure that eligible inmates would enter the program sooner and in enough time to receive the maximum reduction. 
For example, if two inmates have the same projected release date, after accounting for GCT credit, but one inmate would be eligible for a 1-year sentence reduction on completion of RDAP while the other would not be eligible for a sentence reduction upon completion of RDAP, the inmate eligible for the sentence reduction would have a higher position on the wait list for entry into RDAP than the inmate ineligible for a sentence reduction. BOP has stated that if it were to prioritize RDAP entry in this way, some inmates who are not eligible for the sentence reduction would not be able to enter the program at all, as they would continually be displaced on the wait lists by inmates who are eligible for the sentence reduction. BOP is required by statute, subject to the availability of appropriations, to provide residential substance abuse treatment for all eligible inmates, regardless of their eligibility for the sentence reduction incentive, and thus must ensure that all eligible inmates are able to participate in the program prior to their release from custody. However, BOP was unable to provide documentation that including RDAP sentence reduction in computation of the projected release date would continually displace inmates eligible for RDAP but ineligible for the associated sentence reduction. BOP’s fiscal year 2012 budget request included an increase of $15 million for RDAP, which was not funded. According to BOP, the funding would have reduced RDAP wait lists and enabled eligible inmates to enter the program early enough to earn their maximum allowable sentence reductions. 
BOP stated that the $15 million increase would have covered 125 new drug treatment staff positions and would have allowed an additional 4,000 inmates to complete RDAP annually. BOP officials also told us that if BOP changes its policy to allow criminal aliens to participate in RDAP, the funding increase for RDAP proposed in the 2012 budget request would have been sufficient to allow this additional inmate population to participate in RDAP without impacting the ability of U.S. citizens to participate and receive the maximum available sentence reductions. Timely program admission would result in future cost savings through additional sentence reductions. For example, if every eligible RDAP participant who completed the program in fiscal year 2011 had received their maximum sentence reduction, BOP would have been responsible for 15,729 fewer months of inmate incarceration, yielding an estimated cost savings of about $13.2 million. BOP estimated that allowing criminal aliens to participate in RDAP and earn sentence reductions could offer about $25 million of additional cost savings each year. Federal inmate populations have been increasing, and BOP is operating at more than a third over capacity. In addition, the absence of parole in the federal system and other federal statutes limit BOP’s authority to modify an inmate’s period of incarceration. Inmates who earn their good conduct time, as most do, end up serving about 87 percent of their sentences. BOP’s housing of inmates in community-based facilities or home detention is a key flexibility it uses to affect a prisoner’s period of incarceration. However, BOP does not require its RRC contractors to separate the price of home detention services from the price of RRC beds. As a result, BOP lacks information on the price of home detention that could assist it in weighing the costs and benefits of alternative options for supervising inmates in home detention.
While BOP is working to develop a process to require contractors to submit separate prices for the price of RRC beds and home detention services, without establishing a plan, including a time frame for development, BOP does not have a road map for how it will achieve this goal. To determine the cost of home detention and potentially achieve cost savings, we recommend that the Director of BOP establish a plan, including time frames and milestones for completion, for requiring contractors to submit separate prices of RRC beds and home detention services. We provided a draft of this report to DOJ for its review and comment. BOP provided written comments on the draft report, which are reproduced in full in appendix I. BOP concurred with the findings in the report. Prior to receiving BOP’s comment letter, on January 20, 2012, BOP’s audit liaison requested that the wording of our recommendation be changed from “requiring contractors to identify RRC costs and home detention costs separately” to “requiring contractors to submit separate prices of RRC beds and home detention services.” He stated that BOP was requesting this change because contractors are not required to disclose financial information, such as the actual costs to them of providing services to inmates, to BOP. Furthermore, the liaison stated that obtaining separate prices of RRC and home detention services will enable BOP to determine the price reasonableness of these services. We believe that BOP’s proposed language addressed the intent of our recommendation, and thus we modified the recommendation language. BOP concurred with our recommendation, as revised, and also provided technical comments which we incorporated into the report, as appropriate. We are sending copies of this report to the Attorney General, selected congressional committees, and other interested parties. In addition, this report will also be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any further questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. David C. Maurer, (202) 512-9627 or [email protected]. In addition to the contact named above, Chris Currie, Assistant Director; Tom Jessor; Bintou Njie; Michael Kniss; Billy Commons, III; Pedro Almoguera; and Lara Miklozek made significant contributions to this report.
The Department of Justice’s Federal Bureau of Prisons (BOP) is responsible for the custody and care of federal offenders. BOP’s population has increased from about 145,000 in 2000 to about 217,000 in 2011, and BOP is operating at 38 percent over capacity. There is no longer parole for federal offenders, and BOP has limited authority to affect the length of an inmate’s prison sentence. BOP has some statutory authorities and programs to reduce the amount of time an inmate remains in prison, which, when balanced with BOP’s mission to protect public safety and prepare inmates for reentry, can help reduce crowding and the costs of incarceration. GAO was asked to address: (1) the extent to which BOP utilizes its authorities to reduce a federal prisoner’s period of incarceration; and (2) what factors, if any, impact BOP’s use of these authorities. GAO analyzed relevant laws and BOP policies; obtained nationwide data on inmate participation in relevant programs and sentence reductions from fiscal years 2009 through 2011; conducted site visits to nine BOP institutions selected to cover a range of prison characteristics and, at each, interviewed officials responsible for relevant programs; and visited four community-based facilities serving the institutions visited. Though not generalizable, the information obtained from these visits provided insights. BOP’s use of authorities to reduce a federal prisoner’s period of incarceration varies. BOP primarily utilizes three authorities: the Residential Drug Abuse Treatment Program (RDAP), community corrections, and good conduct time. (1) Eligible inmates can participate in RDAP before release from prison, but those eligible for a sentence reduction are generally unable to complete RDAP in time to earn the maximum reduction (generally 12 months).
During fiscal years 2009 through 2011, of the 15,302 inmates who completed RDAP and were eligible for a sentence reduction, 2,846 (19 percent) received the maximum reduction, and the average reduction was 8.0 months. BOP officials said that participants generally do not receive the maximum reduction because they have less than 12 months to serve when they complete RDAP. (2) To facilitate inmates’ reintegration into society, BOP may transfer eligible inmates to community corrections locations for up to the final 12 months of their sentences. Inmates may spend this time in contract residential re-entry centers (RRCs), also known as halfway houses, and in detention in their homes for up to 6 months. Based on the most recently available data, almost 29,000 inmates completed their sentences through community corrections in fiscal year 2010, after an average placement of about 4 months: 17,672 in RRCs, 11,094 in RRCs then home detention, and 145 in home detention only. RRCs monitor inmates in home detention and charge BOP 50 percent of the daily RRC cost to do so. However, BOP does not require RRC contractors to separate the price of home detention services from the price of RRC beds and thus does not know the actual costs of home detention. BOP officials stated that they are developing a process to review and amend existing RRC contracts and require new contractors to submit proposals separating out RRC and home detention prices, but did not document the specifics of the review process or establish time frames or milestones for the review. Thus, BOP does not have a road map for how it will achieve this goal. (3) Most eligible inmates receive all of their potential good conduct time credit for exemplary compliance with institutional disciplinary regulations: 54 days taken off their sentence per year served if an inmate has earned or is earning a high school diploma; 42 days if not.
As of the end of fiscal years 2009, 2010, and 2011, about 87 percent of inmates had earned all of their available credit. BOP also has other authorities, such as releasing prisoners early for very specialized reasons, but has used these less frequently for various reasons. Inmate eligibility and lack of capacity impact BOP’s use of certain flexibilities and programs that can reduce an inmate’s time in prison. BOP officials cited inmate ineligibility for RRC placement (e.g., inmates who are likely to escape or be arrested, or with sentences of 6 months or less, among other things) as the primary reason that some inmates are not released through community corrections and one of the main reasons that some inmates are not able to participate in RDAP. BOP’s lack of additional RRC space has prevented it from increasing the length of its RRC placements. According to BOP, lack of program capacity also prevents eligible inmates from entering and completing RDAP early enough to earn their maximum allowable sentence reductions, which prevents BOP from maximizing the cost savings provided by the authority. GAO recommends that BOP establish a plan, including time frames and milestones, for requiring contractors to submit prices of RRC beds and home detention services. BOP concurred with this recommendation.
On average, about 450 people have been injured and 14 people have been killed in train accidents each year over the past decade, from 1996 through 2005, exclusive of highway-railroad grade crossing and trespassing accidents. In recent years, a number of serious accidents raised concerns about the level of safety in the railroad industry. For example, as you are aware, in 2005, a train collision in Graniteville, South Carolina, resulted in the evacuation of 5,400 people, 292 injuries, and 9 deaths. FRA develops and enforces regulations for the railroad industry that include numerous requirements related to safety, including requirements governing track, signal and train control systems, grade crossing warning device systems, mechanical equipment—such as locomotives and tank cars—and railroad operating practices. FRA also enforces hazardous materials regulations issued by PHMSA as they relate to the safe transportation of such materials by rail. FRA’s inspectors generally specialize in one of five areas, called inspection disciplines: (1) operating practices, (2) track, (3) hazardous materials, (4) signal and train control, and (5) motive power and equipment. FRA’s policy is for inspectors to encourage railroads to comply voluntarily. When railroads do not comply voluntarily or identified problems are serious, FRA may cite violations and take enforcement actions, most frequently civil penalties, to promote compliance with its regulations. FRA is authorized to negotiate civil penalties with railroads and exercises this authority. FRA conducts additional oversight of Class I railroads through the Railroad System Oversight program. Under this program, the agency assigns an FRA manager for each Class I railroad to cooperate with it on identifying and resolving safety issues. FRA is a small agency, especially in relation to the industry it regulates. 
As of July 2006, FRA had about 660 safety staff, including about 400 inspectors in the field (in its regional, district, and local offices). In addition, 30 state oversight agencies, with about 160 inspectors, participate in a partnership program with FRA to conduct safety oversight activities at railroads’ operating sites. In contrast, the railroad industry consists of about 700 railroads with about 235,000 employees, 219,000 miles of track in operation, 158,000 signals and switches, and over 1.6 million locomotives and cars. In planning its safety oversight, FRA focuses its efforts on the highest priority risks related to train accidents through a number of initiatives. FRA’s May 2005 National Rail Safety Action Plan provides a reasonable framework for the agency’s efforts to target its oversight at the highest priority risks. The plan outlines initiatives aimed at reducing the main types of train accidents, those caused by human factors and track defects. Since issuing the plan, the agency has pursued additional initiatives to target risks posed by these causes. However, these efforts are in varying stages of development or implementation and, while some individual initiatives may start showing results in the next year or two, their overall impact on safety will probably not be apparent for a number of years. FRA has also developed a new approach for planning its inspections, based on greater use of its accident and inspection data. While these initiatives are promising, it is too early to assess their impact. In 2005, 72 percent of all train accidents in the United States were attributable to either human factors or track defects. Human factor accidents result from unsafe acts of individuals, such as employee errors, and can occur for a number of reasons, such as employee fatigue or inadequate supervision or training. 
Recent FRA initiatives to reduce accidents caused by human factors include proposed regulations aimed at reducing the most common causes of these accidents, such as improper positioning of track switches; a 5-year pilot project to establish a confidential voluntary system for reporting and learning from close call incidents; a study to develop a fatigue model that could be used by railroads to improve train crew scheduling practices and prevent worker fatigue; and a proposed 5-year pilot project that would use risk management to help reduce human factor accidents, as well as other types of accidents, at selected railroad worksites. Track defects, which can cause derailments, include rails that are uneven or too wide apart or rails or joint bars that are cracked or broken. Key recent FRA initiatives to reduce accidents caused by track defects include the addition of two track inspection vehicles that can precisely measure track during inspections, as well as new regulations on inspections of rail joints in continuous welded rail track and plans to develop additional regulations to improve railroads’ management of this type of track. These initiatives are in varying stages of development or implementation and use a variety of approaches, some quite innovative, for addressing the causes of human factor and track accidents. While they have the potential to eventually reduce these types of accidents, it is too early to predict their outcomes. Except for the proposed regulations, the human factor initiatives depend for their success on voluntary actions by railroads and, in some cases, by labor. FRA has developed a new approach—the National Inspection Plan—for using available data to target its inspections at the greatest safety risks. The plan provides guidance to each regional office on how its inspectors within each of the five inspection disciplines should divide up their work by railroad and state. 
It is based on trend analyses of accident, inspection, and other data that predict locations where train accidents and incidents are likely to occur within each region and provide the optimal allocation of inspection resources to prevent accidents. Previously, FRA had a less structured, less consistent, and less data-driven approach for planning inspections. According to agency officials, each region prepared its own inspection plan, based on judgments about appropriate priorities and analysis of available data. However, the use of data was not consistent from region to region. Inspectors had greater discretion about where to inspect and based decisions about priorities on their knowledge of their inspection territories. FRA’s new approach for planning its inspection activity allows it to better target the greatest safety risks and make more effective use of its inspectors. However, it is not yet clear whether the new approach will lead to a better prioritization of inspection levels across regions and inspection disciplines or to improved safety. In carrying out its safety oversight, FRA identifies a range of safety problems on railroad systems mainly through routine inspections to determine whether operations, track, and equipment are in compliance with safety standards. FRA’s inspections do not attempt to determine how well railroads are managing safety risks throughout their systems. APTA, PHMSA, and Transport Canada have implemented approaches to oversee the management of safety risks by U.S. commuter railroads, U.S. pipelines, and Canadian railroads, respectively. These oversight approaches complement, rather than replace, traditional compliance inspections and therefore provide additional assurance of safety. FRA primarily monitors railroads’ compliance through routine inspections by individual inspectors at specific sites on railroads’ systems. Inspectors typically cover a range of standards within their discipline during these inspections. 
This inspection approach focuses on direct observations of specific components of the train, related equipment, and railroad property—including the track and signal systems—as well as operating practices to determine whether they meet FRA’s standards. (See fig. 2.) Inspectors also examine railroads’ inspection and maintenance records. The railroads have their own inspectors who are responsible for ensuring that railroad equipment, track, and operations meet federal rail safety standards. FRA also conducts more in-depth inspection efforts that generally focus on railroads’ compliance in a particular area, such as their inspections of employees’ adherence to operating rules. These efforts often involve a team conducting separate inspections at multiple sites, generally within one of FRA’s eight regions. FRA also periodically conducts in-depth inspections of some systemwide programs that railroads are required to implement, such as employee drug and alcohol testing programs. In 2005, federal and state inspectors conducted about 63,000 inspections. According to FRA, routine inspections constituted about 75 percent of the inspections of railroads, and in-depth inspections accounted for about 11 percent. The remainder of these inspections (14 percent) consisted of other types of activities, such as investigations of accidents and complaints. This approach to oversight enables FRA inspectors and managers to identify a wide range of safety problems. Inspectors identify specific compliance problems—conditions that do not meet FRA’s standards—at sites they visit by citing defects. Inspectors cite violations of safety standards for those defects that they believe warrant enforcement action. They consider a number of factors in making this decision, including the railroad’s history of compliance at that location and the seriousness of the noncompliance (such as whether it is likely to cause accidents, injuries, or releases of hazardous materials). 
Inspectors in some disciplines cite more defects and violations than others. (See fig. 3.) The motive power and equipment discipline cites almost half of all defects and over a third of all violations. FRA officials told us that the standards in this inspection discipline are the most prescriptive, making defects and violations easier to find. However, these types of defects cause a much smaller proportion of accidents than human factors and track defects. The most frequently cited violations include those for noncompliance with standards for locomotives and freight cars, track conditions, recordkeeping on the inspection and repair of equipment and track, and the condition of hazardous materials tank cars. FRA officials have noted that their approach of directly inspecting safety conditions and targeting locations that are most likely to have compliance problems provides a safety net and holds railroad management accountable. However, because the number of FRA and state inspectors is small relative to the size of railroad operations, FRA inspections can cover only a very small proportion of railroad operations (0.2 percent). Also, FRA targets inspections at locations on railroads’ systems where accidents have occurred, among other factors, rather than overseeing whether railroads systematically identify and address safety risks that could lead to accidents. Risk management can help to improve systemwide safety by systematically identifying and assessing risks associated with various safety hazards and prioritizing them so that resources may be allocated to address the highest risks first. It also can help in ensuring that the most appropriate alternatives to prevent or mitigate the effects of hazards are designed and implemented. 
A risk management framework that we have developed, based on industry best practices and other criteria, divides risk management into five major phases: (1) setting strategic goals and objectives, and determining constraints; (2) assessing risks; (3) evaluating alternatives for addressing these risks; (4) selecting the appropriate alternatives; and (5) implementing the alternatives and monitoring the progress made and results achieved. Other transportation oversight organizations have developed and implemented approaches for overseeing industries’ overall management of safety risks. In particular, during the last 10 years, APTA, PHMSA, and Transport Canada have developed and implemented such oversight approaches for U.S. commuter railroads, U.S. pipelines, and Canadian railroads, respectively. These approaches complement, rather than replace, traditional compliance inspections. APTA provides guidelines to commuter railroads on managing the safety of their systems—including safety risks—and audits their plans for and implementation of this management approach. PHMSA requires that pipeline operators develop “integrity management” programs to manage risk in areas—such as those that are densely populated—where leaks or ruptures could have the greatest impact on public safety and inspects operators’ compliance with these requirements. In Canada, the department responsible for overseeing railroad safety, Transport Canada, requires that railroads establish safety management systems that include risk management and assesses these systems. APTA, PHMSA, and Transport Canada have emphasized that risk management provides a higher standard of performance than traditional safety regulation based on compliance alone. We have reviewed PHMSA’s gas transmission pipeline integrity management oversight approach and have recently concluded that it enhances public safety. 
Operators told us that the primary benefit of the program is the comprehensive knowledge they acquire about the condition of their pipelines. APTA and Transport Canada officials have told us that their oversight approaches have not been formally evaluated to determine their effectiveness. FRA has taken some steps in a limited number of areas to oversee and encourage risk management in the railroad industry. For example, the agency has several regulations in place that require railroads to use a risk-based approach for managing safety in some specific areas, such as the operation of high-speed passenger trains. In addition, FRA is considering establishing a pilot project to examine how a risk management approach could be used voluntarily in the railroad industry to reduce human factor and other types of accidents. Oversight of railroads’ overall approach for managing safety risks on their systems, in addition to FRA’s existing discipline-specific, compliance-based oversight, has the potential to provide additional assurance of safety. However, developing and implementing such a new oversight approach would be a major undertaking for the agency, and FRA’s current initiatives to reduce train accidents need time to mature to demonstrate their effects. As a result, we did not recommend in our recent report that FRA adopt an approach for overseeing railroads’ management of safety risks. FRA has a broad range of goals and measures that it uses to provide direction to and track the performance of its safety oversight activities. However, its ability to make informed decisions about its inspection and enforcement programs is limited because it lacks measures of the intermediate outcomes, or direct results, of these programs that would show how they are contributing toward the end outcomes, or ultimate safety improvements, that the agency seeks to achieve. Furthermore, FRA has not evaluated the effectiveness of its enforcement approach. 
Both performance measures and evaluations can provide valuable information on program results that helps hold agencies accountable for their programs’ performance. To its credit, FRA has adopted a range of useful safety performance goals and related measures. These goals help the agency target its oversight efforts to achieve the department’s goals of reducing (1) the rate of rail-related accidents and incidents and (2) the number of serious hazardous materials releases. For example, FRA has recently established new agencywide safety goals that are aligned with its five inspection disciplines and its grade-crossing efforts. These include goals to reduce the rates of various types of train accidents—including those caused by human factors, track defects, and equipment failure—as well as hazardous materials releases and grade-crossing incidents. These departmental and agency goals represent the key end outcomes, or ultimate results, FRA seeks to achieve through its oversight efforts. FRA has also established related measures that help the agency determine and demonstrate its progress in meeting the desired goals. It has established similar goals and measures for each of its eight regional offices as well. FRA also uses various other measures to manage its oversight efforts, such as numbers of inspections performed and enforcement actions taken. While FRA has developed a range of goals and measures related to its oversight of railroad safety, it lacks measures of the desired intermediate outcomes, or direct results, of its inspection and enforcement efforts—the correction of identified safety problems and improvements in compliance. (See fig. 4.) According to FRA officials, inspectors review reports on corrective actions provided by railroads and always follow up on serious identified problems to ensure that they are corrected. However, the agency does not measure the extent to which the identified safety problems have been corrected. 
FRA also lacks overall measures of railroads’ compliance. Officials have emphasized that the agency relies on inspectors’ day-to-day oversight of and interaction with railroads to track compliance. Without measures of intermediate outcomes, the extent to which FRA’s inspection and enforcement programs are achieving direct results and contributing to desired end outcomes is not clear. We recognize that developing such measures is challenging for regulatory agencies. Nevertheless, some other regulatory agencies in the Department of Transportation have done so. For example, the Federal Motor Carrier Safety Administration measures the percentage of truck companies that improve their performance in a follow-up inspection. By examining a broader range of information than is feasible to monitor on an ongoing basis through performance measures, evaluation studies can explore the benefits of a program as well as ways to improve program performance. They can also be used to develop or improve agencies’ measures of program performance and help ensure agencies’ accountability for program results. Although FRA has modified several aspects of its safety oversight in response to external and internal evaluations, it has not evaluated the extent to which its enforcement is achieving desired results. Under FRA’s current “focused enforcement” policy, developed in the mid-1990s, inspectors cite a small percentage of identified defects (about 3 percent in 2005) as violations that they recommend for enforcement action, generally civil penalties. While this policy relies to a great extent on cooperation with railroads to achieve compliance and is intended to focus FRA’s enforcement efforts on those instances of noncompliance that pose the greatest safety hazards, it is not clear whether the number of civil penalties issued, or their amounts, are having the desired effect of improving compliance. 
Without an evaluation of its enforcement program, FRA is missing an opportunity to obtain valuable information on the performance of this program and on any need for adjustments to improve this performance. In the report we issued last week, we recommended that FRA (1) develop and implement measures of the direct results of its inspection and enforcement programs and (2) evaluate the agency’s enforcement program to provide further information on its results, the need for additional data to measure and assess these results, and the need for any changes in this program to improve performance. FRA did not express a view on these recommendations when it commented on our draft report. As part of our normal recommendation follow-up activity, we will work toward FRA’s adoption of our recommendations. Madam Chairwoman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee might have. For further information on this statement, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony were Judy Guilliams-Tapia, Bonnie Pignatiello Leer, and James Ratzenberger. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Although the overall safety record of the railroad industry, as measured by the number of train accidents per million miles traveled, has improved markedly since 1980, there has been little or no overall improvement over the past decade. Serious accidents resulting in injuries and deaths continue to occur, such as one in Graniteville, South Carolina, that resulted in 9 deaths and 292 injuries. The Federal Railroad Administration (FRA) develops safety standards and inspects and enforces railroads' compliance with these standards. On January 26, 2007, GAO reported on FRA's overall safety oversight strategy. (See GAO-07-149.) The report discussed how FRA (1) focuses its efforts on the highest priority risks related to train accidents in planning its oversight, (2) identifies safety problems on railroad systems in carrying out its oversight, and (3) assesses the impact of its oversight efforts on safety. GAO recommended that FRA (1) put into place measures of the results of its inspection and enforcement programs and (2) evaluate its enforcement program. In reviewing a draft of that report, the Department of Transportation did not provide overall views on its contents or its recommendations. The statement is based on GAO's recent report. In planning its safety oversight, FRA is focusing its efforts on the highest priority risks related to train accidents through initiatives aimed at addressing their main causes--human behaviors and defective track--as well as through improvements in its inspection planning approach. FRA's May 2005 National Rail Safety Action Plan, the agency's overall strategy for targeting its oversight at the greatest risks, provides a reasonable framework for guiding these efforts. FRA's initiatives to address the most common causes of accidents are promising, although the success of many of them will depend on voluntary actions by the railroads. 
In addition, under the action plan, FRA has adopted a new inspection planning approach in which inspectors focus their efforts on locations that data-driven models indicate are most likely to have safety problems. In carrying out its safety oversight, FRA identifies a range of safety problems on railroad systems mainly by determining whether operating practices, track, and equipment are in compliance with minimum safety standards. However, FRA is able to inspect only about 0.2 percent of railroads' operations each year, and its inspections do not examine how railroads are managing safety risks throughout their systems that could lead to accidents. Such an approach, as a supplement to traditional compliance inspections, is used in the oversight of U.S. commuter railroads and pipelines and of Canadian railroads. GAO did not recommend that FRA adopt this approach because the agency's various initiatives to reduce the train accident rate have not yet had time to demonstrate their effects on safety. FRA uses a range of goals and measures to assess the impact of its oversight, such as (1) goals to target its inspection and enforcement programs at reducing various types of railroad accidents and (2) related measures, such as rates of track-caused accidents, to monitor its progress. However, FRA's ability to make informed decisions about these programs is limited because it lacks measures of their direct results, such as the correction of identified safety problems. Furthermore, FRA has not evaluated the effectiveness of its enforcement program.
The goal of SNAP, formerly known as the federal Food Stamp Program, is to help low-income individuals and households obtain a more nutritious diet. It does so by supplementing their income with benefits to purchase allowable food items. The federal government pays the full cost of the benefits and shares the responsibility and costs of administering the program with the states. Specifically, FNS is responsible for promulgating program regulations and ensuring that states comply with these regulations by issuing guidance and monitoring their activity. FNS officials at headquarters are assisted in this oversight work by officials in seven regional offices. FNS also determines which retailers are eligible to accept SNAP benefits in exchange for food and investigates and resolves cases of retailer fraud. State officials, on the other hand, are responsible for determining the eligibility of individuals and households, calculating the amount of their monthly benefits and issuing such benefits on an electronic benefit transfer (EBT) card in accordance with program rules. States are also responsible for investigating possible violations by benefit recipients and pursuing and acting on those violations that are deemed intentional. Intentional program violations include acts of fraud, such as making false or misleading statements in order to obtain benefits and trafficking (i.e., using benefits in unallowable ways, such as by exchanging benefits for cash or non-food goods and services or attempting to do so). Recipients can traffic benefits by: Selling benefits to retailers – recipients collaborate with retailers who exchange cash for SNAP benefits. For example, a retailer can allow a recipient to charge $100 on his or her EBT card and then pay the recipient $50 instead of providing food. 
Selling EBT cards to another person – the recipient exchanges the EBT card and the corresponding Personal Identification Number (PIN) for cash or non-food goods or services (e.g., rent or transportation). These sales can occur in person or by posting offers on social media and e-commerce sites. All of these trafficking activities may result in recipients having to give their EBT cards and PINs to another person who may not return the card. Recipients can report sold EBT cards as lost or stolen to state agencies or EBT management contractors and receive new cards which can be used for future transactions, for example, when the benefits are replenished the next month. According to a September 2012 U.S. Department of Agriculture Office of Inspector General (USDA OIG) report, the magnitude of program abuse due to recipient fraud is unknown because states do not have uniform ways of compiling the data that would provide such information. Therefore, in the report, the USDA OIG recommended that FNS determine the feasibility of creating a uniform methodology for states to calculate their recipient fraud rate. As FNS seeks to address this recommendation, it is legally required to monitor its potential improper payments of SNAP benefits. The agency estimated an improper payment or error rate of the program at 3.4 percent, which represented an estimated $2.6 billion in wrongful payments, in fiscal year 2013. The percentage represents benefits distributed in error due to administrative as well as recipient errors, not all of which can be attributed to fraud. However, due to the large dollar amount involved in improper payments, the Office of Management and Budget (OMB) has placed SNAP on its list of high-error programs. Furthermore, after studying the cause of these errors, USDA officials stated that over 90 percent were due to verification errors. 
These types of errors occur when an agency fails to or is unable to verify recipient information—including earnings, income, assets, or work status—even though verifying information exists in third-party databases or other resources. Examples of verification errors include an agency not confirming a recipient’s reported earnings or work status through existing databases, or the recipient failing to provide an agency with information on earnings. Given FNS’s role of directly overseeing retailer eligibility and disqualification, federal officials have traditionally focused on retailer trafficking. In 1996, FNS was given legal authority to disqualify retailers by using EBT transaction data—which display suspicious patterns of benefit use—as its sole form of evidence. FNS maintains such transaction data within its Anti-Fraud Locator Using Electronic Benefits Transfer Retailer Transactions (ALERT) system. In our October 2006 report on potential retailer fraud, we found that federal officials had concerns about state efforts to address recipient trafficking, and recognized that retailer trafficking can only occur when willing recipients are involved. At the time of that report, federal officials told us that they were providing state officials with lists of recipients involved in their retailer trafficking cases, but many states were not acting on this information at the time because it was difficult and costly to prove individual trafficking cases. Furthermore, as we noted in our September 2010 report, the USDA OIG found that states were not analyzing their EBT data to detect misuse of benefits, largely because FNS did not require this. FNS has calculated a retailer trafficking rate, which was estimated to involve 1.3 percent of benefits issued from fiscal year 2009 to 2011—a total of $858 million. States must adhere to several requirements for detecting SNAP recipient fraud, conducting investigations and providing due process prior to disqualifying program violators. 
For example, states are required to have fraud detection units covering areas in which 5,000 or more households participate in the program; however, those working on fraud investigations need not be dedicated to this work full-time or exclusively to SNAP cases. States must also conduct data matches at the time of application and at other times to determine whether the information provided for a potential recipient is for someone who is incarcerated, deceased, or disqualified from the program. State SNAP agencies are responsible for pursuing judgments against those who intentionally violate SNAP rules. These judgments can be pursued within the state agency through an Administrative Disqualification Hearing (ADH) or through the judicial system in a court determined to have jurisdiction over the case. When a state decides to administratively pursue disqualification of a recipient for intentional program violations, the state is responsible for conducting a series of actions, such as providing timely notification to the recipient that there will be an ADH, and for states that have waiver procedures, that the recipients may waive their right to a hearing. If it is determined through the hearing or criminal prosecution that a person has intentionally violated program rules, or the person has waived the hearing, only the person involved in the case is disqualified, not the entire household; however, the entire household is responsible for repaying the specific ill-gotten or misused benefit amount. States are generally allowed to retain 35 percent of the fraud-related, overpaid benefits they collect, and the rest is returned to the federal government. In fiscal year 2012, states reported to FNS that they collected about $74 million in fraud-related claims. The electronic Disqualified Recipient System (eDRS) is also a data matching tool to help prevent improper payments, and FNS requires that states check this system prior to providing benefits to an applicant. In addition, program violators are subject to disqualification penalties, which vary based on the number and type of offense. 
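The 35 percent retention rule described above implies a simple split of any collected overpayments. As a minimal illustrative sketch (not part of the report), the following applies the general rule to the roughly $74 million in fraud-related claims states reported collecting in fiscal year 2012:

```python
# Illustrative split of fraud-related collections under the general rule that
# states may retain 35 percent of recovered overpayments; the $74 million
# figure is the approximate total states reported collecting in FY 2012.
collected = 74_000_000

state_share = collected * 35 // 100      # retained by the states
federal_share = collected - state_share  # returned to the federal government

print(f"States retain: ${state_share:,}")    # States retain: $25,900,000
print(f"Federal share: ${federal_share:,}")  # Federal share: $48,100,000
```

Actual retention can vary by case type, so the figures are indicative only.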
Furthermore, states are required to report their fraud-related activity to FNS on their annual Program and Budget Summary Statements. This report, provided through the form FNS-366B, is to include the number of investigations and disqualifications, and the dollar amount of their fraud claims. The 11 states we reviewed employed a range of detection tools, but experienced mixed success in combating SNAP fraud. Although some were able to leverage additional resources, officials in most states reported challenges in pursuing potential fraud because their staff remained limited while recipient numbers grew. Furthermore, pursuing cases through administrative hearings and the courts generally resulted in disqualifications, but collecting overpayments was a challenge. In the majority of states we reviewed, officials told us they were using well-known tools for detecting potential recipient eligibility fraud, such as data matching and referrals obtained through fraud reporting hotlines and websites. Specifically, all 11 states that we reviewed had fraud hotlines or websites, and all matched information about SNAP applicants and recipients against various data sources to detect those potentially improperly receiving benefits, as FNS recommended or required. (See table 1.) Beyond the required and recommended data matches, Florida, Texas, Michigan, and one county in North Carolina use specialized searches that check numerous public and private data sources, such as school enrollment, vehicle registration, vital statistics, and credit reports and other data on out-of-state program participation and benefit use to detect potential fraud prior to providing benefits to potential recipients. Florida officials told us that this focus on preventive efforts was key to helping them manage recent constraints on their investigative budgets. 
Specifically, Florida officials mentioned that when their investigative staff was reduced because of budget cuts in 2005, they shifted the majority of their anti-fraud resources from post-eligibility fraud investigations to preventing ineligible individuals from receiving SNAP benefits. This shift has allowed the state to more cost-effectively manage its efforts to combat potential fraud by developing detection tools against eligibility fraud and improper benefit receipt, such as identification verification software and profiles that case workers can use to identify error-prone applications. To address trafficking, officials in the 11 states reported that they analyzed patterns of EBT transactions and monitored replacement card data and online postings, as recommended or required by FNS. (See table 2.) When reviewing EBT transactions, state officials attempt to uncover patterns that may indicate trafficking, much in line with what FNS has been doing for years to uncover retailer fraud. Officials in two states mentioned that, for some cases, this EBT data analysis is done only after receiving fraud referrals through the hotline and websites. For example, while Florida officials reported that they routinely review EBT transaction data for suspicious patterns, Texas officials reported that they only review transactions for individuals or households after they have been referred to them because of potential fraud. The size and organization of the investigative units differed among the 11 states we reviewed, with wide variation in the number of staff available to investigate potential SNAP recipient fraud. For example, in 2013, Massachusetts and New Jersey had 498,580 and 432,270 recipient households, respectively, but Massachusetts, where SNAP was administered at the state level, had just 37 investigators, while county-administered New Jersey had nearly 300. Furthermore, the investigators in the 11 states we reviewed each had responsibilities unrelated to SNAP. 
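The staffing disparity described above can be summarized as a households-per-investigator ratio. As a minimal illustrative sketch (not part of the report), the following computes that ratio from the two states' reported 2013 figures, approximating New Jersey's "nearly 300" investigators as 300:

```python
# Households-per-investigator ratio for the two states cited above.
# New Jersey's "nearly 300" investigators is approximated here as 300.
def households_per_investigator(households: int, investigators: int) -> int:
    return round(households / investigators)

massachusetts = households_per_investigator(498_580, 37)  # state-administered
new_jersey = households_per_investigator(432_270, 300)    # county-administered

print(massachusetts)  # 13475 households per investigator
print(new_jersey)     # 1441 households per investigator
```

On these figures, each Massachusetts investigator carried roughly nine times the caseload of a New Jersey investigator.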
Although officials in three states—Massachusetts, Tennessee, and Wyoming—reported that the majority of their investigations involved potential SNAP fraud, state investigators in all 11 states we reviewed were also responsible for pursuing fraud in other public assistance programs, such as Medicaid, Temporary Assistance for Needy Families, and child care and housing assistance programs. In North Carolina, fraud investigation was not the primary responsibility of some local officials who did this work; state officials reported that some counties opted to have caseworkers or program supervisors conduct fraud investigations. In general, state officials reported that limits on staffing levels are significant hindrances to their investigations of eligibility fraud and trafficking, with 8 of the 11 states we reviewed reporting inadequate staffing due to attrition, turnover, or lack of funding. Of the 10 states that were able to provide the information, the number of SNAP households per investigator increased in 8 states between fiscal years 2009 and 2013 by as much as 155 percent. In contrast, Maine and Michigan have increased their investigative staff, which decreased their household-to-investigator ratios in fiscal year 2013. (See fig. 1.) In their effort to combat potential fraud, some states implemented a way to leverage their available investigative resources. Specifically, four of the states we reviewed—Florida, Massachusetts, Michigan, and Nebraska—had implemented and two states—Maine and North Carolina—were in the process of implementing state law enforcement bureau (SLEB) agreements. FNS has been supportive of states’ efforts to establish these agreements between state SNAP agencies and federal, state, and local law enforcement agencies which enable state SNAP investigators to cooperate in various ways with local, state, and federal law enforcement agents, including those within the USDA OIG. 
For example, under these agreements, law enforcement agencies can notify the SNAP fraud unit when they arrest someone who possesses multiple EBT cards, and SNAP agencies can provide “dummy” EBT cards for state and local officers to use in undercover trafficking investigations. According to officials in one Florida county, this type of cooperation allowed local police officers to make 100 arrests of recipients who were allegedly trafficking SNAP benefits in the county’s first undercover operation. Furthermore, some state and local officials in Michigan, Maine, and Florida told us that increasing awareness of SNAP trafficking among local law enforcement officials helps in resolving these matters when potential trafficking is uncovered in other police investigations. For example, while investigating drug-related crimes, officials in those states told us they have uncovered multiple EBT cards in the possession of one person. In light of their increased SNAP caseload, some officials suggested changing the incentive structure to help states address the costs of investigating potential SNAP fraud. According to GAO’s Fraud Prevention Framework, investigations, although costly and resource-intensive, can help deter future fraud and ultimately save money. Officials in one state told us that it would help if FNS provided additional financial incentives for states to prevent potential fraud at the time of application beyond what is currently provided for recovered funds. When fraud by a recipient is discovered, the state may generally retain 35 percent of the recovered overpayment, but when a state detects potential fraud by an applicant and denies the application, there are no payments to recover.
Officials in four of the states we reviewed said that their anti-fraud efforts could be enhanced if the percentage of recovered overpayments that states may retain was increased, and officials in three states said that FNS should direct that states apply the retention money to anti-fraud efforts. Overall, state anti-fraud incentives have the potential to produce federal cost savings by encouraging state officials to prevent the benefits from being issued to ineligible people as well as deter fraud by more actively investigating and recovering funds. Officials in most of the 11 states we reviewed said that they have mainly pursued cases of eligibility fraud, such as the misrepresentation of household income or composition. In addition to testimony from witnesses, state investigators are able to build cases based on public records and employment statements to prove the misrepresentation. However, state officials reported that trafficking is more difficult to prove. Officials in North Carolina and a prosecutor in Michigan noted that trafficking cases involve two individuals breaking the law, and it can be difficult to get one to testify against the other. For example, the Michigan prosecutor told us about a case in which a landlord for a subsidized housing complex was receiving SNAP benefits in exchange for rent, and the tenants would not testify against this person because they thought she was doing them a favor by accepting the SNAP benefits as payment. State officials we interviewed also reported that the willingness of local prosecutors to pursue charges in court for SNAP fraud has varied across jurisdictions. Officials in eight states reported that a minimum dollar threshold of fraudulently-obtained benefits was required for prosecuting cases in court, ranging from $100 (in Tennessee) to $5,000 (in Texas). Prosecutors in some local jurisdictions were not willing to accept SNAP fraud cases at all. 
For example, prosecutors in one county in North Carolina told SNAP officials that they would not prosecute SNAP fraud cases because they need their resources for more serious criminal cases. Texas officials said that some local prosecutors in their state have also refused to prosecute SNAP cases due to workload concerns. Other prosecutors we interviewed said that to make efficient use of their limited resources, they have often sought plea deals that require the individual to repay the government rather than going to trial. Such plea deals may call for the individual to be arrested if the SNAP benefits are not repaid, and may also result in the person having a criminal record as a result of the plea. Furthermore, plea deals mitigate some of the unpredictability of trying a case before a jury. Prosecutors in Tennessee and Florida said that juries may be unwilling to convict individuals of SNAP fraud because they may be sympathetic to recipient claims that they do not understand government regulations or are compelled to commit fraud to support their families. SNAP officials in North Carolina said they were concerned about losing the deterrent effect of prosecutions due to the unwillingness of the judicial system to undertake SNAP recipient fraud cases. Recovering overpayments from individuals found to have committed fraud in either an administrative or a court proceeding has been a challenge, according to officials we interviewed in Florida and Michigan. Specifically, those officials reported that an individual who is disqualified may be required to repay an overpayment, but may not have enough income to do so. Furthermore, if the individual becomes eligible for SNAP benefits again after the period of disqualification is over, the state may garnish the future SNAP benefits to repay the recipient’s prior debt. However, when an individual is permanently disqualified from the program, garnishment is not possible.
To encourage people to repay the benefits, one local Michigan prosecutor has established a program that offers to erase the individual’s criminal record if the individual makes full restitution through a repayment plan. The program helps collect restitution of fraud payments in all the county’s welfare programs and has had an 80 percent success rate in collecting repayments, according to the local prosecutor. States’ difficulty collecting overpayments compounds their concerns about having adequate resources for investigations because some states use recovered overpayments for this purpose. Selected states reported difficulties using FNS-recommended replacement card data as a fraud detection tool, and our data analysis found that a more targeted approach may better identify potential fraud. Our testing found the recommended e-commerce monitoring tool less effective than manual searches in detecting postings indicative of potential trafficking, and we found the tool for monitoring social media to be impractical for states due to the volume of irrelevant data. Although FNS requires that states look at replacement card data as a potential indicator of trafficking, states reported difficulties using the data as a fraud detection tool. In 2012, FNS issued guidance to states based on a best practice used in North Carolina, encouraging states to review recipients who have requested four or more replacement EBT cards within 12 months because such behavior may indicate trafficking. In 2014, FNS finalized a rule that requires states to monitor replacement card data and send notices to those SNAP households requesting excessive replacement cards, defined as at least four cards in a 12-month period. All 11 states we reviewed reported tracking recipients who make excessive requests for replacement EBT cards and sending them warning letters, as required by FNS, but they have not had much success in detecting fraud through that method.
At the time of our review, four states reported that they had not initiated any trafficking investigations as a result of this monitoring, and five states reported a low success rate for such investigations. One state had just started monitoring replacement card data. Only one of our selected states reported some success using the replacement card data to identify and pursue trafficking. Furthermore, although state officials recognized that some replacement card requests may be related to potential fraud, officials from 7 of the 11 states reported that the current detection approach specified by FNS often leads them to people who make legitimate requests for replacement cards for reasons such as unstable living situations or a misunderstanding of how to use the SNAP EBT card. North Carolina officials also mentioned that when they originally developed this approach currently required by FNS, it was not intended to detect trafficking. Rather, it was to help them manage the number of replacement card requests they received. FNS is aware of states’ concerns about the effectiveness of this effort, but it continues to stress that monitoring these data is worthwhile. For example, FNS officials reported that they are also aware that many replacement card requests are legitimate but they feel that the monitoring of replacement card data has an important educational component, as it allows states to identify situations where a recipient requires education on how to use their SNAP EBT card. FNS officials also reported that states have seen a reduction in households continuing to request replacement cards related to these efforts. However, FNS’s Western Regional officials reported that, given states’ experiences with the current process, it may be better for states to be more selective in sending notices. 
Our analysis found indicators of potential SNAP trafficking in households with excessive replacement cards, suggesting that states may be able to use replacement card data to help identify trafficking by taking a targeted approach to analyzing the data in conjunction with related transaction data. We identified 7,537 SNAP recipient households in three selected states—Michigan, Massachusetts and Nebraska—that both received replacement cards in four or more monthly benefit periods in fiscal year 2012 and made transactions considered to be potential signs of trafficking. Furthermore, as discussed below, we developed an approach for analyzing replacement card data that may provide states with a more targeted way to identify potential trafficking activity and reduce the number of households for further review by up to 40 percent. Given that states reported having limited resources for conducting investigations, a more targeted approach may enhance their ability to pursue SNAP households at higher risk of trafficking. Overall, our approach to analyzing replacement card data reduced the number of households for further review by 33 percent compared to the current FNS regulation. For the purposes of our analysis, we defined excessive replacement card households as those receiving replacement cards in four or more unique benefit periods in a year. Our approach took into account FNS’s rule that defines excessive replacement cards as at least four requested in a year. However, we further refined our analysis to consider the monthly benefit period of replacement card requests. SNAP benefits are allotted on a monthly basis, and a recipient who is selling the benefits on their EBT card and then requesting a replacement card would generally have only one opportunity per month to do so. 
If a SNAP recipient is requesting a replacement card because they have just sold their EBT card and its associated SNAP benefits, it is unlikely that there would be more benefits to sell until the next benefit period. As a result, additional replacement card requests in the same benefit period may not indicate increased risk of trafficking. The current FNS regulation would include households for review that received at least four replacement cards at any time in the previous year, including households receiving four cards in the same monthly benefit period. Alternatively, the number of benefit periods with replacement cards may be a better indicator of trafficking risk than simply the number of requested replacement cards. By taking into account the benefit period of replacement card requests, we significantly decreased the number of households in the three selected states that may warrant further review of potential trafficking compared to all households requesting four or more replacement cards at any time during fiscal year 2012. For example, as shown in table 3, while there were 8,190 recipient households in Michigan that received four or more replacement cards in fiscal year 2012, our approach identified 4,935 households that received replacement cards in four or more benefit periods. For the 10,266 high replacement card households we reviewed, we found that 73 percent were conducting other suspicious activities based on criteria used by FNS and state SNAP officials. We reviewed fiscal year 2012 transaction data, analyzing transactions from the same benefit period when the household received a replacement card for indications of trafficking. Specifically, we analyzed the data for trafficking indicators based on suspicious transaction types already used by FNS and state SNAP officials, such as unusually large-dollar transactions or even-dollar transactions. 
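The distinction between counting raw replacement cards (the current FNS rule) and counting distinct monthly benefit periods with a replacement card (our refined approach) can be sketched as follows. This is an illustrative sketch with fabricated data, not the code used in our analysis.

```python
from collections import defaultdict

# Hypothetical replacement-card requests: (household id, monthly benefit period)
replacements = [
    ("H1", "2012-01"), ("H1", "2012-01"), ("H1", "2012-03"),
    ("H1", "2012-05"), ("H1", "2012-07"),
    ("H2", "2012-02"), ("H2", "2012-02"), ("H2", "2012-02"), ("H2", "2012-02"),
]

card_counts = defaultdict(int)   # FNS rule: total cards in the year
period_sets = defaultdict(set)   # refined: distinct benefit periods with a card
for household, period in replacements:
    card_counts[household] += 1
    period_sets[household].add(period)

fns_flagged = {h for h, n in card_counts.items() if n >= 4}
refined_flagged = {h for h, periods in period_sets.items() if len(periods) >= 4}

print(fns_flagged)      # both households received 4 or more cards
print(refined_flagged)  # only H1: H2's four cards all fell in one benefit period
```

Under the current rule both households would be flagged for review, while the benefit-period approach retains only the household whose requests recur month after month, which is the pattern consistent with repeatedly selling a card.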
We tested the transaction data for six different suspicious transaction types, resulting in 22,866 transactions flagged as potential trafficking indicators. As shown in table 4, we identified 7,537 households out of those we reviewed that made at least one suspicious transaction in the same benefit period that the household received a replacement card in fiscal year 2012. These 7,537 households made over $26 million in purchases with SNAP benefits during fiscal year 2012. Overall, 84 percent of high replacement card households in Massachusetts, 65 percent in Michigan, and 63 percent in Nebraska made at least one suspicious transaction indicating potential trafficking. For more detailed information on the number of flagged transactions made by selected households in each of the three states, see appendix II. Furthermore, we found that the likelihood of suspicious transactions generally increased with the number of benefit periods in which replacement cards were requested. For example, while 60 percent of Michigan households with replacement cards in four benefit periods made at least one suspicious transaction, 86 percent of households with replacement cards in seven benefit periods had made suspicious transactions, and 100 percent of households with replacement cards in 10 or 11 benefit periods had. In Nebraska, 100 percent of households with replacement cards in eight or more benefit periods also made suspicious transactions, indicating potential trafficking. While 84 percent of households had five or fewer trafficking flags, there were 262 households, or 3 percent, with 10 or more trafficking flags. The highest number of flags for a single household was 41. This household’s flagged transactions showed suspicious large, even-dollar transactions, often at the same small grocery store. Table 5 provides examples of suspicious transactions made by this household in one benefit period. 
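The cross-check described above, matching transactions against the benefit periods in which a household also received a replacement card, can be sketched as follows. The dollar cutoff and flag labels are illustrative assumptions standing in for the suspicious-transaction criteria used by FNS and state officials, and the data are fabricated.

```python
LARGE_THRESHOLD = 200.00  # assumed cutoff for an "unusually large" purchase

def transaction_flags(amount):
    """Return trafficking-indicator labels for one purchase amount."""
    flags = []
    if amount >= LARGE_THRESHOLD:
        flags.append("large-dollar")
    if amount == int(amount):
        flags.append("even-dollar")
    return flags

# Fabricated transactions: (household id, benefit period, purchase amount)
transactions = [
    ("H1", "2012-01", 250.00),
    ("H1", "2012-02", 14.37),
    ("H1", "2012-03", 80.00),
    ("H2", "2012-01", 300.00),
]
# Benefit periods in which each household received a replacement card
replacement_periods = {"H1": {"2012-01", "2012-03"}, "H2": {"2012-04"}}

# Flag only transactions in the same benefit period as a replacement card
flagged = [
    (hh, period, amount, transaction_flags(amount))
    for hh, period, amount in transactions
    if period in replacement_periods.get(hh, set()) and transaction_flags(amount)
]
# H2's large purchase is ignored: no replacement card in that benefit period.
```

Restricting the flags to replacement-card benefit periods is what ties the transaction analysis back to the card-selling behavior, rather than flagging every large or even-dollar purchase programwide.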
By comparing the number of benefit periods with replacement cards and the total number of transaction trafficking flags, we were able to better identify those households that may be at higher risk of trafficking. For example, as shown in figure 2, while there were 4,935 SNAP households in Michigan that received excessive replacement cards, we identified just 39 households that received excessive replacement cards and made transactions resulting in 10 or more trafficking flags. While state SNAP officials may not want to limit their investigations to such a small number of households, this type of analysis may help provide a starting point for identifying higher priority households for further review. Recognizing the challenges with the current approach, FNS officials stated that they are working on how to better link excessive replacement card requests to potential trafficking. To inform these efforts, FNS has also commissioned a study focused on detecting indications of potential trafficking by those requesting excessive replacement cards. FNS officials feel it is too early to provide additional guidance or draw conclusions about the effectiveness of current efforts, but officials intend to provide additional guidance to states once they have sufficient data to inform a trafficking detection methodology that can be used nationwide. FNS provided states with guidance on installing free web-based software tools for monitoring certain e-commerce and social media websites for online sales of SNAP benefits, but some state officials from selected states reported problems with these detection tools. The tools employ Really Simple Syndication (RSS) technology, which is designed to keep track of frequently-updated content from multiple websites and automatically notify users of postings that contain key words.
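The household prioritization described earlier in this section, combining the count of benefit periods with replacement cards and the total number of trafficking flags, can be sketched as follows. The household data are fabricated; the 10-flag cutoff mirrors the threshold discussed above but is otherwise an illustrative choice.

```python
# Fabricated per-household summary:
# household id -> (benefit periods with a replacement card, total trafficking flags)
households = {
    "H1": (4, 2),
    "H2": (6, 12),
    "H3": (10, 41),
    "H4": (4, 0),
}
FLAG_THRESHOLD = 10  # illustrative cutoff for "higher priority" households

# Keep only households at or above the flag threshold, highest risk first.
priority = sorted(
    (h for h, (_, flags) in households.items() if flags >= FLAG_THRESHOLD),
    key=lambda h: households[h],
    reverse=True,
)
print(priority)  # ['H3', 'H2']
```

A short ranked list like this gives investigators a starting point without requiring them to limit their work to only the highest-flag households.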
FNS stated that these tools could automate the searches that states would normally have to perform manually on these websites, but acknowledged that the tool for social media websites may not work well, given that these websites do not organize their posts geographically. Of the 11 states we reviewed, officials from only one selected state (Tennessee) reported that the tool worked well for identifying SNAP recipients attempting to sell their SNAP benefits online. Officials in three states—Michigan, Utah, and Florida—reported that they monitored social media websites manually because of the technical challenges they experienced with using the tools, including installation and operation. Additionally, officials in one state noted that the automated tools have placed an excessive demand on staff because they had to sift through the many false-positive leads that were generated. Officials from three of the states we reviewed reported that although they do not routinely monitor websites to detect fraud, they have found these websites to be useful sources of information about recipients they are already investigating. FNS officials acknowledge that there are limitations to the current monitoring tools, and stated that they provided these tools at the request of states to help with monitoring efforts as states had reported that manual monitoring was cumbersome and difficult given limited resources. FNS officials report that they are currently conducting a study of the effectiveness of the guidance to states and intend to make recommendations for improvements based on the results of the study. In addition to the guidance provided to states, FNS officials reported that they have contacted popular e-commerce and social media websites in the past regarding potential SNAP trafficking online, and continue to work with the websites on detecting and removing postings advertising the sale of SNAP benefits online. 
We tested the automated detection tools recommended by FNS on selected geographical locations covering our selected states and found them to be of limited effectiveness for states’ fraud detection efforts. A crucial element of an effective fraud prevention framework is having resources and tools to continually monitor and detect potential fraud. Our testing of the recommended automated tool for monitoring e-commerce websites found it did not detect most of the postings found through manual website searches. Furthermore, we found the automated tool for monitoring social media websites to be impractical for states’ fraud detection efforts. Although the recommended automated tool for monitoring e-commerce websites was intended to potentially replace the need for states to perform manual searches on these websites, our testing found that manual searches returned more postings indicative of potential SNAP trafficking than the automated tool, and that most of the postings detected through manual searches were not detected by the automated tool. We tested the recommended tool on one popular e-commerce website over 30 days, and monitored 19 geographical locations covering the 11 selected states, spending 10 hours in total monitoring for postings indicative of potential SNAP trafficking. Through our manual and automated searches, we detected a total of 1,185 postings containing one of our two key terms, “EBT” or “food stamps”; the use of other terms could have yielded additional or fewer posting results through our automated and manual searches. Out of these 1,185 postings, we detected 28 postings indicative of potential SNAP trafficking; they advertised the potential sale of food stamp benefits in exchange for cash, services, or goods (see fig. 4). We counted postings that did not indicate trafficking as false positives. (See fig. 3.)
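The keyword matching underlying both our manual searches and the RSS-based tools can be illustrated with a minimal sketch. The feed content below is fabricated for demonstration, and the plain substring matching shown is one source of the false positives states reported sifting through.

```python
import xml.etree.ElementTree as ET

KEYWORDS = ("ebt", "food stamps")  # the two search terms used in our testing

# A fabricated RSS 2.0 feed standing in for an e-commerce listings feed.
rss = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item><title>Couch for sale</title><description>Good condition</description></item>
  <item><title>Trading EBT for cash</title><description>Will meet locally</description></item>
  <item><title>Food stamps accepted? asking</title><description>question post</description></item>
</channel></rss>"""

def matching_items(feed_xml, keywords):
    """Return titles of feed items whose title or description contains a keyword."""
    root = ET.fromstring(feed_xml)
    hits = []
    for item in root.iter("item"):
        text = " ".join(
            (item.findtext(tag) or "") for tag in ("title", "description")
        ).lower()
        # Note: bare substring matching ("ebt" inside "debt", or a post merely
        # mentioning food stamps) produces false positives a reviewer must sift.
        if any(k in text for k in keywords):
            hits.append(item.findtext("title"))
    return hits

print(matching_items(rss, KEYWORDS))
```

Both keyword posts match here, but only the first actually offers a sale; the second is the kind of irrelevant lead that, at volume, made the social media tool impractical for states.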
The 19 geographical locations we monitored across the 11 selected states were: Jacksonville and Southern Florida (FL); Boston and Worcester (MA); Maine (ME); Detroit metro and Grand Rapids (MI); Charlotte and Raleigh (NC); Lincoln and Omaha (NE); Northern New Jersey (NJ); Memphis and Nashville (TN); Houston and San Antonio (TX); Salt Lake City and Provo (UT); and Wyoming (WY). (North Carolina’s SNAP program is county-administered.) FNS has recently issued regulations and guidance and conducted a national review of state anti-fraud activities as part of its increased oversight. Despite these efforts, FNS does not have consistent and reliable data on state anti-fraud activities, primarily because its guidance to the states on what data to report is unclear. Since 2011, FNS has increased its anti-fraud oversight activities, which included new regulations and guidance and a nationwide review of state agencies. Partially in response to public concerns, the Under Secretary for Food, Nutrition, and Consumer Services asked states to renew their efforts to combat SNAP recipient fraud, and since then FNS has promulgated new regulations and provided additional guidance to direct states in these efforts. (See table 6 for details on key regulations, guidance, and policy developments since 2011.) In fiscal year 2013, for the first time, FNS examined states’ compliance with federal requirements governing SNAP anti-fraud activities through Recipient Integrity Reviews. (See fig. 5 for an overview of the review components.) These assessments were conducted by FNS regional office staff and included interviews with state officials, observations of state hearing proceedings, and case file reviews in all 50 states and the District of Columbia. As part of these reviews, federal officials also analyzed information from program reports, including those from eDRS, which is used to track disqualified SNAP recipients, and the Program and Budget Summary (Form FNS-366B), which is used to report anti-fraud activities for all the states.
Following these reviews, FNS regional officials issued state reports that included findings and, where appropriate, required corrective actions. FNS officials told us that timeframes for taking corrective actions varied by the problem, but they generally allow states a year to address them. FNS regional officials also acknowledged states’ noteworthy initiatives or best practices in the state reports, such as Michigan’s case management system, which will improve the state’s ability to track the status and outcomes of investigations; Washington’s standardized training for investigators; and Indiana’s out-of-state usage report aimed at identifying potential trafficking by listing households that made 100 percent of their EBT transactions in another state for three months. FNS officials also reported that they provide their regional staff the opportunity to discuss such best practices during monthly teleconferences. Additionally, FNS officials present information on best practices to states during national conferences. FNS began conducting fiscal year 2014 Recipient Integrity Reviews in November 2013 and intends to complete them in September 2014. In addition to its oversight efforts, FNS has 10 studies under way that are aimed at improving federal and state efforts to address potential recipient fraud. These studies represent a significant increase in its investment to learn more about recipient fraud; specifically, FNS designated about $3 million for this work in fiscal years 2013 and 2014, compared to none in prior years. Among other topics, these studies are to explore strategies for improving fraud detection. For example, the study titled Social Media Fraud Discovery is intended to assess the effectiveness of FNS’s current fraud detection approach and make recommendations for improvements.
There is also a series of work, known as the SNAP Recipient & Retailer Fraud Data Mining Studies, aimed at improving FNS’s and the states’ ability to more effectively anticipate, discover, and address fraudulent activity using predictive modeling. FNS expects to receive the results of these studies by September 2014. (Additional information on the 10 studies is provided in App. V.) Although states are required to regularly submit information on their anti-fraud activities to FNS, we found that these data are not reliable for ensuring program integrity and assessing states’ performance. Specifically, our review found that over half of the 2013 Recipient Integrity Review reports mentioned problems with the data states entered into eDRS, thereby affecting the information state officials used to ensure program integrity. Federal officials found that 30 states did not enter data within the federally-required timeframes, a problem that cut across each of the oversight regions. Federal officials also found that 15 states did not enter disqualification information for some cases at all. For 2 of these states, federal officials found that over 30 percent of the disqualifications mentioned in other federal reports were missing from their eDRS data. Furthermore, federal officials found that 10 states had entered data into the system inaccurately. Given the concerns with data quality, even though state officials are required to check eDRS to gather information on whether a program applicant has been disqualified in another state before issuing benefits, they are not allowed to deny an application based solely on the system’s data. Federal regulations require that states gather additional verifying information about a disqualification before denying a claim.
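Data-quality checks of the kind federal officials performed on eDRS records, flagging late and missing disqualification entries, can be sketched as follows. The 30-day entry deadline and the case records are assumptions for illustration; the actual federal timeframes and data layout differ.

```python
from datetime import date

REQUIRED_DAYS = 30  # assumed entry deadline, not the actual federal timeframe

# Hypothetical disqualification decisions:
# case id -> (date decided, date entered in eDRS, or None if never entered)
decisions = {
    "case-1": (date(2013, 1, 5), date(2013, 1, 20)),
    "case-2": (date(2013, 2, 1), date(2013, 4, 15)),
    "case-3": (date(2013, 3, 1), None),
}

# Entries made after the deadline (the "not within required timeframes" finding)
late = sorted(c for c, (decided, entered) in decisions.items()
              if entered is not None and (entered - decided).days > REQUIRED_DAYS)
# Decisions never entered at all (the "missing disqualification" finding)
missing = sorted(c for c, (_, entered) in decisions.items() if entered is None)
print(late, missing)  # ['case-2'] ['case-3']
```

Routine checks like these, run against the hearing and court records states receive, are one way states could validate eDRS entries as part of the corrective actions FNS expects.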
FNS regional officials told us that states’ eDRS data problems stemmed from a variety of factors, including challenges with receiving timely information about administrative hearing and court decisions and transferring data to the system. To help address concerns with eDRS data quality, FNS officials are currently offering tools, guidance, and training to state and regional officials. Furthermore, states with related findings from the Recipient Integrity Reviews are expected to take corrective action, including improving communications with ADH and court officials to receive more timely information and enhancing their procedures for validating data entered into the system. Through our review of the 2013 Recipient Integrity Review reports, we also found that FNS has a nationwide problem with receiving inaccurate data on state anti-fraud activities through the Program and Budget Summary Statement (Form FNS-366B), thereby potentially limiting its ability to provide oversight. We found that FNS regional officials could not reconcile the FNS-366B data reported with supporting documentation for 24 states, primarily due to data entry errors. Furthermore, some federal and state officials we interviewed recognized that there is not a consistent understanding of what should be reported on the FNS-366B because the guidance from FNS is unclear. For example, on the form, FNS instructs states to report investigations for any case in which there is suspicion of an intentional program violation before and after eligibility determination. According to state and federal officials we interviewed, this instruction does not clearly define what actions constitute an investigation that should be reported on the form. Also, officials in three of the seven regional offices were not aware of FNS-sponsored training on what should be reported on this form.
However, officials from the remaining four offices mentioned that FNS provided them training, such as webinars and teleconferences, on this form. As a result of the unclear guidance, various types of state efforts can be counted in the total number of investigations. After reviewing states’ reports, we found examples of inconsistencies in what states reported as investigations on the FNS-366B. Specifically, in fiscal year 2009, one state had about 40,000 recipient households, but reported about 50,000 investigations. During the same year, another state that provided benefits to a significantly larger population (about 1 million recipient households) reported about 43,000 investigations. Officials from the state that serves a smaller population explained that they included activities such as manually reviewing paper files provided by the state’s Department of Labor for each SNAP recipient with reported wages in the state; therefore, even if fraud was not suspected, this review was counted as an investigation. Officials from the state that serves a larger population said that they counted the number of times a potential fraud case was actively reviewed by investigators, including interviews with witnesses and research of related client information. Given these differences, state officials said that FNS and states are not able to compare program integrity performance, because each state is not counting the same activities. In addition, by fiscal year 2012, the new head official in the state that serves a smaller population decided to use an automated system to review the wage data. Therefore, only the cases that the query results identified as indicative of a benefit overpayment, whether from potential fraud or unintentional errors, were counted among the cases that needed to be investigated. As a result, for fiscal year 2012, the state that serves a smaller population reported conducting only about 8,000 investigations, making this count of investigations not comparable over time for that state.
Furthermore, these data inconsistencies could limit FNS’s ability to identify more effective and efficient practices for state anti-fraud efforts. For example, the lack of consistent data on investigations does not allow for studying matters such as the cost-benefit of investigations versus fraud claims established and/or collected across states, which could be of interest to FNS and states given states’ concerns with managing investigative resources. Given the ongoing fiscal pressures that face our nation, the unprecedented increase in SNAP participation and spending in recent years has focused attention on the importance of ensuring that these publicly-funded benefits are used appropriately, and that both the federal government and state agencies have strong controls for detecting and addressing fraud. Although investigations can ultimately deter fraud and save agency resources, states we reviewed have faced the challenge of limited staff to manage a growing program and raised questions about whether federal incentive structures could be designed to better support their work. For example, even though GAO has found preventative efforts to be the most efficient and effective means to address fraud, which may stop ineligible people from receiving benefits that may never be fully recovered, state officials said the current fraud-related incentive is focused on collecting overpayments. While federal officials would need to be mindful of the costs and benefits that any changes to the incentive structure would have for the overall program, absent additional incentives, states may not be taking advantage of opportunities to aggressively pursue recipient fraud. These investigative challenges have made efficient anti-fraud activities all the more critical.
Although some states have questioned the efficacy of tools FNS requires or recommends for detecting SNAP benefit trafficking, some additions and refinements to the guidance for these tools could make them more effective. For example, a more targeted approach to reviewing requests for replacement benefit cards could substantially reduce the administrative burden by identifying recipients who are more likely to be misusing their benefits throughout the year. Furthermore, although FNS has tried to improve efficiency with monitoring online postings, the lack of relevant leads from the recommended tools causes some to question whether this monitoring could be done in a better way. Meanwhile, FNS is working to learn more about states’ activities and better support anti-fraud work. For example, FNS has commissioned 10 studies intended to help the agency gain knowledge on how states can better detect potential recipient fraud. However, absent additional actions from FNS, such as guidance and training to the states on how and what data to report on their fraud-related activities, these data are not likely to be as useful as they should be. Specifically, without performance data that are consistent across states, FNS will not be able to determine whether certain state anti-fraud efforts may be more efficient and effective than others. FNS will need accurate and comprehensive information at the state level if it is to move forward in building a stronger national infrastructure for program integrity.
The Secretary of Agriculture should direct the Administrator of FNS to take the following four actions: Explore ways that federal financial incentives can better support cost-effective state anti-fraud activities; Establish additional guidance to help states analyze SNAP transaction data to better identify SNAP recipient households receiving replacement cards that are potentially engaging in trafficking, and assess whether the use of replacement card benefit periods may better focus this analysis on high-risk households potentially engaged in trafficking; Reassess the effectiveness of the current guidance and tools recommended to states for monitoring e-commerce and social media websites, and use this information to enhance the effectiveness of the current guidance and tools; and Take steps, such as guidance and training, to enhance the consistency of what states report on their anti-fraud activities. We requested comments on a draft of this product from USDA. On July 28, 2014, the Director of the SNAP Program Accountability and Administration Division provided us with the following oral comments. FNS agreed with our recommendations and reported that efforts were underway to address each of them. Specifically, FNS reported that, although the agency cannot change the state retention rate for overpayments without a change to federal laws, it plans to issue grants in this fiscal year to support state process improvements for detecting, investigating and prosecuting recipients engaged in trafficking. Furthermore, in the next fiscal year, FNS reported that it will issue grants to support states in building information technology to strengthen recipient integrity efforts, as authorized by the Agricultural Act of 2014. FNS also reported that its commissioned studies will help inform its efforts to assist states in developing better recipient fraud detection tools, including potentially issuing new related guidance. 
As of May 2014, the agency had already begun to receive study results. Lastly, in May 2014, FNS also formed a working group, consisting of program integrity staff from each of the regional offices, to revamp the Form FNS-366B. Among other things, FNS reported that this group is tasked with exploring ways to clearly define the data elements on this form and adding elements that will help FNS glean better information on recipient trafficking as well as the value and impact of state anti-fraud efforts. FNS also provided technical comments, which were incorporated into the report as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Agriculture, the FNS Administrator and other relevant parties. This report will also be available at no charge on the GAO website at http://www.gao.gov . If you or your staff have any questions about this report, please contact us at (202) 512-7215 or [email protected], or (202) 512-6722 or [email protected] . Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The objectives of this report were to review the following: (1) how selected state agencies combat SNAP recipient fraud; (2) the effectiveness of certain fraud detection tools recommended to states, including benefit card replacement data and e-commerce and social media website monitoring; and (3) FNS’s oversight of state anti-fraud efforts. To address these objectives, we focused on federal and state SNAP recipient anti-fraud work for fiscal years 2009 to 2014, a period after the program received additional funding through the American Recovery and Reinvestment Act of 2009 (Recovery Act). We reviewed relevant federal laws, regulations, program guidance, and reports, and we interviewed FNS officials in headquarters and all seven regional offices to address all three objectives. 
Specifically, to determine how selected state agencies are pursuing SNAP recipient fraud, we reviewed 11 states, where we interviewed knowledgeable state and local officials about their recipient anti-fraud work and obtained related documentation. (See below for more information on the criteria we used to select states.) We also analyzed fiscal year 2012 replacement card and transaction data for households in three of the selected states to assess the extent to which certain analyses of replacement cards could better uncover patterns of potential fraud. (See below for more information about these tests and analyses.) Further, we tested automated tools and guidance that FNS recommended to states for monitoring popular e-commerce and social media websites for postings indicative of SNAP trafficking. Our test involved determining the extent to which our 11 selected states can use these tools for their fraud detection efforts. Lastly, to determine FNS’s oversight of state anti-fraud efforts, we analyzed documents and reports relevant to FNS’s program oversight, including their fiscal year 2013 assessments of state anti-fraud work—known as Recipient Integrity Review reports—for all 50 states and the District of Columbia. All the data included in this report were assessed and determined to be sufficiently reliable for our purposes. We conducted this performance audit from April 2013 through September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence we obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
To determine how selected state agencies are pursuing SNAP recipient fraud, we selected 11 states for our review—Florida, Maine, Massachusetts, Michigan, Nebraska, New Jersey, North Carolina, Utah, Tennessee, Texas, and Wyoming—based on geographic dispersion, SNAP payment error rates, percent of the total number of SNAP households nationwide, and the percent of recipients they reported as disqualified from the program due to non-compliance. For three of these criteria—the percent of the total number of households, the percent of the total number of disqualifications, and the payment error rates—we assigned the states to high, medium, and low categories under each set of data based on natural breaks in the data when the states were ranked from the lowest to the highest percent. As a result, states were designated based on data ranges shown in table 7. We selected the states to review for variety within our criteria. Table 8 provides information on how the selected states align with our criteria. We interviewed officials who oversee state activities in state fraud units in each of the 11 states. In some states, we also interviewed auditors and prosecutors who had knowledge of state activities. During each interview, we collected information on state policies and procedures for responding to and investigating fraud claims. We also gathered and reviewed information on how state authorities manage their investigations. We also discussed state anti-fraud efforts and common recipient fraud schemes that have been occurring in recent years. We conducted site visits in Michigan, North Carolina, and Florida and interviewed officials in the remaining eight states by telephone. The information we gathered for our report represents the conditions present at the time of the review. We cannot comment on any changes that may have occurred after our fieldwork was completed. 
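The high/medium/low grouping described above, which splits states at natural breaks when ranked from lowest to highest percent, can be sketched as follows. This is a minimal illustration of a largest-gap ("natural breaks") heuristic; the function name and sample values are hypothetical, not GAO's actual state data.

```python
def natural_breaks_categories(values):
    """Assign low/medium/high categories by splitting a ranked list of
    (name, value) pairs at its two largest gaps -- a simple natural-breaks
    heuristic."""
    ranked = sorted(values.items(), key=lambda kv: kv[1])
    # Gap between each pair of consecutive ranked values
    gaps = [(ranked[i + 1][1] - ranked[i][1], i) for i in range(len(ranked) - 1)]
    # Indices of the two largest gaps define the category boundaries
    cut1, cut2 = sorted(i for _, i in sorted(gaps, reverse=True)[:2])
    categories = {}
    for pos, (state, _) in enumerate(ranked):
        if pos <= cut1:
            categories[state] = "low"
        elif pos <= cut2:
            categories[state] = "medium"
        else:
            categories[state] = "high"
    return categories

# Hypothetical payment error rates (percent) -- illustrative only
rates = {"A": 1.2, "B": 1.4, "C": 2.9, "D": 3.1, "E": 5.8, "F": 6.0}
print(natural_breaks_categories(rates))
```

Production analyses often use the Jenks natural breaks optimization instead of this largest-gap shortcut, but the idea of letting the data's own clustering define the category boundaries is the same.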
Although the 11 states we reviewed administered SNAP for about one-third of the program’s recipient households, the information we report from these states is not generalizable. To assess the effectiveness of replacement card data as a state fraud detection tool, we analyzed replacement card data for SNAP households in 3 of the 11 selected states—Michigan, Massachusetts, and Nebraska. We selected these states to include high, medium, and low percentages of the total number of SNAP households nationwide. We obtained replacement card data from the appropriate state agency overseeing SNAP in the three selected states, and analyzed fiscal year 2012 data to determine the number of households receiving four or more replacement cards in that year. We also analyzed the data to identify households receiving replacement cards in four or more monthly benefit periods, the approach we took to identifying households with excessive replacement cards. We then obtained fiscal year 2012 transaction data from FNS for those households that received excessive replacement cards. We analyzed the transaction data for suspicious transactions indicating potential trafficking that occurred during the same benefit period when a household received a replacement card. We tested the transaction data for six different suspicious transaction types that were reported to us as commonly used by FNS and state SNAP officials to identify potential trafficking. At the request of SNAP officials to maintain confidentiality over their fraud detection methods, we did not include descriptions of all six transaction tests in the report. We assessed the reliability of replacement card and transaction data used in analyses through review of related literature, interviews with knowledgeable officials, and electronic testing of the data, and found them to be sufficiently reliable for our purposes. 
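The benefit-period approach described above can be sketched in a few lines. The data layout and household identifiers here are hypothetical, and the six suspicious-transaction tests are deliberately not reproduced, consistent with the confidentiality request noted above.

```python
from collections import defaultdict

def flag_households(replacement_cards, min_periods=4):
    """Flag households that received replacement cards in at least
    `min_periods` distinct monthly benefit periods during the fiscal year.
    `replacement_cards` is a list of (household_id, benefit_period) pairs,
    e.g. ("H1", "2012-03")."""
    periods = defaultdict(set)
    for household, period in replacement_cards:
        periods[household].add(period)
    return {h for h, p in periods.items() if len(p) >= min_periods}

# Illustrative data: H1 requested cards in four distinct benefit periods;
# H2 also requested four cards, but within only two benefit periods.
cards = [("H1", "2012-01"), ("H1", "2012-03"), ("H1", "2012-06"), ("H1", "2012-09"),
         ("H2", "2012-02"), ("H2", "2012-02"), ("H2", "2012-02"), ("H2", "2012-05")]
print(flag_households(cards))
```

Counting distinct benefit periods rather than raw card counts is what distinguishes this rule from the four-cards-per-year threshold: H2 would be flagged under a raw count but not under the benefit-period rule.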
We installed and used the automated tools recommended by FNS pursuant to the guidance FNS released to the states for monitoring popular e-commerce and social media websites for postings indicative of SNAP trafficking. We tested the automated tools and guidance to determine the extent to which our 11 selected states can use these tools for their fraud detection efforts. We also used GAO’s Fraud Prevention Framework to assess the automated tools and guidance. Specifically, from November 22, 2013 to December 23, 2013, we spent 30 days testing the automated tool for monitoring e-commerce websites on one popular e-commerce website, comparing our automated search results against our manual search results from the same e-commerce website. Our automated and manual search queries were set to detect postings containing one of the key words “EBT” or “food stamps.” Using both search approaches simultaneously, we monitored 19 selected geographic locations covering 11 selected states, and spent an average time of about 30 minutes a day monitoring for e-commerce postings indicative of potential SNAP trafficking. Then we compared our automated results with our manual results to determine the extent to which they were the same. We selected the 19 geographic locations to monitor (see table 9, below) to include the two highest population cities in each of the 11 states. For two states—Maine and Wyoming—the e-commerce website only allowed us to monitor postings statewide. Additionally, from January 7, 2014 to January 13, 2014, we spent 5 days testing the automated tool and guidance that FNS recommended to states for monitoring social media websites on a popular social media website using the same key words (“food stamps” and “EBT”). We spent an average time of about 17 minutes a day monitoring for social media postings indicative of potential SNAP trafficking. 
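At its core, the e-commerce comparison described above is keyword matching over postings followed by a set comparison of automated and manual hits. This sketch uses invented postings and does not reflect the internals of the FNS-recommended tool.

```python
KEYWORDS = ("ebt", "food stamps")

def keyword_hits(postings):
    """Return the ids of postings whose text contains any monitored keyword.
    `postings` maps a posting id to its text."""
    return {pid for pid, text in postings.items()
            if any(k in text.lower() for k in KEYWORDS)}

# Illustrative postings (id -> text); suppose the automated tool missed posting 1
manual_results = {1: "Selling EBT card, half price", 2: "couch for sale"}
automated_results = {2: "couch for sale"}

manual_hits = keyword_hits(manual_results)
automated_hits = keyword_hits(automated_results)
missed_by_tool = manual_hits - automated_hits
print(missed_by_tool)  # postings the manual search caught but the tool did not
```

A nonempty `missed_by_tool` set is the kind of evidence that would show an automated tool being less effective than manual searches.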
We were unable to compare the automated tool for social media websites to corresponding manual searches because, at the time of our testing, the popular social media website did not offer a capability to perform manual searches based on key words, such as “EBT” and “food stamps.” As discussed above, we analyzed transaction data for households enrolled in the Supplemental Nutrition Assistance Program (SNAP) who received excessive replacement cards in three selected states—Michigan, Massachusetts, and Nebraska—in fiscal year 2012. We tested the transaction data for six different suspicious transaction types, potentially indicative of trafficking. Tables 10 through 13, below, provide detailed information on the findings of these tests. As discussed above, during our 30-day testing period of the automated tool for e-commerce websites, we detected a total of 28 postings from one popular e-commerce website that advertised the potential sale of food stamp benefits in exchange for cash, services, and goods. During our 5 days of testing the automated tool for social media websites, we detected a total of 4 postings potentially soliciting food stamp benefits. The tables below summarize all the e-commerce and social media postings that we detected through the automated tools and manual searches. [Table excerpt: state experiences with using automated and manual tools. Reported experiences included: automated tools difficult to use and providing unreliable data, so the state monitors websites manually only; automated tools difficult to use and providing unreliable data, so the state is not using them for social media websites; automated tools not compatible with the state’s operating system; and using automated tools for e-commerce websites while looking for other monitoring tools. Geographic locations GAO monitored included Memphis and Nashville (TN).] The 10 studies FNS commissioned are as follows: 1. Indirect Trafficking Fraud Discovery: A study to identify a process to effectively detect indirect trafficking schemes. 
An example of an indirect trafficking scheme occurs when a SNAP recipient sells his or her EBT card to a third party, at a discount for cash, and the third party uses the EBT card to purchase eligible food. 2. Social Media Fraud Discovery: An evaluation of FNS’s current fraud detection approach using social media in order to identify a more effective process. 3. Recipient Integrity Outcomes Metrics: An analysis to identify metrics that FNS can use to better monitor State performance and outcomes. 4. Multiple EBT Card Replacement: A study aimed at improving FNS’s approach to using card replacement data to identify fraud. 5. Household Link Analysis: An analysis to further assess the relationship between clients and retailers regarding trafficking schemes. 6. Household Demographic Fraud Discovery: An assessment of recipient benefit data to further refine models used to detect fraud. 7. Household Time and Distance (Geospatial Analysis): An analysis of recipient benefit data that focuses on geographical information; for example, recipients using their cards in Virginia and South Carolina within an hour. 8. Identify Clients Shopping in Geographic Areas Outside of their Normal Patterns (Geospatial Analysis): An analysis to identify an automated process to assess retailer and recipient EBT data. 9. Strengthen FNS Recipient Referrals from Disqualified Stores: A review of the existing recipient referral process and fraud detection models to develop a model for FNS to deploy that more effectively identifies suspicious recipients to states for investigation. 10. SNAP Recipient & Retailer Fraud Data Mining Studies: A series of recipient- and retailer-based analyses using predictive analytics, such as fraud discovery, to increase FNS’s and states’ ability to detect fraudulent activity. Kay E. Brown, 202-512-7215, [email protected]. Seto J. Bagdoyan, 202-512-6722, [email protected]. 
In addition to those mentioned above, the following staff members made significant contributions to this report: Kathryn Larin and Philip Reiff, Assistant Directors; Celina Davidson and Danielle Giese, Analysts-in-Charge; LaToya King, Flavio Martinez, Erik Shive and Jill Yost. Additionally, James Bennett, Holly Dye, Linda Miller, Maria McMullen and Almeta Spencer provided technical support; Shana Wallace provided methodological guidance; and Alexander Galuten and James Murphy provided legal counsel. Standards for Internal Control in the Federal Government: 2013 Exposure Draft. GAO-13-830SP. Washington, D.C.: September 3, 2013. Supplemental Nutrition Assistance Program: Payment Errors and Trafficking Have Declined, but Challenges Remain. GAO-10-956T. Washington, D.C.: July 28, 2010. Food Stamp Trafficking: FNS Could Enhance Program Integrity by Better Targeting Stores Likely to Traffic and Increasing Penalties. GAO-07-53. Washington, D.C.: October 13, 2006. Individual Disaster Assistance Programs: Framework for Fraud Prevention, Detection, and Prosecution. GAO-06-954T. Washington, D.C.: July 12, 2006. Hurricanes Katrina and Rita Disaster Relief: Improper and Potentially Fraudulent Individual Assistance Payments Estimated to be between $600 Million and $1.4 Billion. GAO-06-844T. Washington, D.C.: June 14, 2006. Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-403T. Washington, D.C.: February 13, 2006. Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.
In fiscal year 2013, SNAP, the nation's largest nutrition support program, provided about 47 million people with $76 billion in benefits. Fraud, including trafficking—the misuse of program benefits to obtain non-food items—has been a long-standing concern, and technology has provided additional opportunities to commit and combat such activities. State agencies are responsible for addressing SNAP recipient fraud under the guidance and monitoring of FNS. GAO was asked to review state and federal efforts to combat SNAP recipient fraud. GAO reviewed: (1) how selected state agencies combat SNAP recipient fraud; (2) the effectiveness of certain state fraud detection tools; and (3) how FNS oversees state anti-fraud efforts. GAO reviewed relevant federal laws, regulations, guidance, and documents; interviewed officials in 11 states; interviewed federal officials; tested fraud detection tools using fiscal year 2012 program data; and monitored websites for potential trafficking online. Although results are not generalizable to all states, the 11 states, selected based on various criteria including the size of their SNAP recipient household population and their payment error rates, served about a third of SNAP recipient households. The 11 states GAO reviewed employed a range of detection tools, but experienced mixed success investigating and pursuing cases to combat potential Supplemental Nutrition Assistance Program (SNAP) recipient fraud. States reported using detection tools required or recommended by the Food and Nutrition Service (FNS), such as matching recipient data against prisoner and death files. However, most of the selected states reported difficulties in conducting fraud investigations because staff levels were reduced or held flat while SNAP recipient numbers greatly increased from fiscal year 2009 through 2013. Some state officials suggested changing the financial incentives structure to help support the costs of investigating potential SNAP fraud. 
For example, investigative agencies are not rewarded for cost-effective anti-fraud efforts that prevent ineligible people from receiving benefits at all. GAO found limitations to the effectiveness of recommended replacement card data and website monitoring tools for fraud detection. FNS requires states to monitor SNAP households that request at least four cards per year, but selected states reported limited success detecting fraud this way. By focusing instead on SNAP households requesting cards in at least four monthly benefit periods, GAO's analysis found potential trafficking in 73 percent of households reviewed. Benefits are allotted monthly, and a recipient selling benefits and then requesting a new card would generally have one opportunity per month to do so. As a result, additional card requests in the same benefit period may not indicate increased risk of trafficking. Additionally, GAO found the FNS-recommended e-commerce website monitoring tool to be less effective than manual searches in detecting posts indicative of SNAP trafficking. GAO found the recommended tool for monitoring social media to be impractical due to the volume of irrelevant data. FNS has increased its oversight of state anti-fraud activity in recent years by issuing new regulations and guidance, conducting state audits, and commissioning studies on recipient fraud since fiscal year 2011. Despite these efforts, FNS does not have consistent and reliable data on states' anti-fraud activities because its reporting guidance lacks specificity. For example, the guidance from FNS did not define the kinds of activities that should be counted as investigations, resulting in data that were not comparable across states. Additional oversight efforts, such as providing guidance to states for reporting consistent data, could improve FNS's ability to monitor states and obtain information about more efficient and effective ways to combat recipient fraud. 
GAO recommends, among other things, that FNS reassess current financial incentives and detection tools and issue guidance to help states better detect fraud and report on their anti-fraud efforts. Agency officials agreed with GAO's recommendations.
Viewed broadly, IDT refund fraud is composed of two crimes: (1) the theft or compromise of PII, and (2) the use of stolen (or otherwise compromised) PII to file a fraudulent tax return and collect a fraudulent refund. Identity theft. The sources of stolen identities are limitless, according to an official in IRS’s Criminal Investigation Division. Identity thieves can hack into government or commercial systems, recruit insiders (such as employees in the healthcare or education industries) to steal PII, or purchase or put pieces of PII together to create an identity (see figure 1). To successfully commit identity theft, thieves can exploit specific digital, physical, or social vulnerabilities (see sidebar). According to Department of Justice (DOJ) officials, the PII used in tax refund fraud can also involve compromised identities, where the real identity holder initially sells his identity to third parties. Identity Theft and Personally Identifiable Information (PII) Vulnerability: An Overview PII is vulnerable to theft and exploitation in three broad areas. Digital vulnerability: Criminals can access large amounts of digital information if it is inadequately safeguarded. For example, thieves can steal it through hacking and computer intrusion, can aggregate publicly available information, or can sell and buy PII from other criminals on the black market. In one case, a foreign national obtained PII from online databases and sold it to other criminals, resulting in 13,673 victims and $65 million claimed in refund fraud. Physical vulnerability: If insufficient attention is paid to the structures and tools used to store, maintain, and safeguard PII, such as hard drives, paper records, or unsecured mailboxes, thieves will exploit these vulnerabilities through computer theft and “dumpster diving.” Social vulnerability: Thieves can trick individuals into divulging their own PII or others’ PII, for example by impersonating IRS officials. 
Thieves may also recruit individuals with legitimate access to sensitive information. In one case, a ring of thieves used its employment access to steal identities from public and private databases, such as the U.S. Army, several Alabama state agencies, a Georgia call center, and employee records from a Georgia company. IRS recognized the challenge of IDT refund fraud in its fiscal year 2014-2017 strategic plan and increased resources dedicated to combating IDT and other types of refund fraud. In fiscal year 2015, IRS reported that it staffed more than 4,000 full-time equivalents and spent about $470 million on all refund fraud and IDT activities. The administration requested an additional $90 million and 491 full-time equivalents for fiscal year 2017 to help prevent IDT refund fraud and reduce improper payments. IRS estimates that this $90 million would help it protect an additional $612 million in revenue in fiscal year 2017, as well as protect revenue in future years. The Consolidated Appropriations Act, 2016, appropriated IRS an additional $290 million for improvements to customer service, IDT identification and prevention, and cybersecurity efforts. The IRS spending plan indicates that officials will use this funding to (1) reduce the wait times and improve the performance on IRS’s Taxpayer Protection Program/Identity Theft Toll Free Line, and (2) improve network security and protect taxpayer data from unauthorized access by identity thieves, among other things. To detect and prevent IDT refund fraud, IRS has developed tools and programs, including: IDT filters: IRS uses automated filters that search for IDT refund fraud characteristics to identify suspicious returns during processing and to confirm taxpayers’ identities before issuing refunds. These characteristics are based on both IRS’s knowledge of previous refund fraud schemes and clusters of returns with similar characteristics. Taxpayer Protection Program. 
The Taxpayer Protection Program (TPP) reviews returns that are flagged by IRS’s IDT filters. IRS asks taxpayers to authenticate their identities—either online or by phone— by answering questions that a legitimate taxpayer is likely to know, such as previous addresses, mortgage information, and data about family members. If the taxpayer fails to authenticate himself online or by phone, IRS instructs the respondent to authenticate his identity in person at an IRS Taxpayer Assistance Center. Identity Protection Personal Identification Number (IP PIN): IP PINs are single-use identification numbers sent to IDT victims who have authenticated their identities with IRS. If a return is electronically filed (e-filed) for a Social Security Number assigned an IP PIN, it must include the IP PIN or else IRS will reject the return. If a paper return has a missing or incorrect IP PIN, IRS delays processing the return while the agency determines if it was filed by the legitimate taxpayer. As a result of an ongoing security review, IRS temporarily suspended the IP PIN tool in March 2016 while it assesses how to further strengthen its security features. IRS also works with third parties, such as industry, states, and financial institutions, to try to detect and prevent IDT refund fraud. In March 2015, the IRS Commissioner convened a Security Summit with industry and states to improve information sharing and authentication. IRS officials said that 40 state departments of revenue and 20 tax industry participants have officially signed on to the partnership. IRS is investing $16.1 million for identity theft prevention and refund fraud mitigation actions that come out of the Security Summit. These efforts include developing an Information Sharing and Analysis Center where IRS, states, and industry can share information to combat IDT refund fraud. IRS monitors the extent of IDT refund fraud through its Taxonomy. 
This research-based effort aims to report on the effectiveness of IRS’s IDT defenses to internal and external stakeholders, help IRS identify IDT trends and evolving risks, and refine IDT filters to better detect potentially fraudulent returns, while reducing the likelihood of flagging legitimate tax returns. As shown in figure 3, IRS’s Taxonomy estimates the number of identified IDT refund fraud cases where IRS (1) prevented or recovered the fraudulent refunds (turquoise band), and (2) paid the fraudulent refunds (purple band). IRS breaks these estimates into categories corresponding to IDT detection strategies, which occur at three key points in the life cycle of a tax refund: before accepting a tax return, during return processing, and post refund. IRS creates the Taxonomy’s estimates through sources including IRS’s Refund Fraud & Identity Theft Global Report (Global Report) and a modeling data set composed of known IDT returns and potential identity theft returns. In response to our recommendation in January 2015, IRS began using the modeling data set to improve Taxonomy estimates for refunds it paid (Categories 4 and 5 in figure 3 above). According to IRS officials, the agency developed its modeling data set to explore IDT characteristics and build the models within its IDT filters to help identify and protect against IDT refund fraud. The modeling data set consolidates data on known and potential IDT returns from various IRS systems and programs. Figure 4 shows IRS’s estimates of attempted IDT refund fraud for 2014. IRS estimates that it prevented or recovered $22.5 billion in IDT refunds. For the cost of IDT refunds paid, IRS estimated a range of values; the $3.1 billion estimate for IDT refunds paid represents the upper bound of IRS’s range. However, IRS recognizes that these estimates are imprecise and subject to uncertainty, as will be discussed later. 
One of IRS’s key defenses in reducing the risk of IDT refund fraud is TPP, which is intended to verify the identities of suspicious filers. TPP has procedures that help IRS authenticate legitimate taxpayers by requiring filers to answer questions only legitimate taxpayers are likely to know, or in some instances, checking information reported on filers’ returns with information reported by third parties, such as W-2s. Figure 5 illustrates the TPP process. Of the 650,000 filers who responded to TPP notification letters, 450,000 (69 percent) attempted remote authentication—online or by phone—whereas 200,000 (31 percent) claimed to be victims of IDT who had not filed the selected returns. To pass remote authentication, filers first complete “identity proofing” by providing basic identifying information such as their names and dates of birth. Next, they are asked to answer knowledge-based authentication questions obtained from a third-party provider. Examples of authentication questions are “Who is your mortgage lender?” or “Which of the following is your previous address?” If filers pass knowledge-based remote authentication, then IRS releases those filers’ returns for further processing before issuing refunds. If filers cannot pass, IRS will not issue a refund unless those filers pass in-person authentication or IRS receives information return documents from third parties, such as W-2s, that match filers’ return data. Officials stated that TPP authentication poses a challenge to IRS because it must authenticate almost all taxpayers in the program remotely. According to a United States Digital Service (USDS) report, it is costly for fraudsters to attempt in-person authentication at scale because it requires human interaction. 
As a result, fraudsters have an incentive to authenticate remotely rather than in person: remote authentication allows multiple attempts, giving fraudsters more opportunities to access taxpayers’ information and to respond to authentication questions quickly and easily. IRS has conducted research both to evaluate the effectiveness of existing TPP authentication procedures and to identify options for strengthening those procedures. Based on research efforts, IRS made improvements to its phone authentication options for filing season 2015. For example, IRS created a more challenging High Risk Authentication (HRA) quiz, which requires taxpayers to recall information from past tax filings. Prior to the 2015 filing season, IRS’s HRA quizzes sometimes included simulated questions where IRS effectively had no data available to support correct answers other than “none of the above.” For example, a simulated question might ask a filer to identify the date of birth of a dependent even though that filer had no dependent. For the 2015 filing season, IRS eliminated these questions from HRA quizzes. IRS analysis has shown that simulated questions are easier to pass than questions based on taxpayer data. In addition, IRS required some respondents to answer a higher proportion of HRA questions correctly in the 2015 filing season. Of the about 1.6 million returns selected for TPP processing in filing season 2015, IRS estimated that it potentially paid about $30 million to IDT fraudsters who filed about 7,200 returns that passed TPP authentication. However, our analysis indicates that IRS underestimated how many fraudulent IDT returns passed TPP authentication. In developing its estimates, IRS first compared TPP selections to information returns provided by third parties, such as W-2s. IRS next identified which TPP selections passed authentication but had large mismatches with information returns. 
IRS then manually reviewed a sample of these returns to approximate how many returns that passed authentication were filed by likely IDT fraudsters. IRS used this finding to estimate the total number and value of refunds potentially paid to IDT fraudsters who passed TPP authentication. IRS likely underestimated how many fraudulent IDT returns passed TPP authentication because the agency did not include potential IDT returns that closely matched information returns. Though based on a nongeneralizable sample, past IRS research suggests that some IDT fraudsters are able to both file tax returns that closely match information provided by third parties and pass TPP authentication. By omitting some IDT returns from its estimates, IRS likely overestimated the effectiveness of TPP defenses. IRS officials told us that they did not include close matches in their analysis because it is challenging to determine how many of these returns are filed by IDT fraudsters, and IRS does not want to present estimates based on assumptions that could be inaccurate. In March 2016, IRS officials acknowledged the desirability of expanding their estimate to include a more generalizable sample of those who successfully passed authentication and said that they will consider doing so as staff become available after the filing season. While we cannot quantify the specific amount by which IRS’s analysis underestimated the number of fraudulent IDT returns that passed TPP authentication, we conducted a scenario analysis to demonstrate the effect of omitting potential IDT returns on IRS’s estimates. If we assume that 5 to 10 percent of close matches passing authentication were filed by potential IDT fraudsters, we estimate that the value of refunds potentially paid to IDT fraudsters who passed TPP authentication could be between $116 million and $203 million in the 2015 filing season. 
We chose not to base our analysis on IRS’s past research (cited in the previous paragraph) because it used a nongeneralizable sample and because its methodology for identifying close matches changed from 2014 to 2015. Our analysis indicates that, even if a small proportion of close matches that pass TPP authentication are filed by IDT fraudsters, accounting for these selections can substantially affect IRS’s estimates because close matches represent about 91 percent of all returns filed by individuals who passed authentication. Further, the extent of IRS’s likely underestimation suggests that TPP’s authentication procedures may be at greater risk of exploitation by IDT fraudsters than IRS’s estimates indicate. To verify taxpayers’ identities remotely, TPP uses single-factor authentication procedures that incorporate one of the following authentication elements: “something you know,” “something you have,” or “something you are.” TPP’s single-factor authentication procedures are at risk of exploitation because some fraudsters obtain the PII necessary to pass the questions asked during authentication. According to IRS officials, criminals can find personal information needed to pass authentication by searching records available through the Internet or purchasing it from websites designed to conceal their content. USDS has also reported that implementing effective authentication procedures has become more challenging because criminals are able to pass authentication checks at similar rates to legitimate users due to the wide availability of personal information. Similar to TPP, IRS used single-factor authentication procedures to authenticate users of its Get Transcript service, which fraudsters defeated in 2014 and 2015, as well as its IP PIN tool that IRS temporarily suspended due to security concerns in 2016. Both USDS and TIGTA have found that IRS needs to take a stronger approach to authenticating Get Transcript users. 
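The scenario analysis described above reduces to simple arithmetic: multiply the number of close matches that passed authentication by an assumed fraud share and an assumed average refund. The counts and average refund in the sketch below are invented placeholders, since the underlying figures are not published here; only the 5 and 10 percent shares come from the text.

```python
# Sketch of the scenario analysis: assume some share of "close match"
# returns that passed TPP authentication were filed by IDT fraudsters.
# All inputs below are hypothetical, not IRS figures.

def scenario_refunds_paid(close_matches_passed, avg_refund, fraud_shares):
    """Potential refunds paid to fraudsters under each assumed share."""
    return {share: close_matches_passed * share * avg_refund
            for share in fraud_shares}

estimates = scenario_refunds_paid(
    close_matches_passed=400_000,  # hypothetical count of close matches that passed
    avg_refund=4_000,              # hypothetical average refund value
    fraud_shares=(0.05, 0.10),     # the 5 and 10 percent scenarios in the text
)
# Under these invented inputs: 5 percent -> $80 million, 10 percent -> $160 million.
```

Even small assumed fraud shares move the estimate by tens of millions of dollars, which is why omitting close matches materially understates the amount potentially paid.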
Though IRS is undertaking efforts to strengthen Get Transcript authentication, agency officials said they are still working to determine if improvements are necessary for TPP. Because IRS must ensure legitimate taxpayers can successfully authenticate, the agency faces challenges in making remote authentication more difficult for IDT fraudsters who often possess the PII needed to appear to be legitimate taxpayers. IRS officials said it was important for TPP to minimize delays in refund processing for large numbers of legitimate taxpayers and to avoid the appearance of discriminating against specific types of filers. For example, IRS could designate all TPP filers whose return information may be harder to verify for more challenging authentication; however, IRS officials said the agency wanted to avoid the appearance of discriminating against these filers who, on average, report lower income. In addition, IRS could delay refunds for these respondents until IRS could match these selections’ return data against information provided by third parties. Because delaying refunds is likely to burden taxpayers, IRS officials said large-scale delays were not feasible. Although IRS conducted a risk assessment for TPP authentication in October 2012, the agency has not updated this assessment to reflect the current threat of IDT refund fraud—specifically, the threat that some fraudsters possess the PII necessary to pass authentication questions. In conducting its risk assessment, IRS determined that improper authentication through TPP posed low or moderate risks to both the agency and taxpayers, and therefore required no more than single-factor authentication. Since IRS conducted its original risk assessment for TPP, TIGTA conducted a more recent risk assessment of Get Transcript and determined that Get Transcript should have required multi-factor rather than single-factor authentication. 
Given that both programs pose similar risks—fraudsters can use vulnerabilities in both Get Transcript and TPP to more easily obtain tax refunds—it seems likely that IRS would identify a higher authentication standard for TPP when updating that program’s risk assessment. In March 2016, IRS officials stated that they were planning to conduct a risk assessment and make improvements to TPP based on the results. However, their plans for a risk assessment are not yet documented because the Identity Assurance Office has prioritized improving authentication for the Get Transcript service and the IP PIN tool before TPP. Office of Management and Budget (OMB) e-authentication guidance directs agencies to conduct risk assessments on information technology systems that remotely authenticate users and to identify appropriate assurance levels. Agencies then select authentication technologies based on the levels of assurance needed and e-authentication technical guidance provided by the National Institute of Standards and Technology (NIST). Senior IRS officials stated that they disagreed that OMB guidance and NIST e-authentication standards are applicable to TPP phone authentication. However, we believe the guidance and standards are applicable because TPP uses similar processes (e.g., knowledge-based authentication questions) to remotely authenticate taxpayers—whether taxpayers themselves type in answers to questions online or whether the taxpayer answers the questions over the phone and IRS Customer Service Representatives enter the information into an Internet application to check those answers. Following a consistent standard for both online and phone authentication would also help prevent IDT fraudsters from shifting authentication attempts to the option that requires a less rigorous standard. 
In addition, federal internal control standards, best practices for risk management, and IRS’s own risk management guidance require or recommend that agencies regularly assess risks to their programs. Standards for Internal Control in the Federal Government state that agency management should assess the risks that the agency faces from both external and internal sources. Best practices in risk management recommend that fraud risk assessments generally include assessing risks’ likelihoods and impacts, determining the agency’s risk tolerance, and examining the suitability of existing fraud controls. In addition, they recommend that agencies plan regular fraud risk assessments, since allowing extended periods to pass between assessments could result in control activities that do not effectively address a program’s risks. IRS’s Enterprise Risk Management Program: Concept of Operations also states that IRS’s Office of the Chief Risk Officer is committed to timely risk reporting. By conducting an updated risk assessment for TPP in accordance with e-authentication and risk management standards, IRS could identify appropriate opportunities to strengthen TPP authentication and prevent IDT fraudsters from passing authentication and potentially receiving millions of dollars in refunds. Depending on the assessment’s results, IRS could implement stronger authentication procedures. For example, a multi-factor authentication standard for TPP’s remote authentication options would utilize a second element to authenticate filers, such as requiring filers to provide proof of “something they have” in addition to testing “what they know.” Strengthening TPP authentication could help IRS prevent millions of dollars from being paid to IDT fraudsters each filing season. In addition, strengthening TPP could improve IRS’s return on investment for fraud filters by ensuring that efforts to flag fraudulent returns result in fewer refunds paid to IDT fraudsters. 
Fewer legitimate taxpayers would also become victims of IDT refund fraud if TPP stopped more fraudulent IDT returns. In response to recommendations made in our previous report, IRS is working to improve Identity Theft Taxonomy (Taxonomy) estimates of IDT fraud. In that report, we found that IRS’s 2013 Taxonomy estimates met several GAO Cost Estimating and Assessment Guide (GAO Cost Guide) best practices, such as regularly updating the methodology to better reflect evolving fraud schemes. However, we also found limitations and recommended that IRS improve the estimates by (1) reporting the inherent imprecision and uncertainty of estimates, and (2) documenting the underlying analyses justifying cost-influencing assumptions. IRS reported that the agency is working to implement these recommendations by October 2016. Given the challenges inherent in estimating fraudulent activity and the evolving nature of fraud schemes, IRS’s efforts to improve Taxonomy estimates are likely to be ongoing. For example, to estimate potential IDT refund fraud paid, IRS compares the information reported on tax returns with data reported by third parties on information returns, such as Form W-2, Wage and Tax Statement (W-2). However, it is difficult for IRS to determine whether discrepancies between data reported on the tax return and information returns are due to IDT, a mistake made by the legitimate taxpayer, or other types of fraud committed by the legitimate taxpayer. Moreover, IRS cannot accurately estimate amounts of undetected fraud in situations where it has no reported information to verify income. Furthermore, to better reflect evolving IDT refund fraud schemes, IRS updates the Taxonomy methodology over time. While these updates may result in more accurate estimates, these changes complicate comparisons between filing seasons. 
To assess IRS’s efforts to implement our past recommendations, we reviewed IRS’s 2014 estimates, focusing on those best practices that we assessed as “partially met” or less in our review of the 2013 Taxonomy. While IRS is not required to follow the GAO Cost Guide best practices, doing so could help IRS meet OMB and its own information quality guidelines and improve the reliability of IDT refund fraud estimates. Our assessments—summarized in table 1—note places where IRS has taken steps to improve the estimates and places where IRS can take additional action to further improve its estimates. Our assessment ratings show IRS made progress in one area and took a step back in another area compared to 2013. The ratings remained unchanged in four areas. As noted in table 1 above, IRS improved the Taxonomy to meet one of the best practice characteristics—management review. By reviewing and approving the 2014 Taxonomy estimates and the new methodology, IRS management completed a vital step in verifying how estimates were developed. This step helps ensure that management understands the estimate’s underlying risks, data sources, and methods so that they are confident that the estimates are accurate, complete, and of high quality. In the following sections, we analyze in greater detail IRS’s efforts to meet best practices outlined in table 1 as well as Taxonomy estimates’ remaining limitations. IRS adopted a new methodology to improve 2014 Taxonomy estimates of refunds paid. This new methodology uses the modeling data set, which is based on individual return-level information, to estimate more precisely how much the agency paid to IDT fraudsters. The modeling data set is an improvement over previous data sources that were based on aggregated data. As a result of this improvement, officials can more precisely estimate known IDT refunds paid. Additionally, IRS uses the modeling data set when calculating estimates of likely IDT refunds paid. 
IRS defines likely IDT returns as returns where the information on the tax return does not match either (1) the current year’s information reporting (i.e., information on a W-2); or (2) specific prior-year tax return characteristics. While the data source used to estimate IDT refunds paid is an improvement from the previous Taxonomy, IRS’s methodology for calculating estimates for refunds paid excludes certain categories of returns, which can bias results. A key assumption IRS uses when building its modeling data set for IDT refunds paid is the amount of the refund. As part of its methodology, IRS omits from its fraud estimates some returns with refund amounts that fail to meet specific refund thresholds, and thus does not include all relevant returns in its analysis. IRS officials said the agency uses the thresholds because it wants to prioritize its enforcement efforts. In February 2016, IRS officials stated that they did not know how many returns were excluded from the Taxonomy. In March 2016, IRS officials said that they are evaluating the extent to which omitted returns met other criteria associated with IRS’s definitions of known and likely IDT refund fraud. According to its Strategic Plan, IRS should identify trends, detect high-risk areas of noncompliance, and prioritize enforcement approaches by applying research and advanced analytics. Further, the GAO Cost Guide states that analysis should be regularly updated to reflect significant changes in the methodology and should include all relevant costs. While thresholds may help IRS prioritize enforcement efforts on likely IDT fraud schemes, they limit IRS’s ability to estimate the entire population of IDT refunds paid. Further, incomplete Taxonomy estimates could impede IRS and congressional efforts to assess the effectiveness of its IDT defenses over time. 
In response to our discussion, IRS officials said that they are considering removing some thresholds and including those returns when calculating IDT refunds paid for the 2015 Taxonomy estimates. We also found accuracy issues with IRS’s estimate of IDT refunds prevented that are likely to result in overestimates. To produce this estimate, IRS uses the Global Report, which overestimates the amount of IDT refunds prevented because it overcounts some IDT returns. Overcounting occurs because the Global Report aggregates return-level data to create a monthly inventory of confirmed IDT returns. According to IRS officials, the Global Report counts each time a return is caught by IRS defenses as a separate instance of refund fraud. For example, if an IDT return is flagged as IDT in both IRS’s Electronic Fraud Detection System and its Dependent Database, this return is counted as two IDT returns, even though it is the same return. E-file rejects are also overcounted because a single return can be rejected multiple times. IRS officials noted that there would be benefits to using return-level data to estimate refunds prevented in the Taxonomy, such as avoiding overcounting. However, officials said they use the Global Report to develop estimates of prevented IDT refund fraud because it represents IRS’s official record of IDT fraud and because IRS has invested substantial resources in improving the report. We agree that the Global Report is an important investment for monitoring the effectiveness of IRS’s many defenses against fraud, both individually and as a system; however, overcounting the incidence of fraud inflates IRS’s Taxonomy estimates of the cost of IDT refund fraud, and could potentially bias resource allocation and other decisions. For example, if IRS thinks it is catching 90 percent of estimated IDT refund fraud attempts, agency officials may decide to allocate resources differently than if IRS is, in fact, catching 50 percent. 
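A toy example makes the overcounting concrete. The detection records below are invented; the point is only that counting detection events, as the Global Report does, overstates the number of distinct returns, while deduplicating on a return identifier does not.

```python
# Invented detection records: one IDT return flagged by two defenses,
# plus a second return flagged once.
detections = [
    {"return_id": "R1", "system": "Electronic Fraud Detection System"},
    {"return_id": "R1", "system": "Dependent Database"},  # same return, second flag
    {"return_id": "R2", "system": "Dependent Database"},
]

# Global Report-style count: every detection event counts separately.
event_count = len(detections)

# Return-level count: deduplicate on the return identifier.
return_count = len({d["return_id"] for d in detections})

# event_count is 3 but return_count is 2: aggregating events inflates
# the apparent number of IDT returns caught.
```

Applied at scale, the same aggregation effect is what inflates the Taxonomy's refunds-prevented estimate.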
Our data reliability testing found that the Global Report’s counts for known IDT returns where IRS prevented the refund were larger than the counts from the modeling data set. Of the estimated $22.5 billion in refunds prevented or recovered in 2014, the Global Report included 2.0 million returns worth $11.4 billion in its known IDT return population, whereas the modeling data set included 1.6 million returns and $8.9 billion in its population. Officials acknowledged that they believe this discrepancy is due to overcounting in the Global Report and could also be caused by the modeling data set’s exclusion of returns that fail to meet specific refund thresholds, as described above. As noted earlier, IRS’s Strategic Plan notes that IRS should “identify trends, detect high-risk areas of noncompliance, and prioritize enforcement approaches by applying research and advanced analytics.” It also states that IRS should strengthen refund fraud prevention by bolstering analytics capability, making full use of existing data sources, and exploring potential new data sources and techniques. Further, the GAO Cost Guide states that estimates should be based on primary data sources and contain few mistakes. By using aggregated data to develop the Global Report, the agency’s official record of IDT returns is less accurate than if IRS used return-level data. Further, by using the Global Report to calculate Taxonomy estimates for refunds prevented, IRS may have overestimated the $22.5 billion in refunds prevented or recovered in the 2014 filing season. As described above, inaccurate Taxonomy estimates could impede decision makers’ ability to monitor the effectiveness of IDT defenses. In its Taxonomy documentation, IRS notes most—but not all—assumptions used to make estimates. For example, IRS does not document that refunds outside of thresholds, as described above, are excluded from its estimates of refunds paid. 
In addition, IRS does not always provide rationales or analyses to support the assumptions it does document. For example, IRS does not provide a rationale for the average refund value used to estimate the cost of electronically filed returns that IRS rejects (i.e., e-file rejects) and categorizes as IDT returns, which affects the total value of IRS’s refunds prevented estimates. Our analyses show that using different refund assumptions can affect the refunds prevented estimate by billions of dollars. Because IRS does not have reliable data on the refund values associated with e-file rejects, it uses the average refund value of returns detected by various IDT defenses. As noted in table 2, IRS’s estimate assumes the average refund value for all IDT defenses ($5,959), which results in $7.3 billion prevented on 1.2 million e-filed returns. However, the average refund value of e-file returns detected by IRS IDT defenses varies—indicating uncertainty in the estimates. For example, if IRS used the different average refunds in table 2 to develop its e-file reject estimate, the total could range from $4.1 billion to $7.5 billion. We previously recommended that IRS document the analysis underlying the cost-influencing assumptions. As stated above, IRS officials told us they are working to implement this recommendation by October 2016. Given the evolving nature of IDT refund fraud, documenting Taxonomy assumptions and the rationales used to develop those assumptions in accordance with our prior recommendations would enable IRS management and policymakers to determine whether the assumptions remain valid or need to be revised or updated. IRS is still working to improve its reporting of the inherent imprecision and uncertainty of its Taxonomy estimates. Previously, we found that IRS presented 2013 Taxonomy estimates as point estimates, which did not represent the Taxonomy’s inherent uncertainty. 
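The dependence of the e-file reject estimate on the assumed average refund is straightforward multiplication. The rounded 1.2 million return count and the $5,959 all-defense average come from the text; the alternative averages in the sketch below are hypothetical values chosen only to reproduce the $4.1 billion to $7.5 billion range the report cites.

```python
# Sketch of the e-file reject sensitivity: estimate = returns x average
# refund. The 1.2 million count is the rounded figure from the text;
# the alternative averages are hypothetical back-solved values.
EFILE_REJECTS = 1_200_000

def prevented_estimate(avg_refund, n_returns=EFILE_REJECTS):
    return n_returns * avg_refund

baseline = prevented_estimate(5_959)  # ~$7.15 billion with the rounded count
low = prevented_estimate(3_400)       # hypothetical lower average -> $4.08 billion
high = prevented_estimate(6_250)      # hypothetical higher average -> $7.5 billion
# Swapping the assumed average refund moves the estimate by billions,
# which is why documenting the rationale for the assumption matters.
```

(The report's $7.3 billion figure presumably reflects the unrounded return count rather than the rounded 1.2 million used here.)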
We recommended that IRS report the inherent imprecision and uncertainty of the estimates and noted that one way IRS could do this would be to present a range of values for its Taxonomy estimates. High-quality cost estimates usually fall within a range of possible costs, with the point estimate between the best- and worst-case extremes. Having a range of costs around a point estimate is more useful to decision makers because it conveys the level of confidence in the most likely cost and thus indicates the uncertainty in the estimates. Knowing the uncertainty related to Taxonomy estimates could affect decisions about how to allocate resources to combat IDT refund fraud. For example, if there is 80 percent confidence in IRS’s estimates, then decision makers may make different decisions than if there is 50 percent confidence in the estimates. Under its revised methodology, IRS partially addressed our previous recommendation by presenting refunds-paid estimates as a range rather than a single point estimate to reflect the uncertainty in IRS’s estimate of the revenue lost to IDT refund fraud. In addition, IRS took steps to incorporate better quality data into its refunds-paid estimate by utilizing both the modeling data set’s return-level information and results from a new sampling effort. However, these ranges may not give decision makers a truly accurate understanding of what IRS knows and does not know about IDT refund fraud because they are not derived from a cost risk and uncertainty analysis. Such an analysis accounts for the cumulative impact that multiple assumptions might have on IRS’s estimates. For example, ranges do not account for uncertainty regarding the extent to which IRS’s estimates account for all IDT fraud schemes. Additionally, IRS officials manually review some returns to determine whether the returns are IDT or non-IDT returns. 
IRS’s ranges also do not account for the risk that manual reviewers may not accurately characterize returns as IDT or non-IDT returns. In addition, IRS does not conduct a sensitivity analysis for Taxonomy categories that include assumptions. A sensitivity analysis reveals critical assumptions and cost drivers that most affect estimate results, and can help managers take steps to ensure the estimates’ quality. By conducting a sensitivity analysis, IRS will know which assumptions and which factors affect the Taxonomy the most, so it can devote resources to combating IDT refund fraud in those areas and work to make the estimates there more accurate. Until IRS addresses our prior recommendations and provides an indication of uncertainty in the Taxonomy estimates, the false sense of precision could affect decisions about how to allocate resources to combat IDT refund fraud. IRS officials told us in February 2016 that they plan to conduct a sensitivity analysis and a risk and uncertainty analysis for the assumptions that are used when IRS calculates the updated Taxonomy estimates for 2015. IRS’s continued efforts to improve TPP are critical to combating IDT refund fraud. Though IRS has made improvements to TPP, evidence suggests that the agency’s efforts to authenticate taxpayers in filing season 2015 may not have kept pace with the evolving threat of IDT refund fraud. Since IRS last conducted a risk assessment for TPP, PII has become more widely disseminated, and IRS has changed TPP procedures. In addition, though IRS is undertaking efforts to strengthen Get Transcript, a program that poses risks similar to TPP, IRS has not determined whether authentication improvements are necessary for TPP. 
Documenting time frames and conducting an updated e-authentication risk assessment for TPP’s remote authentication options would enable IRS to identify opportunities and take actions to strengthen TPP authentication in accordance with appropriate standards. In turn, strengthened authentication would help IRS reduce revenue lost to IDT fraudsters, improve the efficiency of fraud filter investments, and reduce the number of legitimate taxpayers who become victims of IDT refund fraud. IRS’s monitoring of the extent of IDT refund fraud is key to supporting decision makers’ ability to determine how to combat IDT refund fraud. IRS has invested a considerable effort in monitoring and reporting the extent of IDT refund fraud through its Taxonomy estimates. However, the accuracy of IRS’s IDT refund fraud reporting in the Taxonomy estimates could be improved. For example, using return-level data, such as the modeling data set, could improve the accuracy of the Taxonomy’s refunds paid estimates. More accurate Taxonomy estimates would help IRS better understand how and to what extent IDT refund fraud is evading IRS defenses. This would allow it to focus attention on where the risk is greatest and improve the design of its IDT filters. Additionally, reducing overcounting and ensuring all relevant IDT returns—even those that fail to meet specific refund thresholds—are included in Taxonomy estimates could help IRS communicate more accurate information on the amount and cost of IDT refund fraud to decision makers. Finally, implementing our past recommendations will help IRS further improve the reliability of its estimates. To further deter noncompliance in the Taxpayer Protection Program, we recommend that the Commissioner of Internal Revenue take the following two actions in accordance with OMB and NIST e-authentication guidance: 1. 
conduct an updated risk assessment to identify new or ongoing risks for TPP’s online and phone authentication options, including documentation of time frames for conducting the assessment, and 2. implement appropriate actions to mitigate risks identified in the assessment. To improve the quality of the Taxonomy’s IDT refund fraud estimates, we recommend that the Commissioner of Internal Revenue take the following two actions: 1. remove refund thresholds from criteria used to develop IRS’s refunds- paid estimates, and 2. utilize return-level data—where available—to reduce overcounting and improve the quality and accuracy of the refunds-prevented estimates. We provided a draft of this product to the Commissioner of Internal Revenue, the Attorney General, and the Director of the Federal Bureau of Investigation for review and comment. In its written comments, reproduced in appendix III, IRS agreed with our TPP recommendations and neither agreed nor disagreed with our Taxonomy recommendations. IRS stated that it will conduct an updated risk assessment for TPP’s online electronic authentication application, in accordance with OMB and NIST guidelines. Regarding TPP’s phone authentication option, IRS reported that a portion of the telephone authentication option will be included in the assessment because IRS employees use a web interface. As noted in the report, we believe that following a consistent authentication standard for both online and phone authentication would help prevent IDT fraudsters from shifting authentication attempts to the option that requires a less rigorous standard. IRS officials stated that they will implement mitigation actions identified during the assessment, to the degree feasible. We continue to emphasize the importance of implementing appropriate actions to mitigate identified risks because doing so would improve TPP authentication and prevent additional fraudulent refunds from being issued. 
Consistent with our recommendation, IRS stated that it has reduced the lower threshold used to develop its IDT refund-paid estimate in its 2014 modeling dataset. IRS did not change its upper threshold. IRS also stated that the risk of this remaining threshold excluding relevant IDT returns is mitigated because IRS manually reviews such returns. We support IRS’s reduction of the lower threshold and its manual review of high-value refunds. With regard to our recommendation to use return-level data to reduce overcounting and improve the accuracy of the refunds-prevented estimate, IRS officials said that they are discussing the impact of the recommendation and determining if it is feasible to implement. As previously noted, it is important for IRS to provide accurate estimates of the IDT fraud it prevented or recovered. By not using return-level data, the Global Report overcounts some IDT returns. As a result, IRS is providing Congress and other stakeholders with overestimates of the amount of IDT refund fraud it prevented or recovered. The Department of Justice provided technical comments for itself and the Federal Bureau of Investigation, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Commissioner of Internal Revenue, the Attorney General of the United States, and the Director of the Federal Bureau of Investigation. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
This report (1) evaluates the performance of Internal Revenue Service’s (IRS) Taxpayer Protection Program (TPP) and (2) assesses IRS efforts to improve its estimates of identity theft (IDT) refund fraud costs for 2014. The report discusses IDT refund fraud and not employment fraud. Detailed information on IRS’s enforcement efforts was excluded from the report because of sensitivity concerns. To evaluate TPP’s performance, we reviewed IRS studies designed to identify and support ongoing identity authentication refinements to TPP. We compared specifics of IRS’s TPP against relevant guidance on enterprise risk management, electronic authentication, and internal controls. We assessed IRS’s TPP analysis by (1) reviewing relevant IRS documentation, (2) conducting manual testing to identify obvious errors, and (3) interviewing IRS officials. During the course of our work, we found that IRS likely underestimated the value of refunds issued to IDT fraudsters in filing season 2015 via TPP because the agency did not account for all refunds potentially paid to IDT refund fraudsters who passed TPP authentication. To assess how excluding potential IDT refunds affected IRS’s estimates of the amount potentially paid to IDT fraudsters who were able to pass TPP authentication, we conducted a scenario analysis. We chose not to base our scenarios on IRS’s past research because it used a nongeneralizable sample and because its methodology for identifying close matches changed from 2014 to 2015. Instead, we identified scenarios of 5 to 10 percent to illustrate the potential outcomes if relatively small percentages of these returns were actually IDT. To assess IRS’s efforts to improve its Identity Theft Taxonomy (Taxonomy) estimates of IDT refund fraud for 2014, we reviewed the Taxonomy’s methodology and estimates. 
We then evaluated them against selected best practices in the GAO Cost Estimating and Assessment Guide (GAO Cost Guide) that were applicable to the Taxonomy and consistent with IRS and Office of Management and Budget (OMB) information quality guidelines. These best practices are relevant because the Taxonomy is an estimate of the amount of revenue lost to IDT refund fraud—a cost to taxpayers. To develop this guide, our cost experts assessed the measures consistently applied by cost-estimating organizations throughout the federal government and industry; based upon this assessment, the cost experts then considered best practices for the development of reliable cost estimates. We focused our analysis on those best practices that we assessed as “partially met” or less in our review of the 2013 Taxonomy (see text box). In comparing 2014 estimates with 2013 estimates, we could not determine if differences in Taxonomy estimates between these years were due to changes in methodology, IDT fraud trends, or the efficacy of IRS’s IDT defenses. During our review of the 2013 Taxonomy, we discussed the GAO Cost Guide’s best practices with IRS officials, who generally agreed with their applicability to the Taxonomy.

Best Practices in Cost Estimating Used to Review the 2014 Taxonomy

We assessed the Taxonomy against the following best practices for objective, reliable cost estimates: include all relevant costs; document all cost-influencing ground rules and assumptions; include a sensitivity analysis; include a risk and uncertainty analysis; avoid being overly conservative or optimistic by basing estimates on most likely costs; and provide evidence that the cost estimate was reviewed and accepted by management.
To analyze IRS’s Taxonomy against the best practices, we reviewed Taxonomy documentation, conducted manual and electronic data testing, reviewed coding for obvious errors, compared underlying data to IRS’s Refund Fraud & Identity Theft Global Report, and interviewed IRS officials to understand the methodology used to create the 2014 estimates and how that methodology changed from the one used to develop the 2013 Taxonomy. We did not replicate IRS’s Taxonomy estimates using tax return data; rather, our focus was on IRS’s methodology for calculating the estimates. We developed an overall assessment rating for each best practice using the following definitions:

Not met. IRS provided no evidence that satisfied any portion of the best practice.
Minimally met. IRS provided evidence that satisfied a small portion of the best practice.
Partially met. IRS provided evidence that satisfied about half of the best practice.
Substantially met. IRS provided evidence that satisfied a large portion of the best practice.
Met. IRS provided complete evidence that satisfied the entire best practice.

We conducted this performance audit from March 2015 to May 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This report is the third in a series of our reports on identity theft (IDT) refund fraud. Since August 2014, we have issued two reports that included eight recommendations on actions the Internal Revenue Service (IRS) can take to monitor and combat IDT refund fraud. As of March 2016, IRS had implemented three of the eight recommendations and was implementing the remaining five.
Table 3 summarizes our prior recommendations and their implementation status.

We recommended that IRS fully assess the costs and benefits of accelerating W-2 deadlines and provide information to Congress on (1) potential impacts on taxpayers, IRS, the Social Security Administration, and third parties and (2) what other changes will be needed (such as delaying the start of the filing season or delaying refunds) to ensure IRS can match tax returns to W-2 data before issuing refunds. Earlier access to W-2s could help IRS match W-2 information to taxpayers’ returns and identify discrepancies before issuing billions of dollars of fraudulent IDT refunds. How IRS implements W-2 matching could affect the costs and benefits for itself and other stakeholders (e.g., logistical challenges for the Social Security Administration, which processes W-2 data before transmitting them to IRS). Implemented. In September 2015, IRS provided us with a document detailing the costs and benefits of W-2 acceleration. The document discussed the IRS systems and work processes that will need to be adjusted to accommodate earlier, prerefund matching of W-2s; the time frames for when these changes could be made; potential impacts on taxpayers, IRS, and other parties; and what other changes will be needed (such as delaying refunds) to ensure IRS can match tax returns to W-2 data before issuing refunds.

We also recommended that IRS (1) provide aggregated information on the success of external party leads in identifying suspicious returns and on emerging trends (pursuant to section 6103 restrictions) and (2) develop a set of metrics to track external leads by the submitting third party. This feedback would help financial institutions know if the leads they provide to IRS are useful and would help them improve their own detection tools. Implementation in progress. In November 2014, IRS reported that it would implement our recommendation by November 2015. In November 2015, IRS reported that it had developed a database to track leads submitted by financial institutions and the results of those leads. IRS also stated that it had held two sessions with financial institutions to provide feedback on external leads provided to IRS. In December 2015, IRS officials stated that the agency had sent a customer satisfaction survey asking financial institutions for feedback on the external leads process and was considering other ways to provide feedback to financial institutions. However, to date IRS has not provided feedback to the majority of relevant lead-generating third parties.

GAO-15-119 – We recommended that the Commissioner of Internal Revenue follow relevant best practices outlined in GAO’s Cost Assessment Guide: Best Practices for Estimating and Managing Program Costs (GAO Cost Guide) by documenting the underlying analysis justifying cost-influencing assumptions. Given the evolving nature of IDT refund fraud, documenting the rationales for assumptions would help IRS management and policymakers determine whether the assumptions remain valid or need to be updated. Implementation in progress. In April 2015, IRS reported that it would implement our recommendation by mid-October 2016. In October 2015, IRS provided updated Taxonomy estimates for 2014. This new analysis and documentation noted most but not all assumptions. For example, it did not note that some returns resulting in paid refunds were excluded because they were outside thresholds. In addition, the rationales supporting some assumptions, such as the estimated refund values associated with e-file reject returns, were not documented.

We also recommended that IRS follow relevant best practices outlined in the GAO Cost Guide by reporting the inherent imprecision and uncertainty of the estimates. For example, IRS could provide a range of values for its Taxonomy estimates. Reporting the uncertainty that is already known from IRS analysis (and conducting further analyses when not cost prohibitive) might help IRS communicate IDT refund fraud’s inherent complexity. While a point estimate might lead to one decision, a range that reflects the uncertainty may lead decision makers to a different decision. Implementation in progress. In April 2015, IRS reported that it would implement this recommendation by mid-October 2016. In September 2015, IRS provided updated Taxonomy estimates for 2014 that presented the estimates for refunds paid and not recovered as ranges. While these ranges account for risk surrounding known IDT returns that were paid to actual fraudsters, they do not take into account the cumulative impact of additional assumptions on the estimate. For example, IRS’s analysis does not account for the impact of how IRS defines the population of likely IDT returns. IRS should conduct additional analyses to understand the estimates’ uncertainty and report the imprecision and uncertainty of the estimates. Specifically, sensitivity analysis could help IRS understand how each assumption affects the estimates, and a risk and uncertainty analysis could help IRS understand the cumulative impact of all assumptions on the Taxonomy estimates.

We also recommended that IRS estimate and document the costs, benefits, and risks of possible options for taxpayer authentication, in accordance with Office of Management and Budget and National Institute of Standards and Technology guidance. Analysis of costs, benefits, and risks could help inform IRS’s and Congress’s decisions about whether and how much to invest in the various authentication options. Implementation in progress. In April 2015, IRS reported that it would implement our recommendation by November 2015. In late 2015, IRS officials told us that the agency has developed guidance for the authentication group to assess costs, benefits, and risks, and that its analysis will inform decision making on authentication-related issues. While IRS is making progress, it has yet to analyze the costs, benefits, and risks of the range of authentication options available and has not used analysis to select which authentication options to use for specific types of taxpayer interactions. We continue to monitor IRS’s progress.

In addition to these eight recommendations, we also identified a matter for congressional consideration to help IRS combat IDT refund fraud. In August 2014, we reported that Congress should consider providing the Secretary of the Treasury with the regulatory authority to lower the threshold for electronic filing of the W-2 from 250 returns annually to between 5 to 10 returns, as appropriate. As discussed in table 3 above, earlier access to W-2s could help IRS match W-2 information to taxpayers’ returns and identify discrepancies before issuing billions of dollars of fraudulent IDT refunds. However, paper W-2s are unavailable for IRS matching until later in the year due to the additional time needed to process paper forms. The Social Security Administration estimated that to meaningfully increase the electronic filing (e-filing) of W-2s, the threshold would have to be lowered to include those filing 5 to 10 W-2s. In addition, the Social Security Administration estimated an administrative cost savings of about 50 cents per e-filed W-2. Based on these cost savings and the ancillary benefits they provide in supporting IRS’s efforts to conduct more prerefund matching, a change in the e-filing threshold is warranted.
As of March 2016, Congress has not acted on this matter for consideration. James R. McTigue, Jr., (202) 512-9110 or mctiguej@gov. In addition to the individual named above, Neil Pinney, Assistant Director; Shannon Finnegan, Analyst-in-Charge; Lisette Baylor; Dawn Bidne; Amy Bowser; Sara Daleski; Deirdre Duffy; Michele Fejfar; Lauren Friedman; Robert Gebhart; Jason Lee; Dae Park; Jeffrey Daniel Paulk; Robert Robinson; and Albert Sim made key contributions to this report. Joanna Berry, Gary Bianchi, Mark Canter, Nina Crocker, Jeffrey Knott, Paul Middleton, Sabine Paul, Sara Pelton, Bradley Roach, and Julie Spetz also provided assistance. 2016 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-16-375SP. Washington, D.C.: April 13, 2016. Information Security: IRS Needs to Further Improve Controls over Financial and Taxpayer Data. GAO-16-398. Washington, D.C.: March 28, 2016. Financial Audit: IRS’s Fiscal Years 2015 and 2014 Financial Statements. GAO-16-146. Washington, D.C.: November 12, 2015. Information Security: IRS Needs to Continue Improving Controls over Financial and Taxpayer Data. GAO-15-337. Washington, D.C.: March 19, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Identity Theft and Tax Fraud: Enhanced Authentication Could Combat Refund Fraud, but IRS Lacks an Estimate of Costs, Benefits and Risks. GAO-15-119. Washington, D.C.: January 20, 2015. Identity Theft: Additional Actions Could Help IRS Combat the Large, Evolving Threat of Refund Fraud. GAO-14-633. Washington, D.C.: August 20, 2014. Financial Audit: IRS’s Fiscal Years 2013 and 2012 Financial Statements. GAO-14-169. Washington, D.C.: December 12, 2013. Internal Revenue Service: 2013 Tax Filing Season Performance to Date and Budget Data. GAO-13-541R. Washington, D.C.: April 15, 2013. Identity Theft: Total Extent of Refund Fraud Using Stolen Identities is Unknown. GAO-13-132T. 
Washington, D.C.: November 29, 2012. Financial Audit: IRS’s Fiscal Years 2012 and 2011 Financial Statements. GAO-13-120. Washington, D.C.: November 9, 2012. Taxes and Identity Theft: Status of IRS Initiatives to Help Victimized Taxpayers. GAO-11-721T. Washington, D.C.: June 2, 2011. Taxes and Identity Theft: Status of IRS Initiatives to Help Victimized Taxpayers. GAO-11-674T. Washington, D.C.: May 25, 2011. Tax Administration: IRS Has Implemented Initiatives to Prevent, Detect, and Resolve Identity Theft-Related Problems, but Needs to Assess Their Effectiveness. GAO-09-882. Washington, D.C.: September 8, 2009.
IRS estimates that, in 2014, it prevented or recovered $22.5 billion in attempted IDT refund fraud, but paid $3.1 billion in fraudulent IDT refunds. Because of the difficulties in knowing the amount of undetected fraud, the actual amount could differ from these point estimates. IDT refund fraud occurs when a refund-seeking fraudster obtains an individual's identifying information and uses it to file a fraudulent tax return. Despite IRS's efforts to identify and prevent IDT refund fraud, this crime is an evolving and costly problem. GAO was asked to examine IRS's efforts to combat IDT refund fraud. This report (1) evaluates the performance of IRS's TPP and (2) assesses IRS's efforts to improve its estimates of IDT refund fraud costs for 2014. To evaluate TPP, GAO reviewed IRS studies, reviewed relevant guidance, and met with agency officials. Further, GAO conducted a scenario analysis to understand the effect of different assumptions on IRS's TPP analysis. To assess IRS's IDT cost estimates, GAO evaluated IRS's methodology against selected best practices in the GAO Cost Guide.

Taxpayer Protection Program (TPP). While the Internal Revenue Service (IRS) has made efforts to strengthen TPP—a program to authenticate the identities of suspicious tax return filers and prevent identity theft (IDT) refund fraud—fraudsters are still able to pass through and obtain fraudulent refunds. TPP authenticates taxpayers by asking questions only a real taxpayer should know; however, fraudsters can pass by obtaining a taxpayer's personally identifiable information (PII). IRS estimates that of the 1.6 million returns selected for TPP, it potentially paid $30 million to IDT fraudsters who filed about 7,200 returns that passed TPP authentication in the 2015 filing season; however, GAO's analysis suggests the amount paid was likely to be higher.
Although IRS conducted a risk assessment for TPP in 2012, IRS has not conducted an updated risk assessment that reflects the current threat of IDT refund fraud—specifically, the threat that some fraudsters possess the PII needed to pass authentication questions. Federal e-authentication guidance requires agencies to assess risks to programs. An updated risk assessment would help IRS identify opportunities to strengthen TPP. Strengthened authentication would help IRS prevent revenue loss and reduce the number of legitimate taxpayers who become fraud victims.

IDT Refund Fraud Cost Estimates. In response to past GAO recommendations, IRS adopted a new methodology in an effort to improve its 2014 IDT refund fraud cost estimates. However, the estimates do not include returns that fail to meet specific refund thresholds. IRS officials said the thresholds allow them to prioritize IRS's enforcement efforts. However, using thresholds could result in incomplete estimates. Improved estimates would help IRS better understand how fraud is evading agency defenses. The GAO Cost Guide states that cost estimates should include all relevant costs. Additionally, IRS's estimates of refunds it protected from fraud are based on the Global Report, which counts each time a fraudulent return is caught by IRS and thus counts some returns multiple times. IRS uses this data source because it is IRS's official record of IDT refund fraud. The GAO Cost Guide states that agencies should use primary data for estimates and that the data should contain few mistakes. By using the Global Report, as opposed to return-level data, IRS produces inaccurate estimates of IDT refund fraud, which could impede IRS and congressional efforts to monitor and combat this evolving threat. GAO recommends that IRS update its TPP risk assessment and take appropriate actions to mitigate risks identified in the assessment.
GAO also recommends that IRS improve its IDT cost estimates by removing refund thresholds and using return-level data where available. IRS agreed with GAO's TPP recommendations and will update its risk assessment. IRS took action consistent with GAO's IDT cost estimate recommendations.
The nation’s small community airports, while large in number, serve only a small portion of the nation’s air travelers and face issues very different from those of larger airports. Airports that are served by commercial airlines in the United States are categorized into four main groups based on the annual number of passenger enplanements—large hubs, medium hubs, small hubs, and nonhubs. In 2001, the 31 large hub airports and 36 medium hub airports (representing about 13 percent of commercial service airports) enplaned the vast majority—89 percent—of the more than 660 million U.S. passengers. In contrast, those normally defined as small community airports—the 69 small hub airports and 400 nonhub airports—enplaned about 8 percent and 3 percent of U.S. passengers, respectively. There are significant differences in both the relative size and type of service among these communities, as shown in Figure 1. Officials from small communities served by small hub and nonhub airports reported that limited air service is a long-standing problem. This problem has been exacerbated by the economic downturn and the events of September 11. Fundamental economic principles help explain the situation small communities face. Essentially, these communities have a smaller population base from which to draw passengers, which in turn means they have limited potential to generate a profit for the airlines. Relatively limited passenger demand, coupled with the fact that air service is an inherently expensive service to provide, makes it difficult for many such communities to attract and keep air service. The recent economic downturn and the events of September 11 dealt a severe financial blow to many major airlines, and the results of these losses can be felt in even the smallest communities. United Airlines and US Airways are in bankruptcy proceedings, and one Wall Street analyst is projecting industry losses of $6.5 billion for 2003, the third straight year of multibillion-dollar losses.
While major airlines often do not serve small communities directly, many have agreements with smaller regional airlines to provide air service to small communities. This provides feeder traffic into the larger network. Consequently, financial problems for major airlines and their resulting cost-cutting efforts may ultimately affect the air service a small community receives. Complicating the financial situation for both major and regional airlines is the growing presence of low-fare airlines, such as Southwest Airlines. Low-fare airlines’ business model of serving major markets, not small communities, has helped these airlines better weather the economic downturn. Airport officials have reported that these airlines’ low fares attract passengers from a large geographic area, and many small airports face significant “leakage” of potential local passengers to airports served by low-fare airlines. In a March 2002 report, we found that almost half of the nonhub airports studied were within 100 miles of a major airline hub or an airport served by a low-fare airline, as illustrated in Figure 2. Further, over half of the 207 small community airport officials we surveyed said they believed local residents drove to another airport for airline service to a great or very great extent. Eighty-one percent of them attributed the leakage to the availability of lower fares from a major airline at the alternative airport. Local, state, and federal governments all play roles in developing and maintaining air service for small communities. Air service is a local issue because commercial airports in the United States are publicly owned facilities, serving both local and regional economies. Many state and local governments provide funding and other assistance to help communities develop or maintain local air service.
The federal government has assisted in developing air service through both the EAS program, which subsidizes air service to eligible communities, and the Pilot Program, which provided grants to foster effective approaches to improving air service to small communities. The assumption underlying these efforts is that connecting small communities to the national air transportation system is both fundamental to local economic vitality and in the national interest. The Administration’s budget proposal for fiscal year 2004 substantially reduces funding for small community air service. The budget would reduce EAS funding from $113 million in 2003 to $50 million in 2004 and would change the program’s structure by altering eligibility criteria and requiring nonfederal matching funds. The 2004 budget proposal does not include funds for the Pilot Program. Our recent review of nearly 100 small community air service improvement efforts undertaken by states, local governments, or airports showed that communities attempted three main categories of efforts (see Table 1): studies, like those used by communities in Texas and New Mexico, to determine the potential demand for new or enhanced air service; marketing, like Paducah, Kentucky’s, “Buy Local, Fly Local” advertising campaign, used to educate the public about the air service available, or Olympia, Washington’s, presentations to airlines to inform them about the potential for new or expanded service opportunities; and financial incentives, such as the “travel bank” program implemented by Eugene, Oregon, in which local businesses pledged future travel funds to encourage an airline to provide new or additional service. Studies by themselves have no direct effect on the demand for or supply of air service, but they can help communities determine if there is adequate potential passenger demand to support new or improved air service.
Marketing can have a more direct effect on demand for air service if it convinces passengers to use the local air service rather than driving or flying from another airport. While the specific effect is difficult to ascertain, an airport official from Shenandoah Valley, Virginia, pointed out that his airport’s annual enplanements more than doubled—from 8,000 to 20,000—after a marketing and public relations campaign. Marketing the airport to airlines may also have a direct effect on the supply of air service if the efforts succeed in attracting new airlines or more service from existing airlines. Financial incentives most directly affected the level of air service provided in the communities we studied. Financial incentives mitigate some of the airline’s risk by providing some assurance about the financial viability of the service. The incentives take a number of different forms, as shown in Table 2. Some programs provided subsidies to airlines willing to supply service. Some provided revenue guarantees, under which the community and airline established revenue targets and the airline received payments only if actual revenues did not meet targets. Financial incentives can attract new or enhanced air service to a community, but incentives do not guarantee that the service will be sustained when the incentives end. We studied the efforts of 12 communities in detail, all but one of which used a financial incentive program. Of these, five had completed their program but only Eugene, Oregon, was able to sustain the new service after the incentive program ended. At the other four—all nonhub airports smaller than Eugene—the airline ceased service when the incentives ended. However, while a community’s size is important, it is largely beyond a community’s control. We identified two other factors, more directly within a community’s control, that were also important for success. 
The first, the presence of a catalyst for change, was particularly important in getting the program started. The catalyst was normally state, community, or airport officials who recognized the air service deficiencies and began a program for change. More important to the long-term sustainability, however, was a community consensus that air service is a priority. This second factor involves recognizing that enhanced air service is likely to come at a price and developing a way in which the community agrees to participate. At many of the communities we studied, there was not a clear demonstration of community commitment to air service. The two major federal efforts to help small communities attract or retain air service are the EAS program and the Pilot Program. The Congress established EAS as part of the Airline Deregulation Act of 1978, due to concern that air service to some small communities would suffer in a deregulated environment. The act guaranteed that communities served by airlines before deregulation would continue to receive a certain level of scheduled air service. If an airline cannot provide service to an eligible community without incurring a loss, then the Department of Transportation (DOT) can use EAS funds to award that airline, or another airline willing to provide service, a subsidy. Funding for EAS was $113 million for fiscal years 2002 and 2003. The other major program, the Pilot Program, was authorized as part of the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century (AIR-21). The Pilot Program’s mission is to assist communities in developing projects to enhance their access to the national air transportation system. The Pilot Program differs from EAS because communities, not airlines, receive the funds and the communities develop the program that they believe will best address their air service needs. The Congress appropriated $20 million in both fiscal years 2002 and 2003 for this effort. 
The EAS program’s costs have increased dramatically since 1995, but the actual number of passengers using EAS-subsidized air service has dropped. Total program funding increased from $37 million in 1995 to $113 million in 2002 (in 2002 constant dollars). Further, during this period, the subsidy per community nearly doubled, from almost $424,000 to over $828,000. However, total passenger enplanements at EAS-subsidized communities decreased about 20 percent between 1995 and 2000, falling from 592,000 to 477,000. As a result, the per-passenger subsidy (for continental U.S. communities) increased from $79 to an estimated $229 in 2002, a nearly 200-percent increase. Table 3 provides more information. Two key factors will likely continue to increase EAS program costs in the future. First, more communities may require subsidized service. As of February 2003, the EAS program served 125 communities, up from the 114 served only 7 months earlier. Of these, 88 are in the continental United States and 37 are in Alaska, Hawaii, and Puerto Rico. According to DOT officials, more small communities will likely lose unsubsidized commercial service in the future—especially those served by one airline. Some of these communities could be eligible to receive an EAS subsidy. In October 2001, there were 98 small communities being served by one carrier. Of the 98, 25 have smaller populations and lower levels of employment than the typical EAS-subsidized community, 21 have lower levels of income per capita, and 35 have lower levels of manufacturing earnings. Second, EAS-subsidized communities tend to generate limited passenger revenue because surrounding populations are small and the few travelers generated in each community tend to drive to their destinations or fly from other, larger airports for lower airfares and improved service options.
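The subsidy trends above follow directly from the figures given. As a quick arithmetic check (a sketch using only the numbers stated in the text, not GAO's own computation), the per-community subsidy grew by a factor of roughly 1.95, consistent with "nearly doubled," and the per-passenger subsidy rose about 190 percent, consistent with "nearly 200-percent increase":

```python
# Arithmetic check of the subsidy trends, using the figures from the text:
# per-community subsidy and continental U.S. per-passenger subsidy, 1995 vs. 2002.

per_community_1995, per_community_2002 = 424_000, 828_000
per_passenger_1995, per_passenger_2002 = 79, 229

community_ratio = per_community_2002 / per_community_1995
passenger_increase_pct = (per_passenger_2002 - per_passenger_1995) / per_passenger_1995 * 100

print(f"per-community subsidy grew {community_ratio:.2f}x")         # about 1.95x, nearly doubled
print(f"per-passenger subsidy rose {passenger_increase_pct:.0f}%")  # about 190%, nearly 200 percent
```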
EAS community airports may serve less than 10 percent of the local passenger traffic; over half of the subsidized communities in the continental U.S. are within 125 miles of a larger airport. This low demand and “passenger leakage” to other airports depress the revenue carriers can make from EAS routes, making the program less attractive to airlines and increasing subsidy costs. There are clear questions about the EAS program’s effectiveness. In a recent report on the EAS program, we outlined a number of options that the Congress could consider to enhance the long-term viability of the program. For example, one option was to target subsidized service to more remote communities with fewer other transportation options. Another option was to restructure or replace subsidies to airlines with local grants. This could enable communities to better match their transportation needs with locally available options. Some of the options discussed in our report were incorporated in the Administration’s fiscal year 2004 budget proposal. In its first year of operation, small communities demonstrated an extraordinary demand for air service development funds. DOT received 180 applications requesting over $142.5 million—more than seven times the funds available—from communities in 47 states. By December 2002, DOT had awarded nearly $20 million in grants to 40 small communities (or consortia of communities). The grants ranged in amount from $44,000 to over $1.5 million. Some of the grants are being used for such innovative ideas as the following: Mobile, Alabama, a small hub, received a grant of $457,000 to continue providing ground handling service for one of its airlines. While this is a common practice in Europe, a Mobile official told us that he is only aware of one other airport in the United States that provides these services for an airline. Baker City, Oregon, received a grant of $300,000 to invest in an air taxi franchise. 
Baker City has a small population and is in a fairly remote part of Oregon that does not have scheduled airline service. The community decided to pursue an alternative to scheduled service and purchased an air taxi franchise from SkyTaxi, a company that provides on-demand air service. Casper, Wyoming, received a grant of $500,000 to purchase and lease back an aircraft to an airline to ensure that the airline serves the community. It is fairly unusual for a community to approach air service development by purchasing an aircraft to help defray some of the airline’s costs and mitigate some of the airline’s risk in providing the service. However, the majority of these grants funded the same types of projects discussed earlier—studies of a community’s potential market, marketing activities to stimulate demand for service or to lure an airline, and financial incentives such as subsidies to airlines for providing service. If these communities experience the same results as the other state and local efforts we identified, their efforts are unlikely to attract new or enhanced service for the small communities using them, or if they do, the service will only last as long as these funds are available. Since final grant agreements were signed in December 2002, it is too early to determine how effective the various types of initiatives might prove to be. Additionally, some of the funded projects contain multiple components and some are scheduled to be implemented over several years. Therefore, it might be some time before DOT is able to evaluate the initial group of projects to determine which have been effective in initiating or enhancing small community air service over the long term. As air service to small communities becomes increasingly limited and as the national economy continues to struggle, questions about the efficacy of those programs highlight issues regarding the type and extent of federal assistance for small community air service.
The EAS program appears to be meeting its statutory objectives of ensuring air service to eligible communities, yet the program clearly has not provided an effective transportation solution for most travelers to or from those communities. Subsidies paid directly to carriers support limited air service, but not the quality of service that passengers desire, and not at fares that attract local passenger traffic. As a result, relatively few people who travel to or from some of these communities use the federally subsidized air service. Many travelers’ decisions to use alternatives—whether another larger airport or simply the highway system—are economically and financially rational. Several factors—including increasing carrier costs, limited passenger revenue, and an increasing number of eligible communities requiring subsidized service—are likely to affect future demands on the EAS program. The number of communities that are eligible for EAS-subsidized service is likely to increase in the near term, creating a subsidy burden that could exceed current appropriations. Should the EAS program be fully funded so that no eligible community loses its direct connection to the national air transportation network? Should the EAS program be fundamentally changed in an attempt to create a more effective transportation option for travelers? In August 2002, we identified various options to revise the program to enhance its long-term viability, along with some of their associated potential effects. The Pilot Program also appears to have met its statutory objective of extending federal assistance to 40 nonhub and small hub communities to assist communities in developing projects to enhance their access to the national air transportation system. Yet whether any of the projects funded will prove to be effective at developing sustainable air service is uncertain. Relatively few communities offered innovative approaches to developing or enhancing air service. 
Most of the initiatives that received federal grants resembled other state or local efforts that we had already identified. Evidence from those efforts indicated that some communities could develop sustainable air service—but likely only small hub communities that have a relatively large population and economic base. Among smaller, nonhub communities, direct financial assistance to carriers was most effective at attracting air service, but only as long as the financing existed. If the Pilot Program is extended, will it essentially become another subsidy program? Reauthorization provides an opportunity for the Congress to clarify the federal strategy for assisting small communities with commercial air service. We believe that there may be a number of questions that need to be addressed, including the following: What amount of assistance would be needed to maintain the current federal commitment to both small hub and nonhub airports? Would federal assistance be better targeted at nonhub or small hub communities, but not both? Rather than providing subsidies directly to carriers, should federal assistance be directed to states or local communities to allow them to determine the most effective local strategy? What role should state and local governments play in helping small communities secure air service? Mr. Chairman and members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions you or other members of the Subcommittee might have. For further information on this testimony, please contact JayEtta Hecker at (202) 512-2834. Individuals making key contributions to this testimony included Janet Frisch, Steve Martin, Stan Stenersen, and Pamela Vines. Commercial Aviation: Factors Affecting Efforts to Improve Air Service at Small Community Airports. GAO-03-330. Washington, D.C.: January 17, 2003. Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: October 2, 2002. 
Options to Enhance the Long-term Viability of the Essential Air Service Program. GAO-02-997R. Washington, D.C.: August 30, 2002.
Commercial Aviation: Air Service Trends at Small Communities Since October 2000. GAO-02-432. Washington, D.C.: March 29, 2002.
“State of the U.S. Commercial Airlines Industry and Possible Issues for Congressional Consideration,” speech by Comptroller General of the United States David Walker. The International Aviation Club of Washington: November 28, 2001.
Financial Management: Assessment of the Airline Industry’s Estimated Losses Arising From the Events of September 11. GAO-02-133R. Washington, D.C.: October 5, 2001.
Commercial Aviation: A Framework for Considering Federal Financial Assistance. GAO-01-1163T. Washington, D.C.: September 20, 2001.
Aviation Competition: Restricting Airline Ticketing Rules Unlikely to Help Consumers. GAO-01-832. Washington, D.C.: July 31, 2001.
Aviation Competition: Challenges in Enhancing Competition in Dominated Markets. GAO-01-518T. Washington, D.C.: March 13, 2001.
Aviation Competition: Regional Jet Service Yet to Reach Many Small Communities. GAO-01-344. Washington, D.C.: February 14, 2001.
Airline Competition: Issues Raised by Consolidation Proposals. GAO-01-402T. Washington, D.C.: February 7, 2001.
Aviation Competition: Issues Related to the Proposed United Airlines-US Airways Merger. GAO-01-212. Washington, D.C.: December 15, 2000.
Essential Air Service: Changes in Subsidy Levels, Air Carrier Costs, and Passenger Traffic. GAO/RCED-00-34. Washington, D.C.: April 14, 2000.
Small communities have long faced challenges in obtaining or retaining the commercial air service they desire. These challenges are increasing as many U.S. airlines try to stem unprecedented financial losses through numerous cost-cutting measures, including reducing or eliminating service in some markets, often small communities. Congress will be considering whether to reauthorize its federal assistance programs for small communities. GAO was asked to describe the kinds of efforts that states and local communities have taken to enhance air service at small communities; federal programs for enhancing air service to small communities; and issues regarding the type and extent of federal assistance to enhance air service to small communities. Small communities have taken a variety of steps to try to obtain or improve air service, such as marketing to increase passengers' demand for local service or offering financial incentives to airlines to attract new or enhanced service. At communities GAO studied in depth, financial incentives were most effective in attracting new service. However, the additional service often ceased when incentives ended. The two key federal programs to help small communities with air service face increasing budgetary pressures and questions about their effectiveness. Demand for these programs is heavy and may increase as airlines reduce service to communities. The Essential Air Service program subsidizes carriers that provide air service to eligible small communities. However, program costs have tripled since 1995, and fewer passengers use the subsidized local service. Most choose to drive to their destination or to fly to and from another nearby airport with more service or lower fares. The Small Community Air Service Development Pilot Program, in its first year of operation, provided $20 million in grants to help small communities enhance service. 
Most programs funded appear similar to those undertaken by communities and may not result in sustainable service enhancements. Questions about the efficacy of these programs highlight issues regarding the type and extent of federal assistance for small community air service. Reauthorization provides an opportunity for the Congress to clarify the federal strategy for assisting small communities with air service.
Pursuant to Homeland Security Presidential Directive 6, TSC was established to create and maintain the U.S. government’s consolidated watchlist—the Terrorist Screening Database (TSDB)—and to provide for the use of watchlist records during security-related and other screening processes. The watchlisting and screening processes are intended to support the U.S. government’s efforts to combat terrorism by consolidating the terrorist watchlist and providing screening and law enforcement agencies with information to help them respond appropriately during encounters with known or suspected terrorists, among other things. TSC receives watchlist information for inclusion in the TSDB from two sources: NCTC and the FBI. TSC receives the vast majority of its watchlist information from NCTC, which compiles information on known or suspected international terrorists. NCTC receives this information from executive branch departments and agencies—such as the Central Intelligence Agency (CIA), State, and the FBI—and maintains the information in its Terrorist Identities Datamart Environment (TIDE) database. Agencies that submit nominations to NCTC are to include pertinent derogatory information and any biographic information—such as name and date of birth—needed to establish the identity of individuals on the watchlist. The FBI provides TSC with information about known or suspected domestic terrorists. In general, the FBI nominates individuals who are subjects of ongoing FBI counterterrorism investigations to TSC for inclusion in the TSDB, including persons the FBI is preliminarily investigating to determine if they have links to terrorism. 
In accordance with Homeland Security Presidential Directive 6—and built upon through Homeland Security Presidential Directives 11 and 24—the TSDB is to contain information about individuals known or suspected to be or have been engaged in conduct constituting, in preparation for, in aid of, or related to terrorism and terrorist activities. Nominating agencies, NCTC, and the FBI apply a reasonable-suspicion standard to determine which individuals are appropriate for inclusion in the TSDB. NCTC and the FBI are to consider information from all available sources to determine if there is a reasonable suspicion of links to terrorism that warrants a nomination. Once NCTC and the FBI determine that an individual meets the reasonable-suspicion standard and that minimum biographic information exists, they extract sensitive but unclassified information on the individual’s identity—such as name and date of birth—from their classified databases and send the information to TSC. TSC reviews these nominations—evaluating the derogatory and biographic information—to decide whether to add nominated individuals to the TSDB. Appendix II contains additional information on the watchlist nominations process. To support agency screening processes, TSC sends applicable records from the TSDB to screening and law enforcement agency systems based on the agency’s mission responsibilities and other factors. For instance, applicable TSC records are provided to TSA for use in screening airline passengers, to CBP for use in inspecting and vetting persons traveling to and from the United States, and to State for use in screening visa applicants. 
Regarding individuals who are not citizens or nationals of the United States seeking to travel to and lawfully enter the United States, screening and law enforcement agencies rely on immigration laws that specify criteria for determining whether to issue visas to individuals and whether to admit them into the country. In many instances, individuals who are not citizens or nationals of the United States who have engaged in or are likely to engage in terrorist-related activities may be ineligible to receive visas or inadmissible for entry to the United States, or both. U.S. citizens returning to the United States from abroad are not subject to the admissibility requirements of the Immigration and Nationality Act, regardless of whether they are subjects of watchlist records. In general, these individuals only need to establish their U.S. citizenship to the satisfaction of the examining officer—by, for example, presenting a U.S. passport—to obtain entry into the United States. U.S. citizens are subjected to inspection by CBP before being permitted to enter, and additional actions may be taken, as appropriate. On December 25, 2009, Umar Farouk Abdulmutallab, a 23-year-old Nigerian man, attempted to detonate a concealed explosive device on Northwest Airlines Flight 253 en route from Amsterdam to Detroit as the plane descended into the Detroit Metropolitan Wayne County Airport. According to the Executive Office of the President’s and Senate Select Committee on Intelligence’s inquiries into events that led to the attempted attack, failures across the intelligence community—including human errors, technical problems, and analytic misjudgments—contributed to the government’s failure to identify the subject as a threat that would qualify him for inclusion on the terrorist watchlist. The inquiries concluded that the intelligence community held information on Mr. 
Abdulmutallab—he was included in TIDE at the time of the attempted attack—but that it was fragmentary and ultimately not pieced together to form a coherent picture of the threat he posed (see fig. 1). The government inquiries also raised issues regarding how agencies used and interpreted the 2009 watchlisting protocol for nominating individuals to the watchlist. For example, according to the Executive Office of the President’s review, although Mr. Abdulmutallab was entered into TIDE in November 2009, NCTC determined that the associated derogatory information did not meet the criteria for nominating him to the terrorist watchlist. Therefore, NCTC did not send the nomination to TSC. Also, according to the Senate Select Committee on Intelligence report, agencies may have interpreted the 2009 watchlisting protocol’s standards for placing individuals on the watchlist too rigidly, thereby preventing Mr. Abdulmutallab from being nominated for inclusion on the watchlist. Under the auspices of the Information Sharing and Access Interagency Policy Committee, TSC—in coordination with watchlisting and screening agencies—reviewed the 2009 watchlisting protocol and made recommendations regarding whether adjustments to the protocol were warranted. The Deputies Committee—a senior interagency forum that considers policy issues affecting national security—initially approved new watchlisting guidance for issuance to the watchlisting and screening communities in May 2010. After a multiagency classification review was completed, the Deputies Committee approved a final version of the Watchlisting Guidance in July 2010, which TSC issued to the watchlisting and screening communities. The July 2010 Watchlisting Guidance includes changes that were intended to address weaknesses in the nominations process that were exposed by the December 2009 attempted attack and to clarify how agencies are to nominate individuals to the watchlist. 
Since the guidance was approved, nominating agencies have expressed concerns about the increasing volumes of information and related challenges in processing this information and noted that the long-term impacts of the revisions may not be known for some time. For example, the watchlisting unit director from one agency reported that the agency is experiencing an increasing intake of information from its sources, which has impacted its analysts’ reviews of this information. Also, officials from some agencies reported that at times they have had to temporarily add personnel to review and process the large volumes of information. Data from the nominating agencies we contacted show that the agencies sent more nominations-related information to NCTC after the attempted attack than before the attack. According to NCTC officials, the center experienced an increase in nominations beginning in February 2010. The officials noted that in May 2010, the volume of incoming nominations exceeded NCTC’s ability to process it, resulting in a backlog. NCTC has applied additional resources—both staffing and technological—to address its backlog. As a result, in October 2011, NCTC officials noted that the center had virtually eliminated its backlog. Moreover, unless TSC has the ability to process the information it receives, it cannot add information to the TSDB for use by screening and law enforcement agencies. Overall, the volume of nominations TSC is receiving from the FBI and NCTC has generally increased since the attempted attack. According to TSC officials, the center has avoided backlogs by employing a variety of strategies to address its workload, including management of personnel resources and use of more advanced technology. Since the December 25, 2009, attempted attack, agencies involved in the watchlist nominations process have pursued staffing, technology, working groups, and other solutions to strengthen the process and manage increasing volumes of information. 
Specifically, officials from four of the seven agencies we contacted reported that they are in the process of developing and implementing certain technological solutions to address watchlisting issues. For example, NCTC, in consultation with other members of the intelligence community, reported that it is developing information technology tools to strengthen analysts’ abilities to identify potential links to terrorism. The government has also created interagency working groups to address watchlist-related issues. Further, NCTC reported that training programs have been developed and administered to its watchlisting analysts, as well as nominating and screening agency personnel. Our review of the July 2010 Watchlisting Guidance and discussions with relevant agency officials indicated that in drafting the guidance, the watchlist community emphasized quality-assurance mechanisms as well as civil rights and civil liberties protections that should be considered when nominating individuals. While agencies are pursuing actions to strengthen the watchlisting process, no single entity is accountable for routinely assessing the overall impacts the July 2010 Watchlisting Guidance is having on the watchlisting community, the extent to which these impacts are acceptable and manageable from a policy perspective, and if the impacts indicate the need for any adjustments. Further, no entity is routinely collecting and analyzing data needed to conduct such governmentwide assessments over time. In general, officials from the nominating agencies we contacted and from NCTC and TSC said that they participated in developing the July 2010 Watchlisting Guidance and agreed with the changes, but noted that they did not know at the time how changes implemented through the 2010 guidance would impact them. Routinely assessing these impacts could help agencies address any challenges they are having in implementing the watchlisting guidance. 
Agencies involved in the nominations process are taking actions to address challenges related to implementing the 2010 guidance. For example, officials from the Information Sharing and Access Interagency Policy Committee’s Subcommittee on Watchlisting noted that departments and agencies within the watchlisting community are responsible for assessing the impacts of their individual watchlisting efforts and for bringing issues, as needed, to the subcommittee. They explained that agencies react to and address issues and challenges as they arise. However, this approach has not allowed them to proactively and systematically assess the watchlisting process and identify emerging issues; achieve consensus on solutions to potential challenges before they manifest themselves; and determine if adjustments to the watchlisting guidance are needed. Because of the collaborative nature of the watchlisting process, any assessment of impacts must be an interagency effort. However, none of the interagency entities we contacted were routinely performing these assessment functions. In February 2011, officials from the Subcommittee on Watchlisting noted that the subcommittee was preparing a report on watchlisting efforts since the December 2009 attempted attack and had requested that subcommittee members provide input. At that time, the subcommittee officials noted that the Information Sharing and Access Interagency Policy Committee did not plan to conduct routine assessments of the watchlisting processes. In August 2011, a representative of the National Security Staff informed us that the Information Sharing and Access Interagency Policy Committee recently began performing an assessment function related to the July 2010 Watchlisting Guidance. The representative noted that the depth and frequency of specific reviews will vary as necessary and appropriate. 
The staff did not provide us details on these efforts, so we could not determine to what extent the assessments will be routine or involve collecting and analyzing data needed to conduct such assessments over time. Since we found no single entity that is responsible and accountable for routinely assessing the overall impacts the 2010 guidance is having on the watchlisting community—and collecting the data needed to conduct such assessments—the Assistant to the President for Homeland Security and Counterterrorism may be best positioned to ensure that governmentwide assessments are conducted. The President tasked this individual to be responsible and accountable for ensuring that agencies carry out actions to strengthen the watchlisting process after the December 2009 attempted attack. Thus, it likewise follows that this individual could be responsible and accountable for ensuring that the impacts from these actions are routinely assessed and that the results of the assessments are used to inform future watchlisting changes. According to Standards for Internal Control in the Federal Government, ongoing monitoring of programs and activities should occur during the course of normal operations. Ensuring that appropriate agencies routinely evaluate or assess the impact of the 2010 guidance on the watchlisting community could help decision makers determine if the guidance is achieving its intended outcomes or needs any adjustments, and help inform future efforts to strengthen the watchlisting process. Such assessments could also help the Information Sharing and Access Interagency Policy Committee and the watchlisting community understand longer-term impacts of changes to the watchlisting guidance, such as how increasing volumes of information are creating resource demands. 
Finally, such assessments could help to improve transparency and provide an accurate accounting to the Executive Office of the President and other stakeholders, including Congress, for the resources invested in the watchlisting process. Immediately after the December 2009 attempted attack, federal agencies took steps that resulted in an increase in the number of individuals in the TSDB and its aviation-related subsets—the No Fly and Selectee lists— based on new intelligence and threat information. Specifically, in the months following the attempted attack, agencies added these individuals to the TSDB from TIDE or from the TSDB to the No Fly or Selectee lists. Also, upon completion of this initiative, the number of U.S. persons on the No Fly List more than doubled and the number of U.S. persons on the Selectee List increased by about 10 percent. According to TSC data, the number of individuals on the No Fly List generally continued to increase during the remainder of 2010, while the number of individuals on the Selectee List remained relatively constant. To carry out these upgrades, TSC and NCTC—at the direction of the Deputies Committee and in consultation with other intelligence agencies—reviewed available intelligence and threat information that existed on certain individuals. At the same time, TSC worked with NCTC and intelligence community agencies to ensure that (1) the information that supported changing the watchlist status of the individuals was as complete and accurate as possible and (2) the individuals were placed in the TSDB and, when applicable, on the No Fly or Selectee lists, in accordance with standards and criteria for inclusion on these lists. Agencies that screen individuals against TSDB records are addressing vulnerabilities and gaps in processes that were exposed by the December 2009 attempted attack to enhance homeland security. 
For example, TSA actions have resulted in more individuals being denied boarding aircraft or subjected to enhanced screening before boarding. The number of U.S. persons (U.S. citizens and lawful permanent residents) denied boarding has also increased and, for such persons abroad, required the government to develop procedures to facilitate their return. TSA is also screening airline passengers against additional TSDB records to mitigate risks. CBP has implemented a program to build upon its practice of evaluating the risk posed by individuals attempting to enter the United States before they board flights bound for the United States. As a result, air carriers have permitted fewer individuals in the TSDB to board such flights, particularly nonimmigrant aliens. State took actions to revoke hundreds of U.S. visas immediately after the attempted attack because it determined that the individuals could present an immediate threat. These and other agency actions are intended to enhance homeland security, but no entity is routinely assessing governmentwide issues, such as how the changes have impacted agency resources and the traveling public, whether watchlist screening is achieving intended results, or if adjustments to agency programs or the watchlisting guidance are needed. After the attempted attack, TSA continued implementation of the Secure Flight program, which enabled TSA to assume direct responsibility for determining if individuals are matches to the No Fly or Selectee lists from air carriers. Secure Flight requires that air carriers collect—and that passengers provide—full name and date-of-birth and gender information, thereby improving TSA’s ability to correctly determine whether individuals are on these lists. Before Secure Flight, air carriers were not required to collect date-of-birth and gender information, and each airline conducted watchlist matching differently with varying effectiveness. 
According to TSA, the increase in individuals added to the No Fly and Selectee lists, combined with the implementation of Secure Flight, resulted in an increase in the number of times airlines encountered individuals on these lists. TSA data show that the encounters involved both domestic flights (flights to and from locations within the United States) and international flights (flights to or from the United States or over U.S. air space). Since the December 2009 attempted attack and subsequent increase in the number of U.S. persons nominated to and placed on the No Fly List, there have been instances when U.S. persons abroad have been unable to board an aircraft bound for the United States. Any individual— regardless of nationality—can be prohibited from boarding an aircraft if the threat represented by the individual meets the criteria for inclusion on the No Fly List. In general, however, U.S. citizens are permitted to enter the United States at a U.S. port of entry if they prove to the satisfaction of a CBP officer that they are in fact U.S. citizens. Lawful permanent residents, who in limited circumstances independent of the No Fly List may be rendered an applicant for admission, are usually entitled to removal proceedings prior to having their status as a lawful permanent resident terminated for immigration purposes. In our October 2007 watchlist report, we recommended that DHS assess to what extent security risks exist by not screening against more watchlist records and what actions, if any, should be taken in response. DHS generally agreed with our recommendation but noted that increasing the number of records that air carriers used to screen passengers would expand the number of misidentifications to unjustifiable proportions without a measurable increase in security. In general, misidentifications occur when a passenger’s name is identical or similar to a name in the TSDB but the passenger is not the individual on the watchlist. 
Since then, TSA assumed direct responsibility for this screening function through implementation of the Secure Flight program for all flights traveling to, from, or within the United States. According to TSA, Secure Flight’s full assumption of this function from air carriers and its use of more biographic data for screening have improved watchlist matching. This includes TSA’s ability to correctly match passenger data against TSDB records to confirm if individuals match someone on the watchlist and reduce the number of misidentifications. Appendix III contains additional information on how Secure Flight has reduced the likelihood of passengers being misidentified as being on the watchlist and related inconveniences. TSA’s actions discussed below fully respond to the recommendation we made in our October 2007 report. Specifically, TSA has implemented Secure Flight such that as circumstances warrant, it may expand the scope of its screening beyond the No Fly and Selectee lists to the entire TSDB. According to the program’s final rule, in general, Secure Flight is to compare passenger information only to the No Fly and Selectee lists because, during normal security circumstances, screening against these components of the TSDB will be satisfactory to counter the security threat. However, the rule also provides that TSA may use the larger set of “watch lists” maintained by the federal government when warranted by security considerations, such as if TSA learns that flights on a particular route may be subject to increased security risk. Also, after the attempted bombing in December 2009, DHS proposed and the Deputies Committee approved the Secure Flight program’s expanded use of TSDB records on a routine basis to screen passengers before they board flights. 
In April 2011, TSA completed the transition of the Secure Flight program to conduct watchlist matching against this greater subset of TSDB records and notify air carriers that those passengers who are determined to be a match should be designated for enhanced screening prior to boarding a flight. According to TSA, the impact on screening operations has been minimal given the relatively low volume of matches against these additional records each day. TSA noted that the entire TSDB is not used for screening since matching passenger data against TSDB records that contain only partial data could result in a significant increase in the number of passengers who are misidentified as being on the watchlist and potentially cause unwarranted delay or inconvenience to travelers. TSA also noted that as with potential misidentifications to the No Fly and Selectee lists, passengers who feel that they have been incorrectly delayed or inconvenienced can apply for redress through the DHS Traveler Redress Inquiry Program (DHS TRIP). DHS noted that TSA regularly monitors the Secure Flight program and processes and makes adjustments as needed. In fiscal year 2011, TSA reprogrammed $15.9 million into Secure Flight to begin screening against the additional TSDB records. TSA’s fiscal year 2012 budget request proposed funding to make screening against the additional records permanent. According to TSA, for fiscal year 2012, Secure Flight requested an increase of $8.9 million and 38 full-time personnel to continue supporting this expanded screening effort. According to TSA, the funding will be used for information technology enhancements that will be required to implement this expanded screening and will allow TSA to handle the increased workload. For individuals traveling by air to the United States, CBP has established programs whereby it assesses individuals before they board an aircraft to determine whether it is likely they will be found inadmissible at a port of entry. 
The following sections discuss how CBP’s Pre-Departure Targeting Program and Immigration Advisory Program handle the subset of travelers who are in the TSDB. Other high-risk and improperly documented passengers handled by these programs include passengers who have criminal histories; have had their visas revoked; are in possession of fraudulent, lost, or stolen passports; or otherwise appear to be inadmissible. In response to the attempted attack in December 2009, and as part of its border and immigration security mission, CBP implemented the Pre-Departure Targeting Program in January 2010, extending its process of assessing whether individuals would likely be found inadmissible at a port of entry before they board an aircraft to cover all airports worldwide with direct flights to the United States. Before the attempted attack, CBP assessed individuals who were departing from airports that had CBP Immigration Advisory Program officers on site. At airports without such a program, passengers in the TSDB but not on the No Fly List generally were allowed to board flights and travel to U.S. airports. Upon arrival at a U.S. port of entry, CBP would inspect the passengers and determine their admissibility. CBP continues to assess passengers through the Immigration Advisory Program for flights departing from airports that have a program presence. For both the Pre-Departure Targeting Program and the Immigration Advisory Program, if CBP determines that a passenger would likely be deemed inadmissible upon arrival at a U.S. airport, it recommends that the air carrier not board that passenger (that is, it makes a no board recommendation). CBP generally makes these no board recommendations based on provisions for admissibility found in the Immigration and Nationality Act. U.S. citizens are generally not subject to these recommendations since they are generally permitted to enter the United States at a U.S. 
port of entry if they prove to the satisfaction of a CBP officer that they are in fact U.S. citizens. CBP may also decide to not issue such recommendations for aliens in the TSDB if, for example (1) CBP officers determine that, based on a review of all available information, the individual is not likely to be denied admission to the United States, or (2) the individual was granted a waiver of inadmissibility by DHS, if such a waiver is available. For flights departing from airports without an Immigration Advisory Program officer on site, CBP is leveraging the capabilities of its officers within its Regional Carrier Liaison Groups to issue no board recommendations to air carriers. These groups were established in 2006 to assist air carriers with U.S. entry-related matters—with a primary focus on verifying the authenticity of travel documents—and to work directly with commercial air carriers on security-related matters. Regional Carrier Liaison Group staff who are located in the United States handle Pre-Departure Targeting Program no board recommendations to air carriers remotely by delivering the recommendations via phone, fax, or e-mail. CBP policy instructs staff to give no board recommendations priority over other duties, given the time and security sensitivities involved. There are three Regional Carrier Liaison Groups, which are located in Honolulu, Hawaii; Miami, Florida; and New York City, New York. Each of the three locations has authority over a region of the world, with the Honolulu location covering U.S.-bound flights from Asia and the Pacific; the New York City location covering flights from Africa, Europe, and the Middle East; and the Miami location covering flights from Latin America and the Caribbean. 
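The geographic division of responsibility among the three Regional Carrier Liaison Groups described above can be represented as a simple lookup. This sketch only restates the report's description in code; the region labels and function are illustrative assumptions, not an actual CBP system.

```python
# Illustrative lookup reflecting the regional coverage described above:
# each Regional Carrier Liaison Group location handles no board
# recommendations for U.S.-bound flights departing from its assigned
# regions. Region labels are assumptions; this is not a CBP system.

LIAISON_GROUP_BY_REGION = {
    "Asia": "Honolulu",
    "Pacific": "Honolulu",
    "Africa": "New York City",
    "Europe": "New York City",
    "Middle East": "New York City",
    "Latin America": "Miami",
    "Caribbean": "Miami",
}

def liaison_group_for(departure_region: str) -> str:
    """Return the group location responsible for flights from a region."""
    return LIAISON_GROUP_BY_REGION[departure_region]

print(liaison_group_for("Europe"))     # New York City
print(liaison_group_for("Caribbean"))  # Miami
```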
When CBP does not recommend that an individual in the TSDB be denied boarding and the passenger boards a flight bound for the United States, CBP inspects the passenger upon arrival at a U.S. airport. For aliens seeking admission to the United States, determinations on admissibility are generally made by CBP officers during this inspection in accordance with applicable provisions of the Immigration and Nationality Act. In general, aliens who are deemed inadmissible are detained by DHS until they can board a return flight home. Since the attempted attack, CBP predeparture vetting programs—the Pre-Departure Targeting Program and the Immigration Advisory Program—have resulted in hundreds more aliens being kept off flights bound for the United States because CBP determined that they likely would be deemed inadmissible upon arrival at a U.S. airport and made corresponding no board recommendations to air carriers. According to CBP officials, the increase in no board recommendations during 2010 resulted both from implementing the Pre-Departure Targeting Program in January 2010 and from the new threats made evident by the attempted attack. CBP data also show that there have been instances when individuals have boarded flights bound for the United States and arrived at U.S. airports. According to CBP officials, the vast majority of these cases involved either (1) U.S. citizens and lawful permanent residents who generally may enter the United States, and therefore, CBP generally does not recommend that air carriers not board these passengers, or (2) aliens in the TSDB who were deemed inadmissible but were granted temporary admission into the United States under certain circumstances, such as DHS granting a waiver of inadmissibility. 
At the time of our review, CBP did not have readily available data on how often aliens in the TSDB boarded flights bound for the United States—information that could help CBP assess how its predeparture programs are working and provide transparency over program results, among other things. According to CBP officials, the agency was working on adding data fields to CBP systems to capture more information related to these programs. The officials noted that these changes will allow CBP to break down and retrieve data by U.S. citizens, lawful permanent residents, and aliens, and that related reports will be produced. At our request, CBP conducted a manual review of data it compiled on the results of its processing of passengers at U.S. airports from April 2010 through September 2010. During this period, CBP data show that there were instances when aliens in the TSDB boarded flights bound for the United States and were admitted into the country. These occurrences are in addition to instances where aliens in the TSDB were able to board flights bound for and enter the United States because they had been granted admission to the country on a temporary basis under certain circumstances, such as by DHS granting a waiver of inadmissibility. According to CBP officials, for each of these occurrences, CBP officers determined—based on a review of all available and relevant information—that the derogatory information on the individual was not sufficient to render that person inadmissible under the Immigration and Nationality Act. CBP officials stated that the Pre-Departure Targeting Program increased the workload for Regional Carrier Liaison Group staff and that two of the three groups increased the number of CBP officers assigned to handle this workload. Officials also stated that staff at the facility that supports these programs experienced increased workloads, which they handled through additional hiring, overtime hours, and assignment of temporary duty personnel. 
Regional Carrier Liaison Group positions are not specifically funded but are staffed from existing CBP port personnel, with CBP port management determining the staffing levels required at each location. When issuing no board recommendations, CBP officers and air carrier personnel are not to provide the individual any law enforcement sensitive information. Rather, the CBP officers or air carrier personnel are to advise the individual to go to the U.S. consulate or the person’s home country passport office, as appropriate, to address the issue. CBP officials also noted that individuals who have travel-related concerns are advised to file an inquiry through DHS TRIP. According to DHS TRIP officials, about 20 percent of all requests for redress that it receives involve CBP inspections conducted at land, sea, or air ports of entry. After the December 2009 attempted attack, the Executive Office of the President directed TSC to determine the visa status of all known or suspected terrorists in the TSDB. TSC then worked with State to determine whether individuals who held U.S. visas should continue holding them in light of new threats made evident by the incident. Specifically, in January 2010, State revoked hundreds of visas because it determined that the individuals could present an immediate threat to the United States. State officials noted that these revocations were largely related to individuals who were added to the TSDB—or moved to the No Fly or Selectee lists—after the attempted attack based on new intelligence and threat information. In March 2010, TSC and State initiated another review and identified hundreds of cases in which individuals in the TSDB held U.S. visas. These cases included individuals who were in the TSDB at the time of the December 2009 attempted attack but did not have their visas revoked during the January 2010 review. According to State officials, all individuals who could present an immediate threat to the United States had their visas revoked within 24 hours. 
In cases involving a less clear nexus to terrorism, the officials noted that visas were not immediately revoked. The officials explained that investigating these cases can take several months and involve extensive coordination with law enforcement and intelligence officials. According to State officials, of these remaining cases, the department revoked a number of visas based on intelligence community recommendations and determined that other visas had been issued properly following the completion of an interagency review process and, in applicable cases, ineligibility waivers provided by DHS. Regarding the cases in which State determined that individuals could continue to hold visas, State officials noted that an individual’s presence in the TSDB does not itself render that person ineligible for a visa. For example, State will issue a visa if it determines that the available information supporting the TSDB record does not meet the statutory conditions under which an individual may be deemed ineligible for a visa to the United States, and the individual is not otherwise ineligible for a visa. The officials added that in those instances where State finds that an individual is ineligible for a visa—based on provisions in the Immigration and Nationality Act that define terrorist activities—the department may still, in certain circumstances, issue a visa if DHS agrees to grant a waiver of inadmissibility, if such a waiver is available. According to State officials, reasons an individual found ineligible for a visa may receive a waiver include significant or compelling U.S. government interests or humanitarian concerns. According to State officials, while the department consulted with law enforcement and intelligence community officials regarding whether to revoke the visas, State has final authority over all visa decisions. 
In addition to the hundreds of visa revocations involving individuals in the TSDB that were related to the reviews directed by the Executive Office of the President, State data show that the department revoked hundreds more visas based on terrorism-related grounds during 2010. The total number of visas State revoked during 2010 was more than double the number of visas the department revoked based on terrorism-related grounds during 2009. According to State, as of May 2011, a number of individuals in the TSDB continued to hold U.S. visas because the department found that (1) they were ineligible to hold a visa under the terrorism-related provisions of the Immigration and Nationality Act but received waivers of that ineligibility or (2) they were not ineligible to hold visas under the terrorism-related provisions of the act following standard interagency processing of the visa applications. Under current procedures, State screens visa applicant data against sensitive but unclassified extracts of biographical information drawn from TSDB records as part of its evaluation process for issuing U.S. visas. If an applicant for a visa is identified as a possible match with a TSDB record, consular officers are to initiate a process to obtain additional information on the individual’s links to terrorism, including information maintained by law enforcement and intelligence agencies. State data show that the department denied about 55 percent more nonimmigrant visas based on terrorism-related grounds during 2010 than it did during 2009, which includes denials involving individuals in the TSDB. Further, in cases where individuals were ineligible to hold nonimmigrant visas based on terrorism-related grounds but presented significant or compelling U.S. government interests or humanitarian concerns, the department recommended, and DHS granted, waivers of ineligibility. 
According to State officials, the department’s automated systems do not capture data on the number of individuals in the TSDB who applied for visas—or the related outcomes of these applications (e.g., issued or denied)—because this information is not needed to support the department’s mission. State officials noted that it would be costly to change department databases to collect information specific to individuals applying for visas who are in the TSDB, but the department is working with TSC on a process to make these data more readily available through other means. State is also partnering with other agencies to develop a new, more automated process for reviewing visa applications that is intended to be more efficient than the current process. The new process is also intended to help minimize the inconvenience of protracted visa processing times for applicants incorrectly matched to TSDB records, among other things.

Since the December 2009 attempted attack, agencies have taken actions to strengthen their respective processes for screening and vetting individuals against TSDB records. However, no entity has acknowledged that it is responsible and accountable for routinely conducting governmentwide assessments of how agencies are using the watchlist to make screening or vetting decisions and related outcomes or the overall impact screening or vetting programs are having on agency resources and the traveling public. Also, no entity is assessing whether watchlist-related screening or vetting is achieving intended results from a policy perspective, or if adjustments to agency programs or the watchlisting guidance are needed. Further, no entity is routinely collecting and analyzing data needed to conduct such governmentwide assessments over time. According to the TSC Director, conducting such assessments and developing related metrics will be important in the future. 
The actions screening and law enforcement agencies have taken since the attempted attack have resulted in more individuals in the TSDB being denied boarding on flights, being deemed inadmissible to enter the United States, and having their U.S. visas revoked, among other things. These outcomes demonstrate the homeland security benefits of watchlist-related screening or vetting, but such screening or vetting and related actions have also had impacts on agency resources and the traveling public. For example, new or expanded screening and vetting programs have required agencies to dedicate more staff to check traveler information against TSDB records and take related law enforcement actions. Also, any new or future uses of the watchlist for screening or vetting may result in more individuals being misidentified as the subject of a TSDB record, which can cause traveler delays and other inconveniences. Agencies are independently taking actions to collect information and data on the outcomes of their screening or vetting programs that check against TSDB records, but no entity is routinely assessing governmentwide issues, such as how U.S. citizens and lawful permanent residents are being affected by screening or the overall levels of misidentifications that are occurring. Routinely assessing these outcomes and impacts governmentwide could help decision makers determine whether the watchlist is achieving its intended results without unintended consequences or whether further revisions are needed. Because watchlist-related screening or vetting is a governmentwide function, any effort to assess the overall outcomes and impacts must be an interagency effort. The federal government has established interagency working groups to address screening and related issues. 
However, according to agency officials we contacted, these groups have not conducted governmentwide assessments because they have been focused on implementing new or expanding screening or vetting programs and revising related policies and procedures, among other things. Similar to watchlisting issues, in August 2011, a representative of the National Security Staff informed us that the Information Sharing and Access Interagency Policy Committee recently began performing an assessment to support its oversight of new screening processes. The representative noted that the depth and frequency of specific reviews will vary as necessary and appropriate. The staff did not provide us details on these efforts, so we could not determine to what extent the assessments will be routine or involve collecting and analyzing data needed to conduct such governmentwide assessments over time. As discussed previously, the President tasked the Assistant to the President for Homeland Security and Counterterrorism to be responsible and accountable for ensuring that agencies carry out actions to strengthen the watchlisting process after the December 2009 attempted attack. As such, the Assistant to the President may be best positioned to ensure that governmentwide assessments of the outcomes and impacts of agency screening programs are conducted. According to Standards for Internal Control in the Federal Government, ongoing monitoring of programs and activities should occur during the course of normal operations. These standards also note that performance data on agency programs should be available as a means to hold public service organizations accountable for their decisions and actions, including stewardship of public funds, fairness, and all aspects of performance. 
Routine, governmentwide assessments of screening agency programs could help the government determine if the watchlist is achieving its intended results, identify broader issues that require attention, and improve transparency and provide an accurate accounting to the Executive Office of the President and other stakeholders, including Congress, for the resources invested in screening processes. The attempt on December 25, 2009, to detonate a concealed explosive on board a U.S.-bound aircraft highlights the importance of the U.S. government placing individuals with known or suspected ties to terrorism on its watchlist. The Executive Office of the President’s review of the attempted attack found that the U.S. government had sufficient information to have uncovered and potentially disrupted the attempted attack, but shortcomings in the watchlisting process prevented the attempted bomber from being nominated for inclusion on the watchlist. The July 2010 Watchlisting Guidance includes changes that were intended to address weaknesses in the nominations process. Since the guidance was approved, agencies have expressed concerns about the increasing volumes of information and related challenges in processing this information. The federal entities involved in the nominations process are taking actions to address challenges related to implementing the guidance. However, no single entity is routinely assessing the overall impacts of the watchlisting guidance or the steps taken to strengthen the nominations process. 
Working collaboratively to ensure that the watchlisting community periodically evaluates or assesses the impacts of the revised guidance on the watchlisting community could (1) help decision makers determine if the guidance is achieving its intended outcomes or needs any adjustments, (2) inform future efforts to strengthen the watchlisting process, (3) help the watchlisting community understand longer-term impacts of changes to the watchlisting guidance, and (4) improve transparency and provide an accurate accounting to the Executive Office of the President and other stakeholders, including Congress, for the resources invested in the watchlisting process. Just as agencies are not routinely assessing the impacts of the revisions made to the watchlisting guidance or the steps taken to strengthen the nominations process, no single entity is routinely assessing information or data on the collective outcomes or impacts of agencies’ watchlist screening operations to determine the effectiveness of changes made to strengthen screening since the attempted attack or how changes to the watchlisting guidance have affected screening operations. Routine, governmentwide assessments of the outcomes and impacts of agencies’ watchlist screening or vetting programs could help ensure that these programs are achieving their intended results or identify if revisions are needed. Such assessments could also help identify broader issues that require attention, determine if impacts on agency resources and the traveling public are acceptable, and communicate to key stakeholders how the nation’s investment in the watchlist screening or vetting processes is enhancing security of the nation’s borders, commercial aviation, and other security-related activities. 
To help inform future efforts to strengthen watchlisting and screening processes, we recommend that the Assistant to the President for Homeland Security and Counterterrorism establish mechanisms or use existing interagency bodies to routinely assess (1) how the watchlisting guidance has impacted the watchlisting community—including its capacity to submit and process nominations in accordance with provisions in the guidance—and whether any adjustments to agency programs or the guidance are needed, and (2) whether use of the watchlist during agency screening processes is achieving intended results, including whether the overall outcomes and impacts of screening on agency resources and the traveling public are acceptable and manageable or if adjustments to agency programs or the watchlisting guidance are needed.

We provided a draft of the classified version of this report for comment to the National Security Staff; the Office of the Director of National Intelligence; the Departments of Defense, Homeland Security, Justice, and State; and the CIA. In its written comments, DHS noted that it appreciated the report’s identification of enhancements the department has made to several screening programs to address vulnerabilities exposed by the December 25, 2009, attempted attack, including actions taken by CBP and TSA. DHS also noted that it is committed to working with interagency stakeholders, including the Interagency Policy Committee, to ensure that its use of the watchlist in its screening programs is achieving intended results. DHS also provided technical comments, in addition to its written comments. The National Security Staff; the Office of the Director of National Intelligence; and the Departments of Defense, Justice, and State did not provide written comments to include in this report, but provided technical comments, which we have incorporated in this report where appropriate. The CIA did not provide any comments. 
We are sending copies of this report to the National Security Staff; the Attorney General; the Secretaries of the Departments of Defense, Homeland Security, and State; the Directors of National Intelligence and Central Intelligence; and appropriate congressional committees. This report is also available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions about this report, please contact Eileen R. Larence at (202) 512-6510 or [email protected]. Key contributors to this report are acknowledged in appendix V. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

Our reporting objectives were to determine (1) the actions the federal government has taken since the December 25, 2009, attempted attack to strengthen the watchlist nominations process, the extent to which departments and agencies are experiencing challenges in implementing revised watchlisting guidance, and the extent to which agencies are assessing impacts of the actions they have taken; (2) how the composition of the watchlist has changed as a result of actions taken by departments and agencies after the attempted attack; and (3) how screening and law enforcement agencies are addressing vulnerabilities exposed by the attempted attack as well as the outcomes of related screening, and to what extent federal agencies are assessing the impacts of this screening. In general, we focused on the federal entities that were tasked by the Executive Office of the President to take corrective actions after the attempted attack: the Department of Homeland Security (DHS); Department of Justice’s Federal Bureau of Investigation (FBI) and Terrorist Screening Center (TSC); Department of State (State); Department of Defense; Office of the Director of National Intelligence’s National Counterterrorism Center (NCTC); Central Intelligence Agency (CIA); and Executive Office of the President’s National Security Staff. 
To determine actions the federal government has taken to strengthen the watchlist nominations process, we analyzed postattack government reports, including reports issued by the Executive Office of the President and the Senate Select Committee on Intelligence. We analyzed the Watchlisting Guidance that was approved in July 2010 and compared it to the February 2009 watchlisting protocol—the last version that was published before the attempted attack—to identify changes that were intended to strengthen agencies’ abilities to nominate known or suspected terrorists to the watchlist. We interviewed officials from five entities that nominate individuals for inclusion on the terrorist watchlist, as well as NCTC’s Deputy Director for the Terrorist Identities and TSC’s Director. We also met with officials from the Executive Office of the President’s Information Sharing and Access Interagency Policy Committee and its Subcommittee on Watchlisting, which provides an interagency forum to which agencies can bring watchlist-related issues for discussion and resolution. To identify to what extent agencies are experiencing challenges implementing changes to the watchlisting guidance, we analyzed data and documentation provided by seven federal entities involved in the nominations process—such as nominations data for the period January 2009 through May 2011—as well as the congressional testimony of NCTC, TSC, and FBI leadership and program directors. We also interviewed the watchlisting unit directors and program staff at each of the five nominating agencies, NCTC’s Deputy Director for the Terrorist Identities, and the TSC Director to discuss their nominations processes, the number of nominations they send to NCTC, and how, if at all, the changes to the nominations process have created challenges for each agency. 
To determine to what extent agencies are assessing the impacts of the actions they have taken, we interviewed officials from five federal entities who participate in the Information Sharing and Access Interagency Policy Committee’s Subcommittee on Watchlisting and related working groups. To identify how the composition of the watchlist has changed since the attempted attack, we reviewed TSC data from late December 2009 through March 2010 on the number of individuals who were added to TSC’s Terrorist Screening Database and its subset No Fly and Selectee lists that are used to screen airline passengers before boarding, and related efforts to determine whether the individuals should remain on these lists. To identify broader trends in the size and composition of the watchlist and subset lists, we reviewed TSC monthly data for 2009 and 2010 on the number of individuals on these lists, including U.S. citizens and lawful permanent residents. We also determined how the revised watchlisting guidance has impacted the size of these lists. Further, we interviewed senior-level officials from TSC and NCTC to identify factors that contributed to trends in the size of the lists during 2009 and 2010, and to obtain their perspectives on how changes in the watchlist guidance had impacted growth in the lists. To identify how screening and law enforcement agencies have addressed vulnerabilities exposed by the attempted attack and how they are assessing the outcomes and impacts of screening or vetting, we focused on the departments and agencies that use the watchlist to screen individuals traveling to the United States—the Transportation Security Administration (TSA), which screens passengers before they board aircraft; U.S. Customs and Border Protection (CBP), which inspects travelers to determine their admissibility into the United States; and State, which screens individuals who apply for U.S. visas. 
To determine agency actions to address vulnerabilities in screening or vetting and related outcomes, we analyzed TSA, CBP, and State documentation—such as documents that discuss new or expanded screening programs—as well as testimonies and inspector general reports. We obtained data—generally for 2009 and 2010 but in some cases through May 2011—on how often these agencies have encountered individuals on the watchlist and the outcomes of these encounters to help determine what impact changes in agency screening or vetting procedures have had on operations and the traveling public, among other things. We also interviewed senior-level officials from these agencies; these interviews included discussions about how agencies’ screening or vetting procedures have changed since the attempted attack and how they are assessing the impacts of the changes. Further, to better understand the impacts of watchlist screening or vetting on the traveling public, we analyzed data for 2009 and 2010 on individuals who had inquiries or sought resolution regarding difficulties they experienced during their travel-related screening or inspection and interviewed DHS officials who are responsible for providing redress for these individuals. Regarding federal government efforts to assess the outcomes and impacts of actions agencies have taken to strengthen screening or vetting processes since the December 2009 attempted attack, we obtained information on the extent to which federal monitoring activities and practices are consistent with GAO’s Standards for Internal Control in the Federal Government. To assess the reliability of data on watchlist nominations, number of watchlist records in databases, and screening outcomes, we interviewed knowledgeable officials about the data and the systems that produced the data, reviewed relevant documentation, examined data for obvious errors, and (when possible) corroborated the data among the different agencies. 
We determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from February 2010 to May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We issued a classified report on this work in December 2011.

Pursuant to Homeland Security Presidential Directive 6, the Terrorist Screening Center (TSC) was established to develop and maintain the U.S. government’s consolidated watchlist—the Terrorist Screening Database (TSDB)—and to provide for the use of watchlist records during security-related screening processes. The watchlisting and screening processes are intended to support the U.S. government’s efforts to combat terrorism by consolidating the terrorist watchlist and providing screening and law enforcement agencies with information to help them respond appropriately during encounters with known or suspected terrorists, among other things. TIDE is the U.S. government’s central repository of information on known or suspected international terrorists and is maintained by NCTC. Individuals in TIDE include, for example, those who have provided terrorists with support such as financial benefit, false documentation or identification, weapons, explosives, or training; or who are members of or represent a foreign terrorist organization. In general, nominating agencies submit terrorism-related information to NCTC to add information to existing records in TIDE as well as to nominate new individuals to be included in TIDE, with the additional purpose of nominating known or suspected terrorists to the TSDB. Nominations are to include pertinent derogatory information and any biographic information—such as name and date of birth—needed to establish the identity of individuals on the watchlist. 
The FBI provides TSC with information about known or suspected domestic terrorists. According to the FBI’s Domestic Terrorist Operations Unit, domestic terrorists engage in activities that (1) involve acts dangerous to human life that are a violation of the criminal laws of the United States or any state; (2) appear to be intended to intimidate or coerce a civilian population, influence the policy of a government by intimidation or coercion, or affect the conduct of a government by mass destruction, assassination, or kidnapping; and (3) occur primarily within the jurisdiction of the United States. In general, the FBI nominates individuals who are subjects of ongoing FBI counterterrorism investigations to TSC for inclusion in the TSDB, including persons the FBI is preliminarily investigating to determine if they have links to terrorism. In determining whether to open an investigation, the FBI uses guidelines established by the Attorney General, which contain specific standards for opening investigations. The FBI also has a process for submitting requests to NCTC to nominate known or suspected international terrorists who are not subjects of FBI investigations. In accordance with Homeland Security Presidential Directive 6—and built upon through Homeland Security Presidential Directives 11 and 24—the TSDB is to contain information about individuals known or suspected to be or have been engaged in conduct constituting, in preparation for, in aid of, or related to terrorism and terrorist activities. NCTC and the FBI apply a reasonable-suspicion standard to determine which individuals are appropriate for inclusion in the TSDB. Determining whether individuals meet this standard, however, can involve some level of subjectivity. 
NCTC and the FBI are to consider information from all available sources and databases—including information forwarded by nominating agencies as well as information in their own holdings—to determine if there is a reasonable suspicion of links to terrorism that warrants a nomination. Once NCTC and the FBI determine that an individual meets the reasonable-suspicion standard and that minimum biographic information exists, they extract sensitive but unclassified information on the individual’s identity—such as name and date of birth—from their classified databases and send the information to TSC. TSC reviews these nominations—evaluating the derogatory and biographic information, in accordance with the watchlisting guidance—to determine whether to add nominated individuals to the TSDB. As TSC adds individuals to the watchlist, the list may include persons with possible ties to terrorism in addition to people with known links, thereby establishing a broad spectrum of individuals who are considered known or suspected terrorists. Figure 2 provides an overview of the process used to nominate individuals for inclusion in the TSDB. Consistent with Homeland Security Presidential Directive 6, to ensure that watchlist information is current, accurate, and complete, nominating agencies generally are to provide information to remove an individual from the watchlist when it is determined that no nexus to terrorism exists. To support agency screening or law enforcement processes, TSC sends applicable records from the TSDB to screening or law enforcement agency systems for use in efforts to deter or detect the movement of known or suspected terrorists. For instance, applicable TSC records are provided to TSA for use in screening airline passengers, to U.S. Customs and Border Protection (CBP) for use in vetting and inspecting persons traveling to and from the United States, and to State for use in screening visa applicants. 
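The two gates described above, a reasonable-suspicion determination followed by a minimum-biographic-information check, can be sketched as a simple decision pipeline. This is an illustrative sketch only; the record fields, function names, and messages are hypothetical and are not drawn from any actual NCTC, FBI, or TSC system.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Nomination:
    # Hypothetical record; field names are illustrative, not from any real system.
    name: Optional[str]
    date_of_birth: Optional[str]
    derogatory_info: List[str]          # summaries of terrorism-related reporting
    meets_reasonable_suspicion: bool    # analyst determination; inherently subjective

def evaluate_nomination(nom: Nomination) -> str:
    """Apply the two gates the report describes: reasonable suspicion,
    then minimum biographic information, before a record goes to TSC."""
    if not nom.meets_reasonable_suspicion:
        return "rejected: no reasonable suspicion of links to terrorism"
    if not (nom.name and nom.date_of_birth):
        return "held: minimum biographic information missing"
    # Only sensitive-but-unclassified identity data is extracted and forwarded.
    return f"forwarded to TSC: {nom.name}, {nom.date_of_birth}"
```

The subjectivity the report notes lives in the `meets_reasonable_suspicion` flag, which in practice reflects an analyst's judgment across all available sources rather than a value a system could compute.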
Regarding individuals who are not citizens or nationals of the United States seeking travel to and entry into the United States, screening and law enforcement agencies rely on immigration laws that specify criteria and rules for deciding whether to issue visas to individuals or to admit them into the country. In many instances, individuals who are not citizens or nationals of the United States who have engaged in or are likely to engage in terrorist-related activities may be ineligible to receive visas or inadmissible for entry to the United States, or both. If a foreign citizen is lawfully admitted into the United States—either permanently or temporarily—and subsequently engages in or is likely to engage in a terrorist activity, the individual, in certain circumstances, may be removed to his or her country of citizenship. U.S. citizens returning to the United States from abroad are not subject to the admissibility requirements of the Immigration and Nationality Act, regardless of whether they are subjects of watchlist records. In general, these individuals only need to establish their U.S. citizenship to the satisfaction of the examining officer—by, for example, presenting a U.S. passport—to obtain entry into the United States. U.S. citizens are subject to inspection by CBP before being permitted to enter, and additional actions may be taken, as appropriate. This appendix presents an overview of the Transportation Security Administration’s (TSA) Secure Flight program, which began implementation before the December 25, 2009, attempted attack and is a key part of TSA’s efforts to address vulnerabilities that were exposed by the incident. This appendix also discusses how the program has reduced the likelihood of passengers being misidentified as being on the watchlist and provides an update on the status of TSA efforts to validate the information that passengers report when making a reservation that is used in the watchlist-matching process. 
The matching of airline passenger information against terrorist watchlist records is a frontline defense against acts of terrorism that target the nation’s civil aviation system. In general, passengers identified by the TSA as a match to the No Fly List are prohibited from boarding flights to, from, and within the United States, while those matched to the Selectee List are required to undergo additional screening prior to boarding such flights. Historically, airline passenger prescreening was performed by air carriers pursuant to federal requirements. However, in accordance with the Intelligence Reform and Terrorism Prevention Act of 2004, TSA developed an advanced passenger prescreening program known as Secure Flight that enabled TSA to assume from air carriers the function of watchlist matching. Secure Flight is intended to eliminate inconsistencies in passenger watchlist matching procedures conducted by air carriers and use a larger set of watchlist records when warranted, reduce the number of individuals who are misidentified as being on the No Fly or Selectee lists, reduce the risk of unauthorized disclosure of sensitive watchlist information, and integrate information from DHS’s redress process into watchlist matching so that individuals are less likely to be improperly or unfairly delayed or prohibited from boarding an aircraft. In January 2009, the Secure Flight program began initial operations— assuming the watchlist-matching function for a limited number of domestic flights for one airline—and subsequently phased in additional flights and airlines. TSA completed assumption of this function for all domestic and international flights operated by U.S. air carriers in June 2010 and completed assumption of this function for covered foreign air carriers flying to and from the United States in November 2010. 
Since the December 2009 attempted attack, TSA has completed its assumption of the watchlist-matching function from air carriers—under the Secure Flight program—which has reduced the likelihood of passengers being misidentified as being on the watchlist. According to TSA data, Secure Flight is consistently clearing over 99 percent of passengers automatically (less than 1 percent of passengers are being misidentified as being on the No Fly List or Selectee List). When misidentifications occur, a passenger may not be able to print a boarding pass from a computer or an airport kiosk. Rather, the individual may have to go to the airline ticket counter to provide identifying information that is used to determine if the person is a positive match to the No Fly List or Selectee List. Before Secure Flight, more passengers had to go through this process to verify their identities, since each airline conducted watchlist matching differently with varying effectiveness. The Secure Flight program increases the effectiveness of watchlist matching, applying an enhanced watchlist-matching system and process consistently across the airline industry. Under Secure Flight, air carriers are required to (1) collect full name, date-of-birth, and gender information from airline passengers and (2) be capable of collecting redress control numbers from passengers. Collecting this additional information helps reduce misidentifications. According to TSA, Secure Flight is required to submit an annual report to the Office of Management and Budget certifying that the program has met its baseline goal for reducing misidentifications. Further, people who have been denied or delayed airline boarding; have been denied or delayed entry into or exit from the United States at a port of entry or border crossing; or have been repeatedly referred to additional (secondary) inspection can file an inquiry to seek redress. 
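The matching approach described above, in which full name and date of birth (with gender as a further data point) narrow the pool of potential matches so that most passengers clear automatically, can be illustrated with a deliberately simplified sketch. All names, record layouts, and status strings here are hypothetical; Secure Flight's actual matching system is far more sophisticated, handling transliterations, aliases, and partial data, and its details are sensitive.

```python
def normalize(name: str) -> str:
    """Collapse case, punctuation, and runs of spaces so that trivial
    formatting differences do not cause a misidentification."""
    letters = "".join(c for c in name.lower() if c.isalpha() or c.isspace())
    return " ".join(letters.split())

def screen_passenger(passenger: dict, watchlist: list) -> str:
    """Clear a passenger automatically unless both full name and date of
    birth line up with a watchlist record."""
    for record in watchlist:
        if normalize(passenger["name"]) != normalize(record["name"]):
            continue
        if passenger["dob"] != record["dob"]:
            continue  # the extra data point resolves most near-matches
        return "inhibited: verify identity at the ticket counter"
    return "cleared to print boarding pass"
```

Requiring date of birth in addition to name is what drives down false positives: two passengers sharing a watchlisted name are very unlikely to also share a birth date.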
After completing the redress process—which includes submitting all applicable documents—an individual will receive a redress control number that may facilitate future travel. For example, airline passengers who have completed the redress process and are determined by DHS as not being the subject of a watchlist record are put on the department’s list of individuals who are “cleared” to travel. Using the redress control number when making reservations for future travel may help to prevent misidentifications. To mitigate future risks of performance shortfalls and strengthen management of the Secure Flight program moving forward, in May 2009, we recommended that TSA periodically assess the performance of the Secure Flight system’s matching capabilities and results to determine whether the system is accurately matching watchlisted individuals while minimizing the number of false positives, consistent with the goals of the program; document how this assessment will be conducted and how its results will be measured; and use these results to determine whether the system settings should be modified. TSA’s actions discussed below fully respond to the recommendation we made in our May 2009 report. TSA has developed performance measures to report on and monitor Secure Flight’s name matching capabilities. According to TSA, Secure Flight leadership reviews the daily reports, which reflect quality, match rate, false positive rates, and other metrics. Reviews are to include analysis, discussion with program leadership, and identification of process and data quality improvements to increase efficiency and reduce possible false positive matches to the watchlist. In addition, DHS established a multidepartmental Match Review Board Working Group and a Match Review Board to, among other things, review the performance measures and recommend changes to improve system performance. 
According to TSA, the working group meets on a biweekly basis and the board meets monthly, or as required, to review working group findings and to make system change recommendations. For example, the board has recommended changes in the threshold used for determining whether an individual is a match to a watchlist record and has decided to implement additional search tools to enhance Secure Flight’s automated name-matching capabilities. Furthermore, TSA plans to periodically assess the extent to which the Secure Flight program fails to identify individuals who are actual matches to the watchlist. The DHS Traveler Redress Inquiry Program (DHS TRIP) is a single point of contact for individuals who have inquiries or seek resolution regarding difficulties they experienced during their travel screening at transportation hubs—like airports and train stations—or crossing U.S. borders, including watchlist issues; inspection problems at ports of entry; and situations where travelers believe they have been unfairly or incorrectly delayed, denied boarding, or identified for additional screening or inspection at our nation’s transportation hubs. While serving as the point of contact for the receipt, tracking, and response to redress applications, DHS TRIP generally refers cases to the appropriate screening agency for review and adjudication. According to DHS TRIP officials, since the December 2009 attempted attack, the office implemented a new procedure to ensure that (1) the office is promptly notified when an individual who is determined by DHS TRIP as not being the subject of a watchlist record—and, therefore, has been put on the department’s list of individuals who are “cleared” to travel—is subsequently added to the watchlist and (2) redress applicants are provided additional information regarding the resolution of their cases. 
Prior to the attempted attack, DHS TRIP would conduct electronic comparisons once each day to ensure that someone who had been cleared as a result of the redress process had not subsequently been added to the watchlist. Since the attempted attack, DHS TRIP now conducts continuous checks (on a 24/7 basis) of cleared individuals against the watchlist every time the watchlist is updated. According to DHS TRIP officials, this change provides the office immediate notification if an individual who is cleared through the redress process is subsequently added to the watchlist. In turn, DHS TRIP officials can alert screening agencies more quickly that an individual should not be cleared if encountered during screening. Separately, DHS TRIP—at the direction of the Secretary of Homeland Security and in partnership with the Terrorist Screening Center (TSC), Departments of Justice and State, Federal Bureau of Investigation (FBI), and other members of the interagency redress community—has taken steps intended to help provide transparency to redress applicants regarding the resolution of their cases. According to DHS TRIP data, individuals submitted approximately 32,000 applications for redress during 2009 and 36,000 applications during 2010. The DHS TRIP redress application asks travelers to identify their areas of concern, but the information collected generally does not allow DHS TRIP officials to determine if individuals were misidentified as being on the watchlist. DHS TRIP officials explained that since the application allows travelers to list multiple reasons for applying—and the individuals generally do not know why they were subject to additional screening, inspection, or delay—the office cannot conclude with certainty that being misidentified as being on the watchlist was the cause of an applicant’s inconvenience. 
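The shift described above, from a once-daily comparison to rechecking the cleared list every time the watchlist is updated, can be sketched as follows. Identities are reduced to opaque ID strings here purely for illustration; real records carry full biographic data, and the comparison logic is correspondingly more involved.

```python
def recheck_cleared_list(cleared_ids, watchlist_ids):
    """Return previously cleared redress identities that now appear on the
    watchlist. In this sketch, identities are opaque ID strings."""
    return set(cleared_ids) & set(watchlist_ids)

def on_watchlist_update(cleared_ids, updated_watchlist_ids):
    """Run on every watchlist update (24/7) rather than once per day, so
    screening agencies can be alerted immediately."""
    for identity in sorted(recheck_cleared_list(cleared_ids, updated_watchlist_ids)):
        print(f"alert: previously cleared identity {identity} was added to the watchlist")
```

The design change is in when the check runs, not what it computes: triggering on each update closes the window (formerly up to a day) during which a newly watchlisted individual could still be treated as cleared.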
In late 2009, as part of the rollout of TSA’s Secure Flight program, several air carriers instituted a public awareness campaign encouraging travelers to submit redress inquiries if they believed that they had been misidentified in the past. Finally, DHS TRIP officials noted that individual screening and law enforcement agencies are in the best position to understand if their screening and law enforcement systems and procedures incorrectly identify individuals as matches with watchlist records. The officials explained that these agencies have access to more detailed records that would identify reasons for a delay or inconvenience, including a misidentification to the watchlist. According to DHS TRIP, less than 1 percent of individuals who apply for redress have been confirmed matches to the watchlist or have identifying information (e.g., name and date of birth) that closely matches someone on the watchlist. In such cases, DHS TRIP forwards the inquiry to TSC for resolution. TSC data show that the government has procedures in place to review the information that supports a watchlist record upon receipt of a redress inquiry and has revised the watchlist status of individuals based on these reviews. We did not review the effectiveness of these procedures. In addition to the contact named above, Eric Erdman, Assistant Director; Mona Blake; Jeffrey DeMarco; Michele Fejfar; Lisa Humphrey; Richard Hung; Thomas Lombardi; Linda Miller; Victoria Miller; Jan Montgomery; Timothy Persons; and Michelle Woods made key contributions to this report.
|
The December 25, 2009, attempted bombing of Northwest Flight 253 exposed weaknesses in how the federal government nominated individuals to the terrorist watchlist and gaps in how agencies used the list to screen individuals to determine if they posed a security threat. In response, the President tasked agencies to take corrective actions. GAO was asked to assess (1) government actions since the incident to strengthen the nominations process, (2) how the composition of the watchlist has changed based on these actions, and (3) how agencies are addressing gaps in screening processes. GAO analyzed government reports, the guidance used by agencies to nominate individuals to the watchlist, data on the volumes of nominations from January 2009 through May 2011, the composition of the list, and the outcomes of screening agency programs. GAO also interviewed officials from intelligence, law enforcement, and screening agencies to discuss changes to policies, guidance, and processes and related impacts on agency operations and the traveling public, among other things. This report is a public version of the classified report that GAO issued in December 2011 and omits certain information, such as details on the nominations guidance and the specific outcomes of screening processes. In July 2010, the federal government finalized guidance to address weaknesses in the watchlist nominations process that were exposed by the December 2009 attempted attack and to clarify how agencies are to nominate individuals to the watchlist. The nominating agencies GAO contacted expressed concerns about the increasing volumes of information and related challenges in processing this information. Nevertheless, nominating agencies are sending more information for inclusion in the terrorist watchlist after the attempted attack than before the attempted attack. Agencies are also pursuing staffing, technology, and other solutions to address challenges in processing the volumes of information. 
In 2011, an interagency policy committee began an initiative to assess the initial impacts the guidance has had on nominating agencies, but did not provide details on whether such assessments would be routinely conducted in the future. Routine assessments could help the government determine the extent to which impacts are acceptable and manageable from a policy perspective and inform future efforts to strengthen the nominations process. After the attempted attack, federal agencies took steps to reassess the threat posed by certain individuals already identified in government databases and either add them to the watchlist or change their watchlist status, which included adding individuals to the watchlist’s aviation-related subset lists. For example, the number of U.S. persons (U.S. citizens and lawful permanent residents) on the subset No Fly List the government uses to deny individuals the boarding of aircraft more than doubled after the attempted attack. Screening agencies are addressing gaps in processes that were exposed by the attempted attack. For example, based on the growth of lists used to screen aviation passengers and continued implementation of Secure Flight—which enabled the Transportation Security Administration to assume direct responsibility for conducting watchlist screening from air carriers—more individuals have been denied boarding aircraft or subjected to additional physical screening before boarding. Secure Flight has also reduced the likelihood of passengers being misidentified as being on the watchlist and has allowed agencies to use a broader set of watchlist records during screening. U.S. Customs and Border Protection has built upon its practice of evaluating individuals before they board flights to the United States, resulting in hundreds more non-U.S. persons on the watchlist being kept off flights because the agency determined they would likely be deemed inadmissible upon arrival at a U.S. airport. 
The Department of State revoked hundreds of visas shortly after the attempted attack because it determined that the individuals could present an immediate threat to the United States. These actions are intended to enhance homeland security, but have also impacted agency resources and the traveling public. An interagency policy committee is also assessing the outcomes and impacts of these actions, but it did not provide details on this effort. Routine assessments could help decision makers and Congress determine if the watchlist is achieving its intended outcomes and help inform future efforts. GAO recommends that the Assistant to the President for Homeland Security and Counterterrorism ensure that the outcomes and impacts of agencies’ actions to strengthen nominations and screening processes are routinely assessed. Technical comments were provided and incorporated.
|
SBI is a comprehensive, multiyear, multibillion-dollar program established in November 2005 by the Secretary of Homeland Security to secure U.S. borders and reduce illegal immigration. SBI’s mission is to promote border security strategies that help protect against and prevent terrorist attacks and other transnational crimes. Elements of SBI will be carried out by several organizations within DHS. One element of SBI is SBInet, the program within CBP that is responsible for developing a comprehensive border protection system. The SBInet program is managed by the SBInet Program Management Office (PMO). The PMO reports to the CBP SBI Program Executive Director. SBInet is a large and complex program that is responsible for leading the effort to ensure that the proper mix of personnel, tactical infrastructure, rapid response capability, and technology is deployed along the border. DHS defines control of the U.S. border as the ability to detect illegal entries into the United States, identify and classify these entries to determine the level of threat involved, efficiently and effectively respond to these entries, and bring events to a satisfactory law enforcement resolution. SBInet’s initial focus will be on investments in southwest border areas between ports of entry that CBP has designated as having the highest need for enhanced border security due to serious vulnerabilities. In September 2006, CBP awarded an indefinite delivery/indefinite quantity systems integration contract for 3 years, with three additional 1-year options. The minimum dollar amount is $2 million; the maximum is stated as “the full panoply of supplies and services to provide 6,000 miles of secure U.S. border.” According to DHS, the SBInet solution is to include a variety of sensors, communications systems, information technology, tactical infrastructure (roads, barriers, and fencing), and command and control capabilities to enhance situational awareness of the responding officers. 
The solution is also to include the development of a common operating picture that provides uniform data, through a command center environment, to all DHS agencies and is interoperable with stakeholders external to DHS. See figure 1 for examples of existing technology along the border. Our statement will now focus on what type of information DHS has provided on explicit and measurable commitments relative to schedule and costs. The SBInet expenditure plan included general cost information for proposed activities and some associated milestone information. DHS estimates that the total cost for completing the acquisition phase for the southwest border is $7.6 billion for fiscal years 2007 through 2011. Of this total, approximately $5.1 billion is for the design, development, integration, and deployment of fencing, roads, vehicle barriers, sensors, radar units, and command, control, and communications and other equipment, and $2.5 billion is for integrated logistics and operations support during the acquisition phase for the southwest border. In addition, the SBInet expenditure plan and related documentation discussed generally how approximately $1.5 billion already appropriated will be allocated to SBInet activities (see table 1). For example, about $790 million is allocated for the Tucson Border Patrol sector and $260 million for the Yuma sector in Arizona. Table 1 (SBInet funding allocations, fiscal years 2005-2007, dollars in thousands) shows, among other line items, $325,000 for Yuma and Tucson tactical infrastructure, with totals of $1,187,565 and $1,549,365; notes to the table describe how funds were allocated to SBInet activities, including activities related to test and evaluation, deployment and installation, and integrated logistics support. The expenditure plan also includes certain milestones, such as starting and ending dates, for some but not all activities. For example, consistent with DHS’s task order approach to managing SBInet’s implementation, the task order for Project 28 includes detailed milestones. 
In other cases, such as the Tucson and Yuma sector activities, milestones and costs are preliminary and highly likely to change because they are still in the planning and requirements-setting stage. According to SBInet officials, factors such as technological, environmental, and eminent domain constraints can affect the timetables and costs of these activities. Figure 2 illustrates the extent to which milestones were defined for selected activities. Despite including general cost information for proposed activities and some associated milestone information, the expenditure plan and related documentation did not include sufficient details about what will be done, the milestones involved, the performance capability expected, and the costs for implementing the program. For example, although the plan stated that about $790 million will be spent in the Tucson sector for such elements as fencing, ground sensors, radars, cameras, and fixed and mobile towers, the plan did not specify how the funds will be allocated by element and did not provide specific dates for implementation. According to DHS, each task order will define what will be done, when, the performance capability, and the total cost. The expenditure plan did not include costs incurred to date mainly because SBInet activities are in the early stages of implementation and costs had not yet been captured by DHS’s accounting system (e.g., the SBInet systems integration contract was awarded in September 2006 and the first two task orders were awarded in September and October 2006). Moreover, the expenditure plan did not include a baseline measure of the miles of border under control. While the plan did not discuss progress made to date by the program to obtain control of the border, related program documents, such as the bimonthly SBI reports to Congress, included information on the number of miles under control in the southwest border. 
Our statement will now focus on DHS’s use of federal acquisition requirements and related program management best practices. As of December 2006, SBInet was using several acquisition best practices. The extent to which these practices were in use varied, and outcomes were dependent on successful implementation. Specifically, SBInet was using several of the best practices or “Guiding Principles” in DHS Management Directive 1400, such as conducting a competition open to all qualified suppliers to award the systems integration contract, and using a performance-based approach where the agency identified the outcome it was seeking to achieve and allowed the competing companies to propose their specific solutions. SBInet plans to use a system for comparing costs incurred with progress achieved known as earned value management (EVM). However, we reported that the SBInet systems integration contract did not fully satisfy an acquisition requirement to contain a specific number of units that may be ordered or a maximum dollar value. According to the Federal Acquisition Regulation (FAR), an agency may use an indefinite delivery/indefinite quantity contract, such as that used for SBInet, when it is not possible to determine in advance the precise quantities of goods or services that may be required during performance of the contract. Though these types of contracts are indefinite, they are not open-ended. The FAR requires that indefinite quantity contracts contain a limit on the supplies or services that may be ordered, stated in terms of either units or dollars. This limit serves a variety of purposes, including establishing the maximum financial obligation of the parties. According to DHS, the quantity stated in the contract, “6,000 miles of secure U.S. border,” is measurable and is therefore the most appropriate approach to defining the contract ceiling. 
We do not agree because the contract maximum used in the SBInet contract, “the full panoply of supplies and services to provide 6,000 miles of secure U.S. border,” does not allow anyone to calculate with certainty what the maximum financial obligation of the parties might turn out to be since the contract does not make clear the total amount of supplies or services that would be required to secure even 1 mile of U.S. border. In order to ensure that the contract is consistent with the FAR requirement, a maximum quantity or dollar value limit needs to be included in the contract. Managing major programs like SBInet also requires applying discipline and rigor when acquiring and accounting for systems and services, such as those requirements and practices embodied in OMB and related guidance. Our work and other best practice research have shown that applying such rigorous management practices improves the likelihood of delivering expected capabilities on time and within budget. In other words, the quality of information technology (IT) systems and services is largely governed by the quality of the management processes involved in acquiring and managing them. Some of these processes and practices are embodied in the Software Engineering Institute’s (SEI) Capability Maturity Models®, which define, among other things, acquisition process management controls that, if implemented effectively, can greatly increase the chances of acquiring systems that provide promised capabilities on time and within budget. Other practices are captured in OMB guidance, which establishes requirements for planning, budgeting, acquisition, and management of federal capital assets. As of December 2006, the SBInet program office had not fully defined and implemented critical acquisition processes, such as project planning, process and product quality assurance, measurement and analysis, and requirements management. 
To its credit, the program office has developed and begun implementing a draft risk management plan, dated September 29, 2006. The draft plan addresses, among other things, a process for identifying, analyzing, mitigating, tracking, and controlling risks. As part of this process, the program office developed a risk management database that identifies, for each risk, among other things, the status, priority, probability of occurrence, overall impact, consequence, and mitigation strategy. The program office also established a governance structure, which includes a Risk Review Board (RRB) that is chaired by the SBInet Program Manager. According to the SBInet Risk Manager, a draft RRB charter has been developed. SBInet had not yet implemented other key management practices, such as developing and implementing a system security plan, employing an EVM system to help manage and control program cost and schedule, and following capital planning and investment control review requirements to help ensure that agencies’ investments achieve a maximum return on investment. According to a SBInet program office security specialist, as of December 2006, the program office had not developed a system security plan because it was too early in the system development life cycle. He stated that a plan is to be developed as part of the system certification and accreditation process. Regarding EVM, the program office is relying on the prime integrator’s EVM system to manage the prime contractor’s progress against cost and schedule goals. The prime integrator’s system has been independently certified as meeting established standards. However, the EVM system had not been fully implemented because, as of December 2006, the baselines against which progress can be measured for the two task orders that had been issued as of early December (program management and Project 28) had not yet been established. 
According to program officials, these baselines were to be established for the program management task order and the Project 28 task order in mid-December 2006 and mid-January 2007, respectively. Further, it is unclear where SBInet is in the DHS capital planning and investment control review process. The SBInet expenditure plan did not describe the status of the program in the review process, but indicated that it is being managed using the DHS framework. However, a SBInet DHS Joint Requirements Council and Investment Review Board briefing document for fiscal year 2007, dated November 22, 2006, indicates that the program office plans to implement the review process on an annual basis. According to the SBInet Program Manager, SBInet projects are at various stages in the acquisition life cycle. As a result, the program office plans to combine multiple projects into a single decision milestone for purposes of investment review that is to occur at least annually. According to the SBInet Program Manager, the program had not fully defined and implemented critical management practices because priority was given to meeting an accelerated program implementation schedule. However, he stated that he was committed to putting these processes in place and that the program plans to develop a plan for defining and implementing critical acquisition planning processes by the spring of 2007. Until the program office fully defines and implements these key program management practices, its efforts to acquire, deploy, operate, and maintain program capabilities will be at a higher risk of not producing promised performance levels and associated benefits on time and within budget. The SBInet PMO plans to execute SBInet activities through a series of concurrent task orders that will be managed by a mix of government and contractor staff. 
The PMO plans to nearly triple its current workforce, from about 100 to 270 personnel, by September 2007 in order to support and oversee this series of concurrent task orders. As of December 2006, SBInet personnel included 38 government employees and 60 contractor staff. By September 2007, personnel levels are projected to reach 113 government employees and 157 contractors. As of December 2006, SBInet officials told us that they had assigned lead staff for the task orders that had been awarded. However, SBI and SBInet officials expressed concerns about difficulties in finding an adequate number of staff with the required expertise to support planned activities. Staffing shortfalls could limit government oversight efforts. As shown in figure 2, SBInet's acquisition approach calls for considerable concurrency among related planned tasks and activities. For example, according to DHS, lessons learned from the Project 28 task order are to be incorporated in the sector task orders. However, the task orders for the other sectors will be awarded prior to the completion and evaluation of Project 28. The program management task order is also to establish the capabilities to manage and oversee all of the other task orders. The risk of concurrency is further increased because, as discussed earlier, DHS does not have all the management processes in place to mitigate the risk and successfully manage the program. The greater the degree of concurrency among related and dependent program tasks and activities, the greater a program's exposure to cost, schedule, and performance risks. SBI and SBInet officials told us that they understand the risks inherent in concurrency and are addressing them. However, as of December 2006, they had not provided evidence identifying the dependencies among their concurrent activities or showing that they were proactively managing the associated risks. 
The legislatively mandated expenditure plan for SBInet is a congressional oversight mechanism aimed at ensuring that planned expenditures are justified, that performance against plans is measured, and that program managers are held accountable for results. We found that Congress and DHS are not in the best position to use the plan as a basis for measuring program success, accounting for the use of current and future appropriations, and holding program managers accountable for achieving effective control of the southwest border, because the plan has not provided information on explicit and measurable commitments relative to the capabilities, schedule, costs, and benefits associated with individual SBInet program activities. Specifically, DHS needs to provide sufficient details on such things as planned activities and milestones, anticipated costs and staffing levels, and expected mission outcomes. DHS also needs to document that planned SBInet expenditures are justified, performance against plans is measured, and accountability for results is ensured. We recommended that DHS ensure that future expenditure plans include explicit and measurable commitments relative to the capabilities, schedule, costs, and benefits associated with individual SBInet program activities. DHS has not fully established the capabilities needed to effectively mitigate risks and to successfully manage the program. We reported that although the SBInet contract was generally competed in accordance with federal requirements, the contract does not fully satisfy federal regulations. Under the FAR, indefinite quantity contracts such as the SBInet contract must contain the specific number of units that may be ordered or a maximum dollar value. However, the SBInet contract merely contains the maximum number of miles to be secured. 
While SBInet officials consider this sufficient to satisfy the FAR requirement, a maximum quantity expressed either in units other than the overall outcome to be achieved or as a dollar value limit would help ensure that the contract is consistent with this requirement. We recommended that DHS modify the SBInet systems integration contract to include a maximum quantity or dollar value. DHS's approach to SBInet introduces additional risk because the program's schedule entails a high level of concurrency. With multiple related and dependent projects being undertaken simultaneously, SBInet is exposed to possible cost and schedule overruns and performance problems. Without assessing this level of concurrency and how it affects project implementation, SBInet runs the risk of not delivering promised capabilities and benefits on time and within budget. We recommended that DHS re-examine the level of concurrency and appropriately adjust the acquisition strategy. DHS generally agreed with our findings and conclusions, but did not agree with our assessment that the SBInet contract does not contain specific numbers of units that may be ordered or a maximum dollar value. In addition, DHS stated that CBP intends to fully satisfy each of the legislative conditions in the near future to help minimize the program's exposure to cost, schedule, and performance risks. With respect to our recommendations, DHS concurred with two of our recommendations and disagreed with one. Specifically, DHS concurred with our recommendation for future expenditure plans to include explicit and measurable commitments relative to capabilities, schedule, costs, and benefits associated with individual SBInet program activities. According to DHS, future SBInet expenditure plans will include actual and planned progress, report against commitments contained in prior expenditure plans, and include a section that addresses and tracks milestones. 
DHS also concurred with our recommendation to re-examine the level of concurrency and appropriately adjust the acquisition strategy. In its written comments, DHS stated that CBP is constantly assessing the overall program as it unfolds, and adjusting it to reflect progress, resource constraints, refinements and changes in requirements, and insight gained from ongoing system engineering activities. DHS also stated that CBP recognizes the risk inherent in concurrency and has plans to address this risk. DHS did not agree with our recommendation to modify the SBInet integration contract to include a maximum quantity or dollar value. According to DHS, the quantity stated in the contract, "6,000 miles of secure U.S. border," is measurable and is therefore the most appropriate approach to defining the contract ceiling. We do not agree. In order to ensure that the SBInet contract is consistent with the FAR, we continue to believe that it should be modified to include a maximum quantity, either units or a dollar value, rather than the total number of miles to be secured. This concludes our prepared testimony. We would be happy to respond to any questions that members of the subcommittee may have. For questions regarding this testimony, please call Richard M. Stana at (202) 512-8816 or [email protected] or Randolph C. Hite at (202) 512-3439 or [email protected]. Other key contributors to this statement were William T. Woods, Director; Robert E. White, Assistant Director; Deborah Davis, Assistant Director; Richard Hung, Assistant Director; E. Jeanette Espínola; Frances Cook; Katherine Davis; Gary Delaney; Joseph K. Keener; Sandra Kerr; Raul Quintero; and Sushmita Srikanth. The SBInet December 2006 expenditure plan, including related documentation and program officials' statements, satisfied four legislative conditions, partially satisfied four legislative conditions, and did not satisfy one legislative condition. 
The nine legislative conditions and the level of satisfaction are summarized in the table below.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
This testimony summarizes GAO's February 2007 report on SBInet, one element of the Department of Homeland Security's (DHS) Secure Border Initiative (SBI). SBInet is responsible for developing a comprehensive border protection system. By legislative mandate, GAO reviewed SBInet's fiscal year 2007 expenditure plan. This testimony focuses on (1) the extent that the plan provided explicit and measurable commitments relative to schedule and costs, (2) how DHS is following federal acquisition regulations and management best practices, and (3) concurrency in SBInet's schedule. GAO assessed the plan against federal guidelines and industry standards and interviewed program officials. SBInet's December 2006 expenditure plan offered a high-level and partial outline of a large and complex program that forms an integral component of the broader multiyear initiative. However, the SBInet expenditure plan, including related documentation and program officials' statements, lacked specificity on such things as planned activities and milestones, anticipated costs and staffing levels, and expected mission outcomes. This, coupled with the large cost and ambitious time frames, adds risk to the program. Without sufficient and reliable information on program goals, status, and results, Congress and DHS are not in the best position to use the plan as a basis for assessing program outcomes, accounting for the use of current and future appropriations, and holding program managers accountable for achieving effective control of the border. As of December 2006, SBInet was using, at least to some extent, acquisition best practices, but DHS had not fully established the range of capabilities needed to effectively mitigate risks and to successfully manage the program. To its credit, the SBInet contract was generally competed in accordance with federal requirements. 
However, the SBInet contract does not fully satisfy the federal regulatory requirement to specify a maximum dollar value or the number of units that may be ordered. We also reported that important management controls provided for in Office of Management and Budget (OMB) guidance and best practices were not yet in place, although the program manager stated that he was committed to doing so. Until they are in place, the program is at increased risk of failure. DHS's plan to execute SBInet activities through a series of concurrent task orders introduces additional risk. With multiple related and dependent projects being undertaken simultaneously, SBInet is exposed to possible cost and schedule overruns and performance problems. Without assessing this level of concurrency and how it affects project implementation, SBInet runs the risk of not delivering promised capabilities and benefits on time and within budget. SBI and SBInet officials told us that they understand the risks inherent in concurrency and are addressing these risks. However, as of December 2006, they had not provided evidence that identified the dependencies among their concurrent activities and that they were proactively managing the associated risk.
|
Network 9 (Nashville) is composed of a network office in Nashville, Tennessee; six medical centers located in three states; and 27 community-based outpatient clinics. In fiscal year 2002, about 1 million veterans lived in the area served by the network. In that year, the six medical centers in the network treated about 208,000 patients, or 20 percent of the veterans who lived in the area served by the network. (See table 1.) The largest medical center in the network is TVHS, which has two main locations—one in Nashville and the other in Murfreesboro, Tennessee. TVHS served more than twice as many patients and had more than three times the number of employees as the smallest medical center in the network in fiscal year 2002. For more detailed information on staff resources at TVHS's two locations, which were integrated to form TVHS in fiscal year 2001, see appendix II. Network 9 (Nashville) has received increased allocations each year under VERA to provide resources for medical centers to treat their growing patient workload. From fiscal year 1997 to fiscal year 2002, the number of patients medical centers in the network treated increased by 27 percent. To meet patient health care needs, the network received $700 million in resources from VERA in fiscal year 1997, and by fiscal year 2002 the network's allocations from VERA had risen to $849 million—a 21 percent increase. The network has been responsible for developing a method to allocate these VERA resources to its medical centers. VA headquarters provides general guidance to networks on the principles they should use when developing their allocation methodologies, but does not require that networks use patient workload or case mix in their allocation methodologies. Fixed-capitation amounts for patient workload and case mix are guiding principles recognized by experts on the design of health care payment systems and implemented in practice by major health care programs such as Medicare and Medicaid. 
Medicare and Medicaid, for example, use fixed-capitation amounts to provide managed care plans with an incentive to operate efficiently by placing them at risk if their expenses exceed the payment amount. Our report on VERA in February 2002 also concluded that VERA provides a reasonable approach to resource allocation, in part because VERA allocates resources to the networks based primarily on the use of fixed-capitation amounts for patient workload and case mix. VERA provides fixed-capitation amounts for each case-mix category that are the same for each network and are intended to reflect VA’s average costs instead of historical local costs. In addition to resources that VA allocates to its medical centers from the network and headquarters, medical centers also collect other resources that they use in providing health care to veterans. VA medical centers collect third-party insurance payments and copayments from veterans. VA collects insurance payments for treatment of veterans’ conditions that are not a result of injuries or illnesses incurred or aggravated during military service. In addition, some veterans are charged copayments for certain health care services and prescription drugs obtained at a VA pharmacy. VA medical centers also collect resources for a variety of services VA provides to non-VA health care providers such as hospital laundry services and outpatient care provided to Department of Defense active duty military personnel. The six medical centers in Network 9 (Nashville) received about $1 billion in fiscal year 2002 from three sources: the network, VA headquarters, and resources from collections. (See table 2.) The network allocated the largest share of this total—83 percent or about $825 million of the total resources received by the six medical centers. VA headquarters allocated directly to the medical centers the next largest share, which was about 9 percent or $93 million of the total resources the network’s medical centers received. 
Finally, the six medical centers also collected about 7 percent of the total resources medical centers received, or $73 million, from collections of third-party insurance payments, veteran copayments, and reimbursements, primarily for services provided to non-VA health care providers. The total amount of resources that each medical center received from the network, VA headquarters, and collections in fiscal year 2002 ranged from about $93 million for Huntington to about $291 million for TVHS. The network provided the largest portion of each medical center's total resources in fiscal year 2002. Network allocations as a percentage of total medical center resources ranged from 82 percent at TVHS and two other medical centers to 86 percent at Mountain Home. TVHS and Lexington received the highest percentage of resources directly from VA headquarters (11 percent), and TVHS and Memphis received the lowest percentage of resources from collections (6 percent). The percentage of resources that medical centers in the network received in fiscal year 2002 from the three sources varied because of several factors. For instance, TVHS received a lower percentage of its resources from the network than three other medical centers, in part, because it received a larger percentage of its resources from VA headquarters than most medical centers in the network. The larger allocation from VA headquarters was used, in part, for the TVHS transplant program, the only one of its kind in the network. Louisville also received a lower percentage of its resources from the network than three other medical centers, in part, because the medical center received a higher percentage of its total resources from collections than any other network medical center. This resulted from agreements the medical center had—and resources it collected—for the delivery of outpatient and family practice care to active duty military personnel and their dependents at Ft. Knox, Kentucky. 
Medical centers in the network have relied on the network to provide most of their resources since VA changed its resource allocation system in fiscal year 1997. From fiscal year 1997 through fiscal year 2003, Network 9 (Nashville) allocated more than 80 percent of medical center resources each year. We estimate that on average the network provided 87 percent of the resources medical centers received during this period. Medical centers in Network 9 (Nashville) received most of their resources in fiscal year 2002 based on allocations using fixed-capitation amounts for patient workload and case mix. A large portion of the resources allocated on the basis of fixed-capitation amounts for patient workload and case mix came from the network and a smaller portion came from VA headquarters. The other resources that medical centers received in fiscal year 2002 were based on a variety of other factors such as network managers’ determination of the financial needs of medical centers during the course of the year. These resources came from the network, VA headquarters, and collections. Since VA changed its resource allocation system in fiscal year 1997, medical centers in Network 9 (Nashville) received about three-quarters of their resources based on fixed-capitation amounts and about one-quarter based on other factors each year from fiscal years 1997 through 2003. Medical centers received about 77 percent of their approximately $1 billion in total resources in fiscal year 2002—or $760 million—based on allocations using fixed-capitation amounts for patient workload and case mix. (See fig. 1.) The $760 million allocated on the basis of fixed-capitation amounts for patient workload and case mix came primarily from the network. The network allocated $742 million to medical centers on this basis. 
VA headquarters allocated the remainder of the resources based on fixed-capitation amounts for patient workload and case mix— $19 million—directly to medical centers in Network 9 (Nashville). The portion of medical center resources based on fixed-capitation amounts for patient workload and case mix was similar in other years. For each of fiscal years 1997 through 2003, we estimated that medical centers received about three-quarters of their resources based on fixed-capitation amounts for patient workload and case mix. The network allocated the largest portion of medical centers’ resources— $742 million—based on fixed-capitation amounts for patient workload and case mix in fiscal year 2002. To calculate its patient workload, the network, like VERA, used two methods. The network calculated the number of patients who received a relatively limited amount of health care during a previous 3-year period, and calculated the number of patients who received relatively more care during a previous 5-year period. In its workload calculation for this 3-year period, the network’s resource allocation methodology, like VERA, excluded a group of veterans, known as Priority 7 veterans, but included them in its 5-year workload calculation. The network made an exception in the way it calculated 3-year workload for a one-time $5 million allocation, its share of a supplemental appropriation VA received in fiscal year 2002. For this allocation the network included all Priority 7 veterans in its workload calculation. To calculate case mix in fiscal year 2002, the network classified patient workload into different categories, depending upon estimates of the patients’ health care needs and associated costs for treating them. The network, like VERA, used three case-mix categories: basic non-vested, basic vested, and complex. Basic non-vested and basic-vested categories included patients who have relatively routine health care needs and are principally cared for in an outpatient setting. 
Basic non-vested patients receive only part of their care through VA and are less costly to VA than basic-vested patients. Basic-vested patients, by contrast, rely primarily on VA for meeting their health care needs. Patients in the basic non-vested and basic-vested categories represented about 97 percent of the network's patient workload in that year. The complex category included patients who generally required significant high-cost inpatient care as an integral part of their rehabilitation or functional maintenance, and represented about 3 percent of the network's workload in that year. For patients in each case-mix category, the network paid medical centers a capitation rate based on the average cost of care in VA for a patient in that category. The capitation rates that the network used for each of these categories were the same as those used in VERA: basic non-vested ($197), basic vested ($3,121), and complex ($41,667). The network also allocated about $9 million to medical centers based on other patient case-mix categories. Medical centers in Network 9 (Nashville) with larger patient workloads generally received more resources than medical centers with smaller patient workloads. In fiscal year 2002, for example, TVHS had the largest patient workload and received the most resources. However, if two medical centers had similar patient workloads but differed in the case mix of their patients, one may have received more resources than the other. For example, Mountain Home and Huntington medical centers had almost identical patient workloads in fiscal year 2002, but Mountain Home received a larger allocation from the network ($119 million) than Huntington ($78 million), in part, because of an important difference in their respective patients' case mix. Mountain Home had more patients whose health care needs required more expensive care, as indicated by the number of complex care patients. 
In that year, Mountain Home had almost 1,200 complex patients compared to 400 complex patients in Huntington. VA headquarters allocated the remainder of resources that medical centers received based on fixed-capitation amounts for patient workload and case mix in fiscal year 2002, which was about $19 million. The largest resource allocation VA headquarters made to medical centers in Network 9 (Nashville) on this basis—$13 million—was to pay a portion of the costs for veterans receiving care in state veterans' nursing homes, which are operated in several locations in Network 9 (Nashville), including Murfreesboro, Tennessee, and Hazard, Kentucky. VA paid the same amount for veterans receiving this service, about $53 per day per veteran, without adjusting for differences in veterans' health care needs. The second largest resource allocation VA headquarters made to medical centers in Network 9 (Nashville) based on fixed-capitation amounts for patient workload and case mix in fiscal year 2002 was about $5 million for its transplant program. VA headquarters allocated these resources based on the number of patients needing transplants and the type of transplant needed: kidney, liver, heart, and bone marrow transplants. The capitation amounts for transplants ranged from $50,000 to $138,000 in fiscal year 2002. TVHS received all the VA headquarters transplant resource allocation in Network 9 (Nashville) because it is the only medical center in the network performing transplants. VA also allocated about $1 million to medical centers through a per diem rate per veteran to support housing programs for homeless veterans operated by nonprofit community-based organizations. Network 9 (Nashville) changed how it determined patient workload in fiscal year 2003 to allocate resources to its medical centers. For that year, the network calculated patient workload based on a 1-year period—or the total number of patients who used network medical centers in fiscal year 2002. 
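The fixed-capitation arithmetic behind these allocations can be sketched directly from the fiscal year 2002 rates reported above. In this illustration only the capitation rates and the complex patient counts (almost 1,200 at Mountain Home versus 400 at Huntington) come from the text; the basic-vested counts are hypothetical stand-ins for two centers with similar workloads:

```python
# Sketch of the fixed-capitation allocation described above. The rates are
# the fiscal year 2002 figures reported in the text; the basic-vested
# patient counts are hypothetical, while the complex counts (1,200 vs. 400)
# come from the Mountain Home / Huntington comparison.

RATES = {"basic_nonvested": 197, "basic_vested": 3121, "complex": 41667}

def capitation_allocation(workload):
    """workload: dict mapping case-mix category -> patient count; returns $."""
    return sum(RATES[cat] * n for cat, n in workload.items())

mountain_home = {"basic_vested": 20000, "complex": 1200}  # vested count hypothetical
huntington = {"basic_vested": 20000, "complex": 400}      # vested count hypothetical

# With identical basic-vested workloads, the 800 extra complex patients
# alone account for a large difference between the two allocations.
diff = capitation_allocation(mountain_home) - capitation_allocation(huntington)
print(diff)  # 800 * 41667 = 33333600
```

This is why two centers with "almost identical patient workloads" can receive very different allocations: each complex patient carries a rate more than thirteen times the basic-vested rate.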
In addition, the network included all veterans, including Priority 7 and 8 veterans, in its patient workload. According to a network official, the network made these changes in determining patient workload to better account for the costs involved in treating its patients. By contrast, in fiscal years 1997 through 2002, the network determined workload based on the same measures that VERA used by calculating the number of patients who received a relatively limited amount of health care during a previous 3-year period, and calculating the number of patients who received relatively more care during a previous 5-year period. And like VERA, the network also generally excluded Priority 7 veterans from its 3-year workload calculation but included them in its 5-year calculation from fiscal years 1997 through 2002. Network 9 (Nashville) also changed the way it calculated its case mix for allocating resources to medical centers several times during this period. In fiscal years 1997 and 1998, the network used the same 2 case-mix categories that VERA used—basic and special. In fiscal year 1999, the network did not use the 3 case-mix categories that VERA converted to in that year but instead used the 44 classes that VA used to construct VERA’s 3 case-mix categories. In fiscal years 2000 through 2002, the network used the 3 case-mix categories that VERA used: basic non-vested, basic vested, and complex care. In fiscal year 2003, the network made a significant change by increasing the number of case-mix categories from 3 used in fiscal year 2002 to 644 case-mix categories. The fiscal year 2003 case-mix approach classified the health care needs of hospital inpatients into the 511 diagnostic related groups (DRGs) used by Medicare to pay hospitals for inpatient care. For outpatient care, the approach used 121 different categories to classify the type of visit and account for the amount of resources the visit consumed. 
Additionally, the network used 12 different categories to measure the intensity of care in long-term care settings. According to a network official, these changes were made to better account for medical centers’ cost for treating patients. The Network 9 (Nashville) decision to use more case-mix categories in fiscal year 2003 is consistent with a recommendation we made to VA in February 2002 to improve VERA’s allocation of comparable resources for comparable workloads among networks. In that report, we recommended that VA adopt more case-mix categories to better account for differences in patient health care needs and that VA make other improvements. We also pointed out that the literature and experts we consulted suggested that a large increase in the number of case-mix categories—such as the increase in the number of Network 9 (Nashville) case-mix categories from 3 to 644 in fiscal year 2003—has advantages and disadvantages. Specifically, using more case-mix categories can increase the accuracy of health care resource allocations whether at the network or medical center level, but may also provide more opportunities to classify patients inappropriately to receive the highest capitation amounts. Medical centers in Network 9 (Nashville) received about 23 percent of their total resources, or $232 million, in fiscal year 2002 based on a variety of factors other than fixed-capitation amounts for patient workload and case mix. (See fig. 2.) These resources came from three sources: Network 9 (Nashville), VA headquarters, and collections in the amounts of $84 million, $75 million, and $73 million, respectively. In fiscal year 2002, Network 9 (Nashville) used a variety of factors to allocate $84 million to its medical centers. Using these factors, the network allocated $36 million for education and research support, $33 million for the network reserves, $14 million for equipment and nonrecurring maintenance, and $1 million for other purposes. 
To allocate $36 million in resources for education and research support, Network 9 (Nashville) used two methods. For education, the network allocated $22 million in resources to medical centers based on the number of residents at each medical center in the current academic year, the same approach that VERA used that year. For research support, the network allocated $14 million in resources to medical centers based primarily on the amount of funded research in fiscal year 2000, like VERA. From the network's reserve fund, network management allocated about $33 million in fiscal year 2002 based on the financial needs of medical centers. The network reserve fund was intended to provide resources for unexpected contingencies and cover unmet expenses that medical centers have during the course of a year. VA headquarters requires that all networks have such a fund, which is similar in concept to VERA's reserve fund. Network officials told us that while they encourage efficient operations, some medical centers have higher costs in certain areas, and if these medical centers are unable to lower their costs, the network allocates funds from the reserve to help them cover unmet expenses during the year. In fiscal year 2002, the network allocated reserve funds to medical centers for these purposes and distributed about half of the reserve fund to the Lexington medical center because of its higher than average pharmacy, radiology, and laboratory expenses. Table 3 shows how the network distributed the network reserve to its six medical centers in fiscal year 2002. The network allocated about $14 million for equipment and nonrecurring maintenance in fiscal year 2002 based on priorities established by the chief engineers from each medical center and the network's Executive Leadership Council (ELC). 
These groups prioritized a list of projects submitted by each medical center, and the network allocated resources for projects according to these priorities. VERA, by contrast, allocated its equipment and nonrecurring maintenance resources to all networks that year based primarily on fixed-capitation amounts for patient workload. Two other factors accounted for a small portion of the resources medical centers received, approximately $1 million. The network used these factors to control the amount of change in a medical center's total network allocation from the prior year and to account for differences in local costs. In fiscal year 2002, the network capped the net change in each medical center's network allocation at a maximum of an 8 percent increase or decrease from its fiscal year 2001 allocation. The caps were designed to prevent year-to-year fluctuations beyond management's ability to prudently manage services. In addition, the network adjusted the amounts allocated to some medical centers relative to others to account for local price differences. These differences resulted primarily from variations in federal employee pay rates at the various medical centers in the network. VA headquarters directly allocated $75 million to medical centers for special programs, such as prosthetics, stipends for medical residents and other trainees, and other programs, based on a variety of other factors. In fiscal year 2002, VA allocated $34 million for prosthetics directly to medical centers based largely on medical centers' historical expenditures for prosthetics, including items such as hearing aids, wheelchairs, and artificial limbs. VA headquarters also allocated $25 million that year to medical centers in the network to fund stipends for medical residents and other trainees based on the type and number of medical residents at each medical center. 
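The 8 percent cap described above is a simple clamp on year-to-year change. A minimal sketch of that clamping logic follows; the dollar figures are hypothetical and the function is our illustration, not VA code:

```python
def cap_allocation(current, prior, cap=0.08):
    """Clamp a medical center's network allocation so it changes by at
    most `cap` (8 percent) from the prior year's allocation."""
    lower = prior * (1 - cap)
    upper = prior * (1 + cap)
    return max(lower, min(upper, current))

# Hypothetical examples (dollars in millions):
print(cap_allocation(120.0, 100.0))  # a 20 percent increase is capped at 8 percent
print(cap_allocation(95.0, 100.0))   # a 5 percent decrease is within the cap
```

A center whose uncapped allocation would swing more than 8 percent in either direction is held at the boundary; anything inside the band passes through unchanged.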
VA headquarters allocated about $16 million for other programs, including readjustment counseling, substance abuse treatment, and post-traumatic stress disorder (PTSD) treatment, based on a variety of other factors. Medical centers in Network 9 (Nashville) collected $73 million in resources from third-party insurance payments, copayments, and reimbursements for services provided to non-VA entities in fiscal year 2002. Medical centers in the network collected about $67 million of this amount from third-party insurance and copayments paid by veterans. Medical centers in the network also collected about $6 million in resources through reimbursements for health care services provided to non-VA entities, such as private hospitals, the Department of Defense (DOD), and DOD's civilian health care contractors, in fiscal year 2002. Each medical center retained the resources it collected and had the flexibility to use these resources for any health care purpose. The amounts collected varied depending upon the priority status of veterans treated, whether their treatment was required for a service-connected condition, whether the veteran had health insurance, and other factors. Expenditures made by the network office increased from $1 million in fiscal year 1997 to $23 million in fiscal year 2002. The two primary contributors to the $22 million increase were information technology and staffing expenditures. Information technology expenditures accounted for the largest increase in expenditures made by the network office. This increase occurred, in part, because the network assumed the cost of contracts for software licenses and information technology services for which medical centers had once been responsible, according to network officials. Instead of having each medical center contract for information technology services individually, the network took responsibility for these contracts to consolidate them and negotiate lower costs. 
In fiscal year 2002, computer contracts, software licensing, and other information technology expenditures represented $9.6 million, or approximately 41 percent, of total network office expenditures. (See table 4.) Staff expenditures accounted for the second largest increase in expenditures made by the network office, reaching $8 million by fiscal year 2002. Most of the increase in network office staff resulted from growth in Mid South Customer Accounts Center (MCAC) staffing. (See table 5.) This growth occurred because the network consolidated staff positions formerly located at medical centers for medical insurance collections and claims processing at a central location and also added staff for this purpose. To establish this operation in fiscal year 1998, the network transferred 57 positions from the medical centers to MCAC. By fiscal year 2002, the network had added another 30 MCAC staff positions. MCAC staff expenditures in fiscal year 2002 were about $5 million. The MCAC operation is based at TVHS's Murfreesboro location. Network officials told us they consolidated this operation to increase efficiency and improve oversight of collections and claims processing. From fiscal years 1997 through 2002, collections for third-party insurance payments and copayments increased from $28 million to about $67 million. Staff expenditures by the network office also increased because of growth in positions mandated by VA headquarters and additional staff positions that network management said would improve operations. These staff positions accounted for about $3 million in staff expenditures in fiscal year 2002. The network office added 5 positions from fiscal years 1997 through 2002 that were mandated by VA headquarters for all network offices to improve operations VA-wide. These staff positions included a patient safety officer and a compliance officer. 
In addition, the network created 12 other network staff positions from fiscal years 1997 to 2002 that management expected to improve operations. For example, the network created a pharmacy benefits manager position to manage the network's pharmaceutical budget, which, according to network officials, has slowed the growth of pharmaceutical costs for the entire network, and a Decision Support System (DSS) manager position to oversee DSS activities. For a detailed description of all network office staff positions and their responsibilities from fiscal years 1997 to 2002, see appendix III. In commenting on a draft of this report, VA agreed with our findings. VA provided technical comments, which we incorporated as appropriate. VA's written comments are in appendix IV. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. We will then send copies of this report to the Secretary of Veterans Affairs, interested congressional committees, and other parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7101. Another contact and key contributors are listed in appendix V. We reviewed Network 9 (Nashville) allocations to its medical centers for fiscal year 2002 to determine: (1) the amount of resources medical centers in the network received and the source of those resources, (2) the basis on which medical centers in the network received these resources, and (3) the extent to which network office expenditures were greater than in fiscal year 1997 and the primary reasons accounting for any increase. 
To place this information in context, we supplemented our findings for fiscal year 2002, the most recent year for which complete data were available at the time of our analysis, with information for fiscal years 1997 through 2003. We limited our review to how resources were allocated to medical centers in Network 9 (Nashville) and did not analyze how they spent their allocations to deliver health care. To determine the amount of resources medical centers in Network 9 (Nashville) received in fiscal year 2002 and the source of those resources, we obtained financial data from the Office of the Chief Financial Officer within the Veterans Health Administration and from the Network 9 (Nashville) office. We categorized transactions in financial reports, referred to as medical center allotment reports, by the source of the resources: (1) Network 9 (Nashville), (2) VA headquarters, and (3) resources from collections. We identified transactions and summed the amount provided from each of the sources based on analysis of the medical centers' allotment reports and interviews with VA headquarters and network officials. As part of resources allocated by the network, we also included the amount each medical center received in fiscal year 2002 from the network's share of a supplemental appropriation that VA received, as well as the resources allocated for each medical center's costs for Consolidated Mail Outpatient Pharmacy (CMOP) mail prescription services to veterans. In fiscal year 2002, medical centers in Network 9 (Nashville) had additional resources that they carried over from the prior fiscal year because they were authorized to use certain resources for longer than 12 months. We did not include the $25 million the medical centers carried over into fiscal year 2002 because the network had allocated these resources in the prior year. 
Information on resources allocated to all medical centers was available in medical center allotment reports, except for the Tennessee Valley Healthcare System (TVHS), because TVHS's allotment report also included resources allocated to the network office. Determining the amount of resources allocated to TVHS in fiscal year 2002 therefore required additional analysis. Each network medical center was identified in the VA allocation system with a unique three-digit station number; however, TVHS and the network office shared the same station number, and as a result, the VA allocation system combined their allotment data. To separate the TVHS and network office transactions, we obtained the fiscal year 2002 network office financial transfer report from TVHS. Using this report, we separated each transaction on the combined network/TVHS allotment report, which allowed us to construct an allotment report for TVHS. We also obtained an internal allotment ledger from TVHS and network officials that documented fund transfers between the two, which were transacted outside the VA allotment system. Using our TVHS allotment report and the TVHS/network internal allotment ledger, we determined the amounts TVHS received through each funding source by applying calculations similar to those used for the other medical centers. Financial information was not available separately for TVHS's Nashville and Murfreesboro locations after fiscal year 2000. However, information on staffing resources at these two locations was available after that year. See appendix II for our analysis of staffing information at the two locations. We estimated the percent of total medical center resources received from Network 9 (Nashville) for fiscal years 1997 through 2001 and 2003 to supplement our findings for fiscal year 2002. To develop these estimates, we used VA headquarters and network office data. 
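The separation of the combined TVHS/network office allotment described above amounts to partitioning one ledger using a second report that identifies the network office's transactions. A minimal sketch with hypothetical transactions and identifiers (not VA's actual report formats):

```python
# Combined network/TVHS allotment report: (transaction id, amount in dollars).
# All ids and amounts below are hypothetical.
combined_report = [
    ("T001", 5_000_000),
    ("N001", 1_200_000),
    ("T002", 3_500_000),
    ("N002", 800_000),
]

# Transaction ids appearing on the network office financial transfer report.
network_office_ids = {"N001", "N002"}

# Partition the combined report, reconstructing a TVHS-only allotment report.
tvhs_report = [t for t in combined_report if t[0] not in network_office_ids]
network_report = [t for t in combined_report if t[0] in network_office_ids]

tvhs_total = sum(amount for _, amount in tvhs_report)
network_total = sum(amount for _, amount in network_report)
print(tvhs_total, network_total)  # 8500000 2000000
```

Transfers recorded only on the internal allotment ledger, outside the allotment system, would then be applied to these subtotals in the same way.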
To determine the amount of resources the medical centers received from the network, we used VA information on the VERA allocations to Network 9 (Nashville) and network data on network office expenditures for these fiscal years. To estimate the total amount of resources the medical centers received through VA direct allocations in fiscal years 1997 through 2001 and in fiscal year 2003, we assumed it was the same percentage as in fiscal year 2002, when medical centers in the network received 3 percent of all funds VA headquarters allocated directly to all VA medical centers nationwide. To determine the amount that medical centers received through revenue collections in these years, we relied on VA data. To obtain information on the basis on which the medical centers received resources, we interviewed network officials, including the director and the chief financial officer, as well as TVHS officials. In addition, we obtained and analyzed documents that described the network's allocation methodology and relied on our prior work on VERA. To determine the basis on which VA headquarters allocated resources directly to medical centers in the network, we interviewed officials in the Office of the Chief Financial Officer within the Veterans Health Administration. To determine how insurance collections and copayments as well as other resources were incorporated in allocations, we interviewed network officials, including the director of the Mid South Customer Accounts Center (MCAC). Based on our analysis of information we obtained from the network and VA headquarters, we first calculated the percentage of resources allocated on the basis of fixed-capitation amounts for patient workload and case mix in fiscal year 2002. We then subtracted this amount from the total resources medical centers received in fiscal year 2002 to determine the amount they received based on other factors. 
We estimated the percentage of total resources received by all medical centers combined based on fixed-capitation amounts for patient workload and case mix for fiscal years 1997 through 2001 and 2003. To determine the total amount of resources allocated to the medical centers by the network based on fixed-capitation amounts, we used VA headquarters data on the amount of VERA allocations to Network 9 (Nashville) each year during this period. We then subtracted expenditures made by the network office, using data provided by the network. From this total, we also subtracted resources allocated to medical centers on bases other than patient workload and case mix. We obtained data on these allocations from VA headquarters, except for allocations from the network reserve fund. We estimated network reserve funds for fiscal years 1997 through 2001 and 2003 by assuming that these funds represented 4 percent of all resources allocated to the network by VERA, as in fiscal year 2002. To estimate the total resources medical centers in the network received directly from VA headquarters during this period, we assumed it was the same percentage as in fiscal year 2002, when medical centers in the network received 3 percent of all funds VA headquarters allocated directly to all VA medical centers nationwide. We estimated the portion of these direct VA allocations to medical centers in the network that was based on fixed-capitation amounts for patient workload and case mix by assuming that during this period the portion was the same as in fiscal year 2002, when such resources amounted to 20 percent of VA headquarters' direct allocations to the network. To determine the amount of resources collected by each medical center in the network during this period, we used information provided by the network and VA headquarters. 
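The estimation steps above reduce to straightforward arithmetic. The sketch below walks through them with hypothetical dollar inputs; the percentages (a reserve of 4 percent of VERA, 3 percent of nationwide direct allocations, 20 percent of direct allocations being capitation-based) are the assumptions stated in the text:

```python
def estimate_capitation_share(vera_alloc, network_office_exp,
                              other_noncap_alloc, va_direct_nationwide,
                              collections):
    """Estimate the share of medical center resources allocated on the
    basis of fixed-capitation amounts, following the steps in the text.
    All dollar inputs are hypothetical."""
    reserve = 0.04 * vera_alloc                 # assumed 4 percent of VERA
    network_capitation = (vera_alloc - network_office_exp
                          - reserve - other_noncap_alloc)
    va_direct = 0.03 * va_direct_nationwide     # network's assumed 3 percent share
    direct_capitation = 0.20 * va_direct        # assumed capitation-based portion
    total = (vera_alloc - network_office_exp) + va_direct + collections
    return (network_capitation + direct_capitation) / total

# Hypothetical inputs (dollars): $900M VERA allocation, $23M network office
# expenditures, $50M other non-capitation allocations, $2.5B nationwide
# direct allocations, $67M collections.
share = estimate_capitation_share(900e6, 23e6, 50e6, 2500e6, 67e6)
print(round(share, 2))  # roughly 0.79
```

With these made-up inputs the estimate lands near the roughly 77 percent capitation-based share the report found for fiscal year 2002, but the point is the sequence of subtractions and assumed percentages, not the numbers.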
To determine the extent to which network office expenditures were greater in fiscal year 2002 than in fiscal year 1997 and the primary reasons accounting for any increase, we analyzed reports on network office expenditures. Specifically, we analyzed expenditures made by the network office for fiscal year 2002 that were set aside from resources that the medical centers received. We also reviewed network office expenditures for information technology, staffing, and other functions for fiscal years 1997 through 2002. We interviewed network officials to obtain the number of staff and their job titles and responsibilities from fiscal years 1997 through 2002. We interviewed the MCAC manager regarding the number of collections staff since fiscal year 1998, when the MCAC was created. We also contacted officials at VA headquarters to verify which staff positions were mandated by headquarters. As part of this analysis, we categorized staff into staff positions at MCAC and other network office staff positions, which included positions mandated by VA headquarters for all VA networks and positions that Network 9 (Nashville) management established to improve operations. We included positions at the MCAC as network office positions because their salaries were paid from the same account as other network office staff and they were supervised by an official who reported to the network director. Throughout our review, we examined the reliability of VA data and our use of those data. We discussed these data with VA headquarters and network officials to validate their accuracy. In addition, we discussed our methodology with VA headquarters and Network 9 (Nashville) staff, who agreed that our approach and our assumptions were reasonable. Furthermore, we tested the consistency of VA allocation data by systematically comparing various types of data we obtained from several VA sources. 
For example, we verified the amount and source of transactions on the medical center allotment reports through interviews with network and VA headquarters officials and by matching these transactions with other financial reports obtained from VA. To better understand all of these issues, we conducted a site visit to interview officials at the network office located in Nashville and at the TVHS locations in Nashville and Murfreesboro, Tennessee. We performed our review from March 2003 through April 2004 in accordance with generally accepted government auditing standards. VA combined the Nashville and Murfreesboro medical centers to create a single integrated medical center, the Tennessee Valley Healthcare System (TVHS), to improve veterans' health care and gain efficiencies. In fiscal year 2000, the TVHS integration was announced and the first TVHS director was hired. Separate financial resource information was available for the Nashville and Murfreesboro locations before fiscal year 2001. The accounting systems of the two locations were merged in fiscal year 2001, and since then, information has not been available on the financial resources allocated separately to the Nashville and Murfreesboro locations. However, information on staffing at each location was available for fiscal year 2002, and staff salaries and benefits constituted over half of TVHS's budget in that year. Overall staffing at each location has declined since the integration, but trends varied by type of staff, such as administrative and medical center support staff and patient care staff. From fiscal year 2000 to fiscal year 2002, the TVHS patient workload increased while patient care staff remained about constant. Also, 125 other VA staff worked at the Murfreesboro location in fiscal year 2002, in addition to the staff at TVHS. Information was not available on financial resources allocated separately to Nashville and Murfreesboro after fiscal year 2000. 
Beginning in fiscal year 2001, Network 9 (Nashville) did not allocate resources to Murfreesboro and Nashville separately because they were combined as a single medical center, TVHS. Moreover, TVHS did not allocate resources to each location. Instead, TVHS allocated resources to the programs it operated across the two locations. As a result, the accounting systems did not reflect allocations by location. Overall, the number of staff declined at Nashville and Murfreesboro from fiscal year 2000 to fiscal year 2002. However, the amount of change varied by type of staff. The number of staff at Nashville declined by 49, or about 4 percent, from fiscal year 2000 to fiscal year 2002. At Murfreesboro, the number of staff declined by 77, or about 7 percent, from fiscal year 2000 to fiscal year 2002. (See fig. 3.) Staffing trends varied by type of staff at both locations. Administrative and medical center support staff combined declined at both locations, while patient care staff remained about constant. Administrative and medical center support staff include administrative, clerical, and wage rate staff who do not perform patient care-related work, such as secretaries and maintenance staff. At Nashville, the number of administrative and medical center support staff combined declined by 52, or 11 percent, from fiscal year 2000 to fiscal year 2002. At Murfreesboro, the number of administrative and support staff combined declined by 65, or 14 percent, from fiscal year 2000 to fiscal year 2002. (See fig. 4.) The largest decreases in administrative and medical center support staff are shown in table 6. The largest declines were in administrative and clerical staff. Smaller declines occurred among wage rate employees, who are medical center support staff. There was very little change in patient care staff at both Nashville and Murfreesboro between fiscal year 2000 and fiscal year 2002. 
Patient care staff include those who provide direct hands-on care to patients, such as doctors and nurses, as well as staff who provide indirect care, such as pharmacists and laboratory technicians. The number of patient care staff at Nashville increased less than 0.5 percent from fiscal year 2000 to fiscal year 2002. The number of patient care staff at Murfreesboro decreased by almost 2 percent during the same time period. (See fig. 5.) The largest changes in patient care staff from fiscal year 2000 to fiscal year 2002 can be seen in table 7. The biggest increases were in nursing staff, and the biggest declines were in nursing aides and assistants. The number of TVHS patients increased while the number of patient care staff remained about constant from fiscal year 2000 to fiscal year 2002. The TVHS patient workload increased from fiscal year 2000 to fiscal year 2002 by 7 percent. The number of patient care staff decreased less than 1 percent during the same time period. (See table 8.) In addition to TVHS staff, 125 other VA staff worked at Murfreesboro in fiscal year 2002. These staff consisted of Network 9 (Nashville) staff and staff working at the Consolidated Mail Outpatient Pharmacy (CMOP), the Office of Resolution Management, and the Veterans Benefits Administration. Table 9 shows the numbers and types of VA staff, other than those who work for TVHS, who work at the Murfreesboro location. The 95 Network 9 (Nashville) staff consisted of 8 office staff whose offices were located at Murfreesboro and 87 staff of the Mid South Customer Accounts Center (MCAC), which is responsible for insurance billing and collections for the network. These 87 staff were formerly located at medical centers within the network but were consolidated at the Murfreesboro location to increase the efficiency of collections. The CMOP, which provides mail prescription services to veterans, had 28 VA staff in fiscal year 2002 (in addition to 155 contract staff). The CMOP at Murfreesboro is one of seven CMOPs across the country. 
VA's Office of Resolution Management had 2 staff located at Murfreesboro in fiscal year 2002 and provided Equal Employment Opportunity (EEO) complaint processing services to VA employees, applicants for employment, and former employees. Finally, the Veterans Benefits Administration had a part-time staff person providing vocational rehabilitation and employment counseling at Murfreesboro in fiscal year 2002. We obtained information on staffing resources available at VA's Nashville and Murfreesboro locations in fiscal year 2002 by interviewing Network 9 (Nashville) and TVHS officials. These officials told us that information on financial resources allocated separately to Nashville and Murfreesboro was not available beginning in fiscal year 2001, when these locations were combined as a single medical center, TVHS. However, information on staffing numbers and costs at each location was available, and staff salaries and benefits constituted over half of TVHS's fiscal year 2002 budget. Therefore, our scope was limited to a comparison of staffing numbers at each location in fiscal years 2000 and 2002. We obtained the number of staff positions and descriptions of each position at each location for fiscal years 2000 and 2002, reported by each staff member's duty station. The number of staff positions was reported as the number of full-time equivalent employees (FTEE). We analyzed the changes in staff positions between the 2 years by type of staff. We obtained workload data for TVHS for fiscal years 2000 and 2002 and compared them with the number of patient care staff during those years. In addition, we interviewed TVHS officials to determine the number of other VA staff working at the Murfreesboro location in addition to those working for TVHS. 
Table 10 provides a brief description of the responsibilities for Network 9 (Nashville) office staff and the number of office staff positions filled from fiscal years 1997 through 2002. The table includes staff positions at the Mid South Customer Accounts Center (MCAC), positions mandated by VA headquarters for all networks, and other staff positions Network 9 (Nashville) created. In addition to the contact named above, Cheryl A. Brand, Linda C. Diggs, Krister Friday, Donald W. Morrison, and Thomas A. Walke made key contributions to this report. VA Health Care: Access for Chattanooga-Area Veterans Needs Improvement. GAO-04-162. Washington, D.C.: January 30, 2004. VA Health Care: Changes Needed to Improve Resource Allocation. GAO-02-685T. Washington, D.C.: April 30, 2002. VA Health Care: Changes Needed to Improve Resource Allocation to Health Care Networks. GAO-02-744T. Washington, D.C.: May 14, 2002. VA Health Care: Allocation Changes Would Better Align Resources with Workload. GAO-02-338. Washington, D.C.: February 28, 2002. VA Health Care: More Veterans Are Being Served, but Better Oversight Is Needed. GAO/HEHS-98-226. Washington, D.C.: August 28, 1998. VA Health Care: Resource Allocation Has Improved, but Better Oversight Is Needed. GAO/HEHS-97-178. Washington, D.C.: September 17, 1997. Veterans' Health Care: Facilities' Resource Allocations Could Be More Equitable. GAO/HEHS-96-48. Washington, D.C.: February 7, 1996. VA Health Care: Resource Allocation Methodology Has Had Little Impact on Medical Centers' Budgets. GAO/HRD-89-93. Washington, D.C.: August 18, 1989. VA Health Care: Resource Allocation Methodology Should Improve VA's Financial Management. GAO/HRD-87-123BR. Washington, D.C.: August 31, 1987.
Since fiscal year 1997, the Department of Veterans Affairs (VA) has relied primarily on its 21 health care networks to allocate resources to its medical centers. VA headquarters also directly allocates some resources to the medical centers. In addition, medical centers collect resources from third-party insurance payments and other sources. VA provides general guidance to networks for resource allocation to medical centers, but permits variation in networks' allocation methodologies. Representatives from veterans groups and others have expressed concerns regarding resource allocations to medical centers in Network 9 (Nashville), known as the Mid South Healthcare Network. GAO was asked to report for fiscal year 2002 (1) the amount of resources medical centers in the network received and the source of those resources and (2) the basis on which medical centers in the network received these resources. GAO was also asked to supplement findings for fiscal year 2002 with information for fiscal years 1997 through 2003. The six medical centers in Network 9 (Nashville), known as the Mid South Healthcare Network, received a total of about $1 billion in resources in fiscal year 2002. The network allocated 83 percent of the total, or $825 million, to its medical centers. The medical centers received smaller amounts from VA headquarters (9 percent of the total, or about $93 million) and from collections (7 percent of the total, or about $73 million). As in fiscal year 2002, the network allocated more than 80 percent of medical center resources each year from fiscal year 1997 through fiscal year 2003. Medical centers in Network 9 (Nashville) received about 77 percent of their resources, or $760 million, in fiscal year 2002 based on fixed-per-patient amounts, referred to as fixed-capitation amounts, for patient workload and case mix. 
Patient workload is the number of patients treated, and case mix is a classification of patients into categories based on health care needs and related costs. The largest portion of the resources allocated on this basis came from the network, while a smaller portion came from VA headquarters. Medical centers in the network received about 23 percent of their total resources, or $232 million, in fiscal year 2002 based on a variety of other factors, such as network managers' determination of the financial needs of medical centers during the course of the year. These resources came from the network, VA headquarters, and collections. Since VA changed its resource allocation system in fiscal year 1997, the medical centers in the network received about the same portions of their resources based on fixed-capitation amounts and on a variety of other factors each year from fiscal years 1997 through 2003. VA agreed with GAO's findings.
In December 2010, the QDDR recommended the creation of the CT Bureau, to supersede the CT Office. State elevated the CT Office to the CT Bureau in January 2012. According to State, one reason for elevating the CT Office to a bureau was that the office’s responsibilities for counterterrorism strategy, policy, operations, and programs had grown far beyond the original coordinating mission. In the transition from CT Office to CT Bureau in 2012, some initial organizational changes occurred, such as a reduction from five to four Deputy Coordinators who oversee counterterrorism issue areas within the bureau as well as the creation of an executive office to provide management support to the bureau. The initial organizational changes also elevated the role of strategic planning and metrics and established a new policy and guidance unit. Our preliminary information shows that additional changes to the CT Bureau’s organizational structure occurred starting in 2014, after the current Ambassador was confirmed as the Coordinator for Counterterrorism in February 2014. According to bureau officials, the Ambassador initiated a strategic review of the bureau’s programs and what they were accomplishing to help form a clear picture of priorities, threats, and where the bureau’s efforts and funding should be directed. The strategic review, which was completed in November 2014, led to a reorganization of the bureau and a shift in overall focus to a regional or geographic approach. As a result of the strategic review, the portfolio of the CT Bureau’s Office of Programs has changed to reflect a more regional approach rather than an approach based on funding streams. According to CT Bureau officials, the shift is intended to encourage and facilitate cross-bureau discussions across the entire CT Bureau. Specifically, the portfolios of program officials have been broadened by requiring a cross-cutting look at programs across their assigned region. 
Figure 1 shows how the CT Office has evolved over the last two decades. Our preliminary information shows other changes to the bureau’s organizational structure stemming from the strategic review, such as the changes in names of directorates and offices, their portfolios, or both to better reflect the new strategic approach and priorities of the bureau. For example, the portfolio for the multilateral affairs office was shifted and combined with the portfolio for the regional affairs office. In addition, a new office and two new units were created: (1) the Office of Strategy, Plans, and Initiatives; (2) the Foreign Terrorist Fighters Unit; and (3) the Countering Violent Extremism Unit. Appendix I depicts the organizational structure of the CT Bureau, as of May 2015. The CT Bureau manages a range of programs and activities to assist partner nations around the world to combat terrorism, primarily through the following six programs: Antiterrorism Assistance: in partnership with the Bureau of Diplomatic Security as the primary implementer, provides U.S. government antiterrorism training and equipment to law enforcement agencies of partner nations. Countering Violent Extremism (CVE): entails programs and activities that work with partner nation civil society sectors and governments to undermine terrorist ideology and to address the underlying local grievances that drive at-risk individuals into violent extremism. Counterterrorism Engagement: entails programs and activities to build political will for counterterrorism at senior levels in partner nations. Counterterrorism Finance (CTF): entails programs and activities to build foreign partner capacity and to implement significant parts of the U.S. government’s strategy to cut off financial support to terrorists. 
Terrorist Interdiction Program: provides the immigration and border control authorities of partner nations with a computer database system that enables identification of suspected terrorists attempting to transit air, land, or sea ports of entry.

Regional Strategic Initiative: meets transnational terrorist threats with regional responses coordinated by each region's U.S. ambassadors in the field.

Our preliminary analysis shows that from fiscal years 2011 through 2014, the CT Bureau was allocated a cumulative total of $539.1 million for these six counterterrorism-related programs, as shown in figure 2. Most of these allocations are from the Nonproliferation, Antiterrorism, Demining, and Related Programs account, which funds all six programs. Allocations from the Economic Support Fund support those Countering Violent Extremism and Counterterrorism Engagement program activities that do not involve law enforcement entities. Our preliminary analysis shows that, in addition to the foreign assistance programming that the CT Bureau oversees and manages, the bureau's allocated resources include funding for the operations of the bureau. The CT Bureau receives funds from two sources to fund its core operations: the Diplomatic and Consular Programs and the Worldwide Security Programs accounts. Figure 3 shows our preliminary analysis of the bureau's total allocations for its overall operations since fiscal year 2012. These allocations increased from $11.7 million in fiscal year 2012 to $14.7 million in fiscal year 2013, as the bureau was being established. The allocations then decreased to $13.1 million in fiscal year 2014. Our preliminary analysis indicates that the CT Bureau's number of authorized full-time equivalent (FTE) positions has grown annually, and the bureau has recently undertaken efforts to reduce a persistent staffing gap. The bureau's number of FTEs grew from 66 in fiscal year 2011 to 96 in fiscal year 2015, an increase of more than 45 percent.
Figure 4 shows the number of FTEs within the bureau for fiscal years 2011 to 2015, along with the number of positions that were filled. While the bureau's current authorized level of FTEs for fiscal year 2015 is 96 positions, it had 22 vacancies as of October 31, 2014. Our preliminary analysis also shows that the percentage of vacancies in FTE positions in the bureau has ranged from 17 percent to 23 percent in fiscal years 2011 to 2015. According to the CT Bureau, these vacancies have included both staff-level and management positions. As of the end of May 2015, the number of FTE vacancies in the bureau had been reduced to 10 positions, most of which are in the Office of Programs, according to the CT Bureau. The CT Bureau utilized various means to assess its performance in fiscal years 2011 through 2014, including performance assessments and program evaluations. Our preliminary analysis indicates that the CT Bureau assessed its progress toward its foreign assistance-related goals but has not established time frames for addressing recommendations from program evaluations. Our preliminary analysis shows that the CT Bureau assessed its progress toward achieving its foreign assistance-related goals in fiscal years 2012 and 2013, as required by State policy. That policy requires bureaus to respond to an annual department-wide data call for foreign assistance-related performance information. Specifically, bureaus must identify indicators and targets for their foreign assistance-related goals, as defined in their multiyear strategic plans, and report results achieved toward each indicator for the prior fiscal year. As shown in table 1, the CT Bureau identified four foreign assistance-related goals in its first multiyear strategic plan, established quantitative indicators and corresponding targets for each of those goals, and reported results achieved for each indicator.
In addition to having assessed its progress toward achieving its foreign assistance-related goals, our preliminary analysis shows that since being elevated to a bureau in fiscal year 2012, the CT Bureau has completed four evaluations of counterterrorism-related programs it oversees. The number of completed evaluations meets the number of evaluations required by State’s February 2012 evaluation policy. As shown in table 2, the CT Bureau completed these evaluations during fiscal years 2013 and 2014 and focused primarily on evaluating programs providing training courses to law enforcement officials of partner nations, such as the Antiterrorism Assistance program in Morocco and Bangladesh. CT Bureau officials noted that, when deciding what programs to evaluate, the bureau took into consideration whether the evaluation would inform the priority programming and objectives of the bureau and produce results the bureau could use in future programming decisions and evaluation designs. To date, the CT Bureau has not evaluated the CVE program, which has been identified as a priority goal for the bureau. Our preliminary analysis indicates that the CT Bureau has not established time frames for addressing recommendations from program evaluations. The four program evaluations the CT Bureau completed during fiscal years 2013 and 2014 resulted in 60 recommendations; however, according to bureau officials, the bureau does not have a system for assigning time frames for the implementation of recommendations. The officials said program officers are assigned responsibility for following up on recommendations that impact their portfolio; however, the bureau does not have any policy or other guidance outlining the timing for addressing recommendations from evaluations. 
In response to questions during the course of our review, CT Bureau officials developed action plans to describe the status of efforts to address the 60 recommendations. On the basis of our review of these action plans, the CT Bureau reported having implemented about half of the recommendations (28 of 60) made in the evaluations, as of April 2015. The bureau had put on hold or decided not to implement 4 recommendations; the remaining 28 were still being considered or were in the process of being implemented, or the bureau had made a commitment to implement them. While the action plans are a positive first step to help the bureau monitor and track its progress in implementing recommendations, they do not address the need for the bureau to establish time frames for addressing recommendations from evaluations. Without specific time frames for completing actions in response to recommendations from evaluations, it may be difficult for the bureau to ensure that needed programmatic improvements are made in a timely manner or to hold its implementing partners accountable for doing so. Our preliminary analysis shows that activities between the CT Bureau and other bureaus within State, as well as with other U.S. government agencies, on counterterrorism programs, specifically the Countering Violent Extremism (CVE) and Counterterrorism Finance (CTF) programs, were generally consistent with key practices that GAO has identified for interagency collaboration (GAO-12-1022) in the areas of (1) outcomes and accountability, (2) bridging organizational cultures, (3) leadership, (4) clarity of roles and responsibilities, (5) resources, and (6) written guidance and agreements. Outcomes and accountability. Having defined outcomes and mechanisms to track progress can help shape a collaborative vision and goals. We identified defined outcomes for counterterrorism collaboration, such as specific requests across regional or functional bureaus or messages defining and assigning specific tasks.
We also identified accountability mechanisms to monitor, evaluate, and report on results or outcomes of counterterrorism programming. Bridging organizational cultures. Our preliminary analysis shows that, while terminology may differ when discussing CVE, some regional and functional bureau officials within State told us that they use a common definition for CVE and apply the CVE strategy and policy that the CT Bureau has developed for CVE programming. Similarly, some officials in other U.S. government agencies told us they agree on common terms and outcomes of counterterrorism programming as ideas are discussed between the CT Bureau and the implementing agency, if the bureau funds a program or grant. Our preliminary analysis also shows that there was frequent communication among collaborating agencies, including on CVE programs. Specifically, we found that the frequency of communication between the CT Bureau and other State bureaus, as well as other U.S. government agencies, varied depending on the project or activity and ranged from daily to monthly interactions. Leadership. Our preliminary analysis shows that for CVE and, to some extent, CTF, officials at State and other U.S. government agencies were generally aware of the agency or individual with leadership responsibility for the particular counterterrorism program. Officials in State's regional bureaus stated that they are generally aware of when the CT Bureau would have the lead on counterterrorism issues versus the regional bureaus. In addition, officials noted that they receive relevant and timely information on CVE-related programming from the bureau. For the CTF program, our preliminary analysis indicates that there was some uncertainty among officials as to whom they should be working with on CTF programming, due to the recent reorganization of the CT Bureau.
Clarity of roles and responsibilities. Our preliminary analysis shows that there was general clarity on the roles and responsibilities of the participants collaborating on CVE and CTF programs with the CT Bureau. For example, several State officials mentioned that for questions related to programs, such as CVE, they knew their point of contact in the CT Bureau and also what that person's portfolio encompassed. Resources. Our preliminary analysis indicates that, in cases where the CT Bureau funded U.S. government agencies on CVE or CTF programming, the funding mechanism was clear and laid out in the interagency agreements. Some agency officials told us that these agreements provide a standard process for providing funding from the CT Bureau to other agencies. Written guidance and agreements. Our preliminary analysis shows that many of the agencies we spoke with had formal interagency agreements with the CT Bureau on CVE- or CTF-related programming or activities. The agreements described, among other things, the service to be provided, the roles and responsibilities of each party, the method and frequency of performance reporting, and accounting information for funding of the service provided. We found that most of the State bureaus we spoke with that coordinate with the CT Bureau on CVE and CTF programs did not have written agreements laying out the terms of the collaboration, but several State officials said that formalized agreements were not necessary because collaboration between bureaus within State is routine and the CT Bureau has been effective in sharing information pertaining to the CVE program. Thank you again for the opportunity to assist with the oversight of State's Bureau of Counterterrorism. Chairman Poe, Ranking Member Keating, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.
For further information regarding this statement, please contact Charles Michael Johnson, Jr., Director, International Affairs and Trade, at (202) 512-7331 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Jason Bair, Assistant Director; Andrea Riba Miller, Analyst-in-Charge; and Esther Toledo. Technical support was provided by Ashley Alley, Mason Calhoun, Tina Cheng, David Dayton, Martin De Alteriis, and Sarah Veale. Appendix I: Organizational Chart of the Department of State Bureau of Counterterrorism, as of May 2015 This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Terrorism and violent extremism continue to pose a global threat, and combating these at home and abroad remains a top priority for the U.S. government. In 2010, the first Quadrennial Diplomacy and Development Review (QDDR), conducted at the direction of the Secretary of State, highlighted these global threats and, among other actions, recommended that State's Office of the Coordinator for Counterterrorism be elevated to bureau status. According to the 2010 QDDR report, the elevation of this office to a bureau would enhance State's ability to, among other things, counter violent extremism, build foreign partner capacity, and engage in counterterrorism diplomacy. In addition, the report stated that the bureau's new status would enable more effective coordination with other U.S. government agencies. On the basis of preliminary results of ongoing work that GAO is conducting for this subcommittee and other congressional requesters, this testimony provides observations on (1) how the bureau's staffing resources have changed since 2011, (2) the extent to which the bureau has assessed its performance since 2011, and (3) the extent to which the bureau's coordination with U.S. government entities on select programs is in line with key collaboration practices. To conduct this work, GAO reviewed and analyzed State and other U.S. government agency information and interviewed U.S. government officials in Washington, D.C. GAO expects to issue a final report on this work in July 2015, along with any related recommendations. GAO's preliminary analysis shows that the Department of State's (State) Bureau of Counterterrorism has had an annual increase in authorized full-time equivalent (FTE) positions since fiscal year 2011 and has recently undertaken efforts to reduce a persistent staffing gap. The number of FTEs for the bureau increased from 66 in fiscal year 2011 to 96 in fiscal year 2015, and over the same period the percentage of FTE vacancies ranged from 17 to 23 percent. 
The vacancies have included both staff-level and management positions. During GAO's ongoing work, the bureau indicated that the gaps between authorized and filled positions were due to several factors. These included an increase in FTEs that the bureau was authorized when it was established and postponement of some staffing decisions until the Coordinator for Counterterrorism, who assumed her position in 2014, had sufficient time to assess the bureau's needs and priorities. The bureau has recently made progress in filling vacant positions and reported having 10 FTE vacancies as of the end of May 2015. GAO's preliminary analysis has found that the bureau assessed its progress toward achieving its foreign assistance-related goals but has not established time frames for addressing recommendations from program evaluations. Specifically, the bureau established indicators and targets for its foreign assistance-related goals identified in the bureau's first multiyear strategic plan, and it reported results achieved toward each indicator. Since its elevation to a bureau in fiscal year 2012, the bureau has also completed four evaluations of counterterrorism-related programs it oversees, resulting in 60 recommendations. GAO's preliminary results show that the bureau had addressed about half of the recommendations (28 of 60) as of April 2015 but had not established time frames for addressing the remaining recommendations. GAO's preliminary analysis has also found that the bureau's coordination within State and with other federal agencies on the Countering Violent Extremism and Counterterrorism Finance programs generally reflects key practices for collaboration. For example, with regard to identifying resources, in cases where the bureau funded other U.S. agencies partnering on these programs, the funding mechanism was clear and laid out in interagency agreements.
Congress authorized the Emergency Relief program in Title 23, United States Code, Section 125, to provide for the repair or reconstruction of federal-aid highways and roads on federal lands that have sustained serious damage resulting from natural disasters or catastrophic failures from an external cause. Natural disasters such as floods, hurricanes, earthquakes, tornadoes, tsunamis, severe storms, or landslides all potentially qualify under the program. Catastrophic failure refers to the sudden and complete failure of a major element or segment of the highway system that causes a disastrous impact on transportation. This is a long-established federal function—Congress has provided funds for this purpose since at least 1928, and an Emergency Relief program has existed since 1956. The program supplements the resources of states and federal agencies to help pay for unusually heavy expenses that result from extraordinary conditions. The program provides states, Puerto Rico, the District of Columbia, and territories with funding above and beyond their regular federal-aid highway funding. FHWA's division offices in each state administer the program, and states implement the projects. The division offices process state highway agencies' applications for funding and make decisions on the eligibility of specific projects. Regulations currently define eligible disasters as those where the cost of damage would exceed $700,000 in program assistance in any state for a given disaster. The $700,000 threshold includes the damage cost for all damage sites resulting from the disaster. According to FHWA guidance, each prospective damage site must have at least $5,000 of repair costs to qualify for funding—a threshold intended to distinguish emergency relief work from maintenance. By law, FHWA can provide a state with up to $100 million in Emergency Relief funding for each natural disaster found eligible for funding.
However, Congress has passed special legislation lifting this cap for specific disasters. The Emergency Relief program is currently authorized at $100 million annually out of the Highway Trust Fund, and FHWA allocates these funds to states based on the states’ proportion of the total costs of all eligible projects. For example, if a state had 10 percent of the total estimated reimbursable costs for all Emergency Relief projects nationwide, that state would receive 10 percent of the available Emergency Relief funds. As with other FHWA programs, funding is provided to the states on a reimbursable basis. If Emergency Relief funds are not available, states may use other appropriate federal-aid program funds to initially pay for projects while awaiting reimbursement from the Emergency Relief program. The program’s regulations make a distinction between emergency and permanent repairs. Emergency repairs are to quickly restore essential highway traffic service and protect remaining facilities, and include such things as debris removal, construction of detours, regrading, and temporary structures. Permanent repairs restore seriously damaged highway facilities to predisaster conditions. In some instances, such as the destruction of a bridge, the complete replacement of the facility may be needed. In these cases the facility would be rebuilt to current design standards. By statute, the Emergency Relief program may fund up to 100 percent of emergency repair project costs within the first 180 days following the disaster. The program funds permanent repair projects, and emergency repair project costs after the first 180 days, at the percentage normally provided for work on that type of federal-aid highway. For example, the federal share for interstate highway projects is 90 percent of the cost, and the federal share for most other federal projects is 80 percent. 
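The pro-rata allocation and federal cost-share rules described above can be sketched in a short calculation. This is an illustrative sketch with hypothetical state cost figures; the function names are ours, not FHWA's, and the share logic is simplified to the two rates named in the text.

```python
def pro_rata_allocation(available, eligible_costs):
    """Split available Emergency Relief funds among states in proportion
    to each state's share of total eligible project costs."""
    total = sum(eligible_costs.values())
    return {state: available * cost / total for state, cost in eligible_costs.items()}

def federal_share(cost, days_since_disaster, emergency_repair, interstate):
    """Emergency repairs may be funded at 100 percent within the first 180
    days; afterward, and for permanent repairs, the normal federal-aid
    share applies: 90 percent for interstates, 80 percent for most others."""
    if emergency_repair and days_since_disaster <= 180:
        return cost
    return cost * (0.90 if interstate else 0.80)

# Hypothetical eligible project costs, in millions of dollars
costs = {"State A": 50.0, "State B": 30.0, "State C": 20.0}
shares = pro_rata_allocation(100.0, costs)
```

Under these hypothetical figures, State A holds 50 percent of nationwide eligible costs and so would receive 50 percent of a $100 million authorization, while a $10 million permanent interstate repair would be reimbursed at the 90 percent rate, or $9 million.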
Emergency Relief program regulations state that the program is not intended to fund the correction of preexisting nondisaster-related deficiencies. Additionally, the program is not intended to pay for “betterments” that change the function or character of the highway facility, such as expanding the capacity of roads. However, betterments are eligible for program funding if they pass a benefit-cost test that weighs their cost against the prospective cost to the Emergency Relief program for future repairs. Additionally, where it is not feasible to repair or replace an existing highway facility at its existing location, an alternative selected through the National Environmental Policy Act (NEPA) process, if comparable to the destroyed facility, is eligible for Emergency Relief funding. Except when betterments are justified, or when a relocation results from the NEPA process, program regulations state the cost of a project eligible for Emergency Relief may not exceed the cost of repair or reconstruction of a comparable facility. In addition to providing funds to the states, the Emergency Relief program also provides funding for the repair of roads on federal lands through the Emergency Relief for Federally-Owned Roads program. This program is intended for unusually heavy expenses associated with the repair and reconstruction of federal roads and bridges seriously damaged by a natural disaster or a catastrophic failure. FHWA’s Federal Lands Highway Division maintains, through interagency agreements, oversight of the Emergency Relief funds for projects administered by various federal agencies, including the Department of Defense, Army Corps of Engineers, U.S. Forest Service, National Park Service, Fish and Wildlife Service, Bureau of Reclamation, Bureau of Land Management, and Bureau of Indian Affairs. The program may fund 100 percent of the cost of repairs to federal roads. 
FHWA’s Emergency Relief program is one of a number of federal programs and activities that provide major disaster and emergency assistance to states and local governments. The Robert T. Stafford Disaster Relief and Emergency Assistance Act primarily establishes the programs and processes for the federal government to provide major disaster and emergency assistance—upon a governor’s request, the President can declare an “emergency” or a “major disaster” under the Stafford Act, triggering various emergency response activities such as debris removal, temporary housing assistance, and the distribution of medicine, food, and other consumables. The Federal Emergency Management Agency (FEMA), an agency of the Department of Homeland Security (DHS), is the agency responsible for administering the Stafford Act. As part of its responsibilities, FEMA provides funds to state and local governments to repair and replace roads damaged as a result of disasters that are not on the federal-aid highway system. Funding for FEMA disaster relief is drawn from the General Fund of the Treasury. During the 10-year period 1997 through 2006, FHWA has allocated over $8 billion to the states, the District of Columbia, Puerto Rico, U.S. territories, and other federal agencies to repair or replace highway facilities damaged by natural or man-made events. Of this total, 70 percent has gone to five especially hard-hit states that have experienced extraordinary or multiple disasters—California, Florida, Louisiana, Mississippi, and New York. For the 9-year period from 1998 through 2006, the time frame for which FHWA has data on individual disaster events, these very large events account for most of the financial demands on the program, a total of about $4.1 billion of the $6.6 billion allocated in that time frame. In addition, the large number of smaller events that occurred each year accounted for about $2.4 billion in demands since 1998. 
For some states that have experienced major or repeated disasters, the Emergency Relief program has provided a significant amount of funding, generally concentrated in a small number of states. During the 10-year period 1997 through 2006, FHWA allocated over $8 billion to states. (See app. II for a detailed list of state Emergency Relief allocations.) Of this amount, about 70 percent of all Emergency Relief allocations went to five states—California (about $1.4 billion, or 18 percent), Florida (about $1.6 billion, or 20 percent), Louisiana (about $1.2 billion, or 15 percent), Mississippi (about $1 billion, or 13 percent), and New York (about $352 million, or 4 percent). (See fig. 1.) Since the beginning of the program, all 50 states, the District of Columbia, Puerto Rico, and U.S. territories have received some FHWA Emergency Relief funds. The majority of Emergency Relief program funding for the 9-year period 1998 through 2006, the time frame for which FHWA had data on individual disaster events, has gone to five states as a result of a series of extraordinary disasters, including the World Trade Center terrorist attacks, Florida's 2004 hurricanes, and Hurricanes Katrina and Rita, among others. These very large events have each totaled from over $100 million to over $1 billion, as figure 2 illustrates. These very large disasters can be considered extraordinary events in the context of the Emergency Relief program because each of them exceeded the $100 million annual program authorization. Also, individual events exceeding $100 million in Emergency Relief allocations to a state require congressional legislation that exempts the state from the statutory limitation that no state may receive more than $100 million in Emergency Relief funds in 1 year for any single event.
Over time, these individual extraordinary disasters have placed greater financial demands on the Emergency Relief program than the numerous smaller eligible events that occur each year. During the 9-year period 1998 through 2006, extraordinary events resulted in about $4.14 billion in allocations to states (see table 1). Over the same period, smaller events, those requiring less than $100 million, required about $2.44 billion in emergency relief funding, or an average of $271 million per year. The allocations needed for smaller events may be thought of as a baseline cost for the program, the amount that was needed assuming no extraordinary event occurred. Because the program's annual authorization was set at $100 million during this period, the annual funding covered about 37 percent of what may be considered the baseline costs of the program. Finally, another measure of the program's funding need is the average allocation per individual disaster event. Under FHWA's classification, events are defined as disasters causing a federal share of at least $700,000 in damage to a state, with each state counted separately. Thus, Hurricane Katrina, which reached this level of damage in four states, counts as four events for the program, one each for Alabama, Florida, Louisiana, and Mississippi. From 1998 through 2006, the number of events per year varied from 13 to 47, and the median allocation per event was about $3.7 million. Appendix III provides a detailed list of event allocations from fiscal years 1998 through 2006. In recent years, annual demands on the Emergency Relief program have exceeded the $100 million annual authorization, resulting in a long-term fiscal imbalance and reliance on supplemental appropriations. More specifically, on average, the program's needed allocations for ordinary events—disaster events requiring under $100 million in federal funding—are 2.7 times the annual authorization.
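The baseline-cost figures above imply the shortfall arithmetic directly. The sketch below simply reproduces the dollar amounts reported in this section; the variable names are ours.

```python
smaller_event_total = 2.44e9  # allocations for sub-$100 million events, FY1998-2006
years = 9                     # FY1998 through FY2006
annual_baseline = smaller_event_total / years  # about $271 million per year
authorization = 100e6                          # static annual authorization

# The $100 million authorization covers about 37 percent of baseline costs,
# and baseline demand is about 2.7 times the authorization.
coverage = authorization / annual_baseline
demand_multiple = annual_baseline / authorization
```

These two ratios are the same comparison viewed from either side: funding as a fraction of baseline demand, and baseline demand as a multiple of funding.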
One reason for this funding shortfall is the program's static funding level, which has remained the same since 1972. Since 1990, the program has often relied on supplemental appropriations to make up for the funding shortfall, but because these supplemental funds are not provided on an annual basis, the program has experienced a fiscal imbalance, resulting in funding reimbursement backlogs that have placed a burden on some states. Furthermore, demands for Emergency Relief program funding may place a burden on the Highway Trust Fund, unless alternative funding is used. Despite the program's long-term fiscal imbalance and a declining Highway Trust Fund balance, FHWA is not recapturing unused program funds. FHWA allocated over $8 billion between fiscal years 1997 and 2006 to meet annual demand for the Emergency Relief program. This is an average of over $800 million a year for all events, significantly more than the program's $100 million annual authorization. Funding needs for extraordinary events—those events needing more than $100 million in funding—have averaged about $460 million annually since 1998, the earliest year for which FHWA has data on individual disaster events. Furthermore, annual demand for ordinary events—those events totaling less than $100 million—is also more than the $100 million annual authorization. As mentioned earlier, for fiscal years 1998 through 2006 the average annual funding need for ordinary events was $271 million (see fig. 3). This has resulted in an annual deficit between program demands and program funding. One reason for the shortfall between program funding and demand is the program's static annual authorization. The Emergency Relief program has been funded with an annual authorization of $100 million through contract authority from the Highway Trust Fund, with a $100 million per event obligation limit imposed since 1972.
However, after adjusting for inflation, the value of the annual authorization has decreased significantly over time, resulting in program demands exceeding annual program funding. The fiscal year 2005 authorization of $100 million is the equivalent of $26.4 million in 1972 dollars (see fig. 4). Stated differently, the $100 million annual authorization initiated in 1972 would need to be increased to over $378 million to have the same value in real (2005) dollars. Funding at the $378 million level would be more than sufficient to pay for the average annual cost of ordinary events from fiscal years 1998 through 2006—about $271 million in real (2005) dollars. Since 1990, the Emergency Relief program has frequently relied on supplemental appropriations to make up for the fiscal imbalance created by a static authorization coupled with additional program demand from extraordinary events (see fig. 5). In total, from fiscal years 1990 through 2006, Congress provided about $12.3 billion for the Emergency Relief program when including both annual authorizations and supplemental appropriations. As a result, a large majority of the funds—$10.6 billion, or 86 percent of the total during this period—have come through supplemental appropriations. There has been a consistent shortfall between the static $100 million annual authorization and the actual amounts needed for the Emergency Relief program (see fig. 6). As a result, between fiscal years 1990 and 2006, Congress passed supplemental appropriations for the Emergency Relief program 15 times. Historically the supplemental funds were drawn from the Highway Trust Fund which at the time had accumulated large balances. However, the Highway Trust Fund authorization is limited to $100 million, and under SAFETEA-LU, additional supplemental funds are to be appropriated from the General Fund. 
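The inflation adjustment cited above ($100 million in 2005 being equivalent to $26.4 million in 1972 dollars) can be reproduced from the price-level factor those two figures imply. The factor below is derived from the report's own numbers, not from an official deflator series, and is used here only for illustration.

```python
# Cumulative price-level factor from 1972 to 2005 implied by the report's
# figures: $100 million / $26.4 million, roughly 3.79.
factor = 100 / 26.4

# Today's $100 million authorization, deflated to 1972 dollars.
real_value_1972_dollars = 100e6 / factor   # about $26.4 million

# The 1972 authorization, inflated to 2005 dollars: the report's
# "over $378 million."
equivalent_2005_funding = 100e6 * factor
```

The second figure is what the annual authorization would need to be in 2005 to buy what $100 million bought in 1972, and it comfortably exceeds the roughly $271 million average annual cost of ordinary events.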
The fiscal year 2006 Emergency Relief program supplemental appropriations were taken from the General Fund as the Highway Trust Fund balances have diminished. Appendix IV provides a detailed list of supplemental appropriations from fiscal years 1990 through 2006. The Emergency Relief program has experienced reimbursement backlogs in recent years—reaching as high as $741 million in 2004—as a result of program demands from extraordinary events, declining real funding, and periodic supplemental appropriations. When nationwide Emergency Relief needs exceed available Emergency Relief funding, FHWA allocates the $100 million annual authorization proportionally to the states based on the ratio of the total available Emergency Relief funding to the total Emergency Relief needs. For example, if there are sufficient funds to pay for half of the approved allocations, all states receive half of the funds they requested. According to FHWA officials, once program funds are exhausted, states with eligible projects are placed on a reimbursement backlog list, which may build up over several years. As program funds become available with each new annual authorization, FHWA allocates the funds based on the reimbursement backlog list. States may provisionally utilize other federal-aid program funds to pay for projects while awaiting reimbursement from the Emergency Relief program. When Congress has provided the program with supplemental appropriations for extraordinary events, it has often included supplemental funds intended to clear the program's accumulated backlog. However, according to FHWA officials, in the interim, when Congress does not provide supplemental appropriations to clear accumulated backlogs, states go without full reimbursement.
While, according to FHWA officials, the agency’s financial management systems do not track reimbursement backlogs, congressional conference reports reference reimbursement backlogs dating back as far as fiscal year 1997, with balances ranging from $259 million to $741 million. Reimbursement backlogs may tie up available state highway dollars and affect the timely construction and repair of road facilities. In order to prevent delays, some state and local governments may borrow money to pay Emergency Relief program project costs, while other states may delay other planned nonemergency-related highway projects or delay permanent Emergency Relief program projects. During our site visits, we heard examples of the effects of reimbursement backlogs on the states we visited. For example, in North Dakota, state officials told us that one local government had delayed permanent Emergency Relief program road repairs until reimbursement funding became available. State officials in Mississippi delayed some regular state and federal-aid highway projects in order to fund Hurricane Katrina-related Emergency Relief projects, and told us these projects remained delayed until Emergency Relief reimbursements were received. In addition, Mississippi officials told us that they also utilized an established line of credit to fund some Emergency Relief projects and maintain some of their other planned projects while awaiting Emergency Relief reimbursement. Similar to the states we visited, federal land management agencies may also be affected by reimbursement backlogs. FHWA officials told us that on several occasions, federal land management agencies delayed initiating a needed repair because of lack of reimbursement funding. 
FHWA officials also told us that federal land management agencies are particularly burdened because they do not have highway infrastructure funding streams comparable to those of states. In almost all of our site visits, program officials stated that the Emergency Relief program’s reimbursement backlogs (i.e., delayed reimbursements) are a fiscal burden on state and local governments. This can be particularly true for states with smaller highway budgets, such as Mississippi and North Dakota, which may have fewer highway funds available to draw on during reimbursement delays than states with larger budgets. Estimates from both the Congressional Budget Office (CBO) and the President’s budget project the steady decline of the Highway Trust Fund balance, as estimated outlays exceed estimated revenues each year for 2006 through 2011. According to CBO, the uncertainty associated with Highway Trust Fund estimates implies that the Highway Trust Fund could exhaust its resources before the anticipated 2009 date. Because supplemental appropriations cannot be anticipated, projections of future demand on the Highway Trust Fund may not fully reflect the Emergency Relief program’s future effect on the fund, depending on how future emergencies are funded. Furthermore, future demand for a program driven by unpredictable events is necessarily uncertain. The results of the Highway Trust Fund’s declining balance can be seen in the two most recent supplemental appropriations to the Emergency Relief program. In the past, because the Highway Trust Fund maintained significant unexpended balances, the Emergency Relief program’s supplemental appropriations have been funded through the Highway Trust Fund. SAFETEA-LU designated the General Fund as the source for additional Emergency Relief funds, and the most recent two supplemental appropriations, passed in December 2005 and June 2006 to cover Hurricane Katrina costs and backlogged projects, have come from the General Fund. 
The change is at least in part due to the financial uncertainty of the Highway Trust Fund. According to the Congressional Research Service (CRS), because of the declining Highway Trust Fund balance, using the Highway Trust Fund for the Hurricane Katrina Emergency Relief supplemental appropriations would have constrained the ability of the Highway Trust Fund to fully fund the SAFETEA-LU-authorized highway programs over the life of the authorization. For these reasons, it was doubtful that the Highway Trust Fund could fund other large future Emergency Relief program supplemental appropriation needs. Under the Highway Trust Fund’s current structure, the historic pattern of funding major Emergency Relief projects from the trust fund is no longer sustainable. However, the alternative used in the most recent appropriations, the General Fund, also faces future demands that will place severe pressures on all discretionary programs, including those that fund transportation. Our simulations show that by 2040, revenues to the federal government might barely cover interest on the debt—leaving no money for either mandatory or discretionary programs—and that balancing the budget could require cutting federal spending by as much as 60 percent, raising taxes by up to 2½ times their current level, or some combination of the two. This impending fiscal crisis means that it will be difficult to fund extraordinary highway disaster needs for highway repairs and for other programs from this source. While the Emergency Relief program has experienced a fiscal imbalance, FHWA officials do not routinely recapture unused funds. These unused funds may come from (1) unobligated balances available to the states, (2) obligated balances where the funds are no longer needed to complete projects, or (3) funds Congress has directed to specific disasters that remain available after the projects are completed. 
FHWA officials explained that states may retain these unused Emergency Relief obligations after projects are completed, and those funds can be used for future disasters in the state. However, while states with completed projects retain these unused obligations for future disasters, other states with immediate Emergency Relief needs may experience a reimbursement backlog. While FHWA officials said they are currently beginning to identify state-obligated funds that show no activity for a given time period, the agency has not moved to recoup unneeded funds. FHWA’s Office of Financial Management can query program data to identify federal-aid contracts with obligated funds where there has been no expenditure or payment activity for 1 year, 2 years, or longer. Our analysis of FHWA financial data found over $158 million in inactive unexpended balances from Emergency Relief program allocations between fiscal years 1985 and 2006. Program officials acknowledge that allowing states to hold on to inactive unexpended balances to pay for future events enables states to bypass any backlog queue and fund their projects before older projects in other states are addressed. However, the amounts that could be recaptured from these sources are too small to put the program on a solid financial footing. In addition, the Emergency Relief Manual states that FHWA headquarters officials should coordinate with FHWA division officials to identify unobligated Emergency Relief balances that states will not use by the end of the following fiscal year and reallocate these funds to states with immediate Emergency Relief funding needs. Unobligated funds may occur when a state’s estimated need for a disaster exceeds actual project costs. The practice of identifying and reallocating unobligated funds is intended to avoid accumulating a large balance of allocated but unobligated Emergency Relief funds and to help manage available funds nationwide as effectively as possible. 
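The kind of inactivity query described above can be sketched as a simple filter; the field names and contract records below are hypothetical illustrations, not FHWA’s actual data schema:

```python
# Sketch of an inactivity query: flag contracts with obligated funds
# still unexpended and no expenditure activity for a chosen number of
# years. Records and field names are hypothetical.
from datetime import date

contracts = [
    {"id": "ER-001", "obligated": 4.2, "expended": 4.2, "last_activity": date(2006, 5, 1)},
    {"id": "ER-002", "obligated": 9.0, "expended": 6.5, "last_activity": date(2003, 8, 15)},
]

def inactive_unexpended(contracts, as_of, years):
    """Return contracts with unexpended obligations and no activity since the cutoff."""
    # Simple year subtraction; a production query would handle leap days.
    cutoff = date(as_of.year - years, as_of.month, as_of.day)
    return [c for c in contracts
            if c["expended"] < c["obligated"] and c["last_activity"] < cutoff]

flagged = inactive_unexpended(contracts, as_of=date(2006, 9, 30), years=2)
print([c["id"] for c in flagged])  # ['ER-002']
```

Varying the `years` parameter corresponds to the 1-year, 2-year, or longer inactivity windows the report mentions.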
Emergency Relief program officials told us that identifying unneeded unobligated balances is difficult and there has not been a specific effort to identify these funds in recent years. According to FHWA officials, these funds may remain because projects have not been completed or have not fully utilized available program funds at the close of the fiscal year. The unobligated balance at the end of fiscal year 2006, which includes funding for the 2005 Gulf hurricanes and other funds yet to be obligated for ongoing projects, was over $1.8 billion. Finally, events with designated supplemental appropriations may have remaining funds that cannot be used for any other disaster. Congress has on occasion provided a supplemental appropriation to the Emergency Relief program with designated funds to be used for specific disasters. It has done so for disasters such as the Loma Prieta earthquake, Hurricane Andrew, the attacks on the World Trade Center, and Hurricane Katrina. Unless specifically worded otherwise, these funds cannot be recaptured by FHWA and used for other Emergency Relief disasters. Congress has more recently used language that allows for unused designated funds to be used for other approved Emergency Relief projects. However, this language was not always used in the past and has resulted in unneeded balances that cannot be recaptured by FHWA. As a result, these balances remain unexpended unless the state uses the funds for additional work related to damage from the disaster. During our site visit to California, we found the state still has $62 million in obligated but unexpended Emergency Relief funds designated for the 1989 Loma Prieta and 1994 Northridge earthquakes. It is unlikely that most of these funds, particularly those for the Northridge earthquake, will be needed for additional work, according to California Department of Transportation (Caltrans) officials. 
However, these funds remain at the state level, and barring a rescission by Congress, remain available until expended. Given that these events took place 17 and 12 years ago respectively, the emergencies have long since passed, and it is reasonable to expect related emergency projects to be complete. Moreover, because the damage occurred on the federal-aid system, the state could still use its normal federal-aid highway funding to pay for any small residual cost, if the need arose. For these reasons, these funds are potentially available for rescission. The expansion of program eligibility criteria to fund larger and more costly projects and congressional action to increase funding for certain projects or disasters above what the program would ordinarily provide have both contributed to the fiscal imbalance and concerns about long-term sustainability of the program. Law and regulations define qualifying criteria for disaster events, and link the federal share of funding under the Emergency Relief program to the share of funding provided under other federal-aid highway programs. However, environmental requirements, community concerns, congressional direction, and unique localized circumstances have increased the scope and costs of projects, increased the portion of project costs funded by the program, expanded the definition of program-eligible events, and resulted in projects that go beyond the original intent of the program. These include instances that go beyond restoration, involve replacement rather than repairs, entail expansion of the type of work that the program may fund, or involve waivers of the federal match. Emergency Relief program regulations define disaster events that qualify for program funding—and set criteria for projects that can be funded— which help contain program expenditures. 
For instance, regulations define eligible events as natural disasters—sudden and unusual natural occurrences, such as floods, hurricanes, landslides, and earthquakes—and catastrophic failures—the failure of a major segment of a highway due to an external cause. Additionally, the program is not intended to supplant other federal or state funds for correction of preexisting nondisaster-related deficiencies. It is expected that restoration to predisaster conditions will be the typical type of repair accomplished through the Emergency Relief program. FHWA’s Emergency Relief program regulations limit the types of work that are eligible for program funding. The regulations state that betterments—additional features or improvements that change the function or character of the highway facility—are eligible for funding only if they are economically justified; that is, the cost of the betterment must be weighed against the risk of recurring damage that would be eligible for Emergency Relief funding and the cost of future repairs. The regulations also state that except for those cases where betterments are justified, the total cost of a project eligible for Emergency Relief funding may not exceed the cost to repair or reconstruct a comparable facility. However, where it is not feasible to repair or replace an existing highway facility at its existing location, an alternative selected through the National Environmental Policy Act (NEPA) process, if comparable to the destroyed facility, is eligible for Emergency Relief funding. Emergency Relief program regulations also establish various dollar-limit criteria that define program eligibility and funding for an affected state. By law, FHWA can provide a state with up to $100 million in Emergency Relief funding for each natural disaster found eligible for funding. Also, each prospective damage site must have at least $5,000 of repair costs to qualify for funding—a threshold intended to distinguish emergency relief work from maintenance. 
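The per-site threshold works as a simple filter over damage sites; the site names and repair costs below are hypothetical:

```python
# Minimal sketch of the $5,000 per-site eligibility threshold described
# above. Site names and damage amounts are hypothetical.

SITE_MINIMUM = 5_000  # minimum repair cost for a site to qualify

def qualifying_sites(damage_by_site):
    """Return only the sites whose repair costs meet the per-site minimum."""
    return {site: cost for site, cost in damage_by_site.items()
            if cost >= SITE_MINIMUM}

damage = {"bridge approach": 48_000, "sign cluster": 3_200, "culvert": 7_500}
print(qualifying_sites(damage))  # {'bridge approach': 48000, 'culvert': 7500}
```

As discussed later in this section, how broadly a "site" is defined determines what damage clears this threshold, so the filter’s outcome depends on decisions made before it is applied.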
Some emergency relief projects require a comprehensive environmental review, and when such reviews take place, the project may expand significantly in scope and cost. Repair projects funded under the Emergency Relief program, like other federal-aid highway projects, must comply with the requirements of NEPA. NEPA, which applies to all federal agencies and to states receiving federal funding, requires an assessment of the environmental impact of federal programs and actions. Emergency repair projects to restore existing facilities qualify as “categorical exclusions” under NEPA, and normally do not require any further environmental study or mitigation. However, large projects such as replacing a bridge or relocating a length of roadway that has been destroyed can trigger a need for more extensive review—an environmental impact statement (EIS) or an environmental assessment (EA). An environmental impact statement presents a range of proposed alternatives for a project and analyzes the cumulative effects of each. The EIS process also requires public notice of relevant hearings and meetings, and the draft and final EIS are made available for public comment. An environmental assessment may be required for a project that does not clearly qualify as a categorical exclusion or clearly require an EIS. The environmental assessment process concludes with either a finding of no significant impact or a decision that an EIS is required. The process of completing an EIS can result in a finding that replacing the destroyed facility at the same site is not possible, and that a more costly relocation that addresses environmental or community concerns is needed. The NEPA process addresses environmental issues, but the hearings that are part of the process allow the public and other interested parties to raise other concerns. 
The need to address public concerns through the NEPA process has resulted in the Emergency Relief program funding larger and more costly projects than it might otherwise have approved. One such project followed the Loma Prieta earthquake. In October 1989, the Loma Prieta earthquake struck northern California, collapsing a two-tiered portion of Interstate 880 through Oakland known as the Cypress Viaduct. Immediately after the earthquake, FHWA and Caltrans planned to replace the Cypress Viaduct as it existed prior to the earthquake, and FHWA prepared a cost estimate of $306 million. However, this route had divided an Oakland neighborhood, and opposition from residents and the city government led Caltrans to consider several alternative alignments. Because of the size and complexity of these alternatives, an environmental impact statement was required. After completion of the EIS in 1992, Caltrans selected an alignment that replaced the original 1.5-mile structure with a 5-mile structure that circumvented the neighborhood. GAO reported on the status of this project in May 1996. As we noted then, the Emergency Relief program regulations allow for funding betterments—such as relocation, replacement, upgrades, or added features—only when they are economically justified to prevent recurring damage. Although the Cypress Viaduct relocation involved a significantly different design with more extensive construction and higher costs, FHWA officials approved the relocation based on the results of an EIS, and did not consider the project a betterment. Therefore, Emergency Relief program regulations, which place limits on funding improvements or changes in the character of a destroyed facility, were not applicable. Emergency Relief funding for the relocated Cypress Viaduct was approved without (1) making a finding that relocation was economically justified to prevent recurring damage, or (2) placing limits on the use of Emergency Relief funds. 
The project was carried out as a permanent restoration project and completed in 1998 with the Emergency Relief program funding approximately $811 million of the more than $1.0 billion project cost. In another case, the environmental review process led to the Emergency Relief program funding a very large project to relocate a section of a cliff-side highway that has been frequently closed by slides. The cost of this project will also exceed the recent costs to the Emergency Relief program of keeping the current highway open. The Devil’s Slide area in California is a formation of steep, geologically unstable cliffs on the Pacific coast, south of San Francisco. State Route 1 (S.R. 1), originally constructed in 1937, runs along the coast at the base of Devil’s Slide, and has long been subject to recurring rock slides. From 1982 to the present there have been three significant Devil’s Slide events that have cost the Emergency Relief program $17 million to reopen S.R. 1. Following a major landslide over the winter of 1982-1983 that closed S.R. 1 for nearly 3 months, Caltrans began to pursue relocating S.R. 1 away from the slide area. The Devil’s Slide project required a full environmental impact statement, which was begun in 1983 and completed in 1986. The EIS set out three options, one of which involved relocating S.R. 1 inland, away from the slide area, and FHWA selected this as the preferred alternative. The environmental document was challenged in U.S. District Court, and the project was enjoined in September 1986, prior to the start of any construction. In orders issued in 1989 and 1990, the court ultimately determined that the EIS was deficient only in regard to noise impacts. Thereafter, FHWA and Caltrans began work on a supplemental EIS to address noise impacts. In the years that had passed since the original EIS, community attitudes had begun to shift in favor of relocating S.R. 1 by way of a tunnel through San Pedro Mountain behind Devil’s Slide. 
Public comments in the 1995 hearings for the supplemental EIS, and a local referendum in 1996, called for consideration of a tunnel alternative. A second supplemental EIS, completed in 2002, resulted in selection of a tunnel route. FHWA had previously determined that the federal share for an emergency relief project is guided by the rules and regulations in effect at the time of the disaster. In the case of Devil’s Slide, those are the provisions of the Surface Transportation Assistance Act (STAA) of 1982, which established the federal share as 100 percent. Also, the Transportation Equity Act for the 21st Century (TEA-21), enacted in 1998, had directed that the Devil’s Slide project was Emergency Relief program eligible. The current Devil’s Slide project is a pair of 4,200-foot-long, 30-foot-wide tunnels through the San Pedro Mountain, connected at the north end to a 1,000-foot bridge spanning a valley, and connected at the south end to a realignment of S.R. 1. Construction began in early 2006, more than 20 years after the 1982-1983 event. The bridge portion is currently under construction, and a contract was awarded for the tunnel portion in December 2006. The total project will cost an estimated $441 million, and is scheduled to be completed in 2011. FHWA has allocated $241 million for the project, and an additional $200 million in future Emergency Relief funds will be needed to complete the project. Following the completion of the Devil’s Slide project, Caltrans will relinquish the bypassed section of S.R. 1 to the county, which will maintain it for bicycle and pedestrian use. During the two decades that the Devil’s Slide project has been delayed, S.R. 1 has remained open, and subject to periodic slides that resulted in road closures, including a 5-month closure in 1995 that cost about $3 million to clean up, and a closure from April to early August in 2006 that cost $12 million in Emergency Relief funding. 
S.R. 1 carries significant commuter and business traffic through the Devil’s Slide area, and road closures due to slides have been a significant hardship for commuters and the local communities. However, the goal of the Emergency Relief program is to restore damaged or destroyed roadways to essential traffic, which in the case of Devil’s Slide had been accomplished through cleanup and restoration. Because the slides were a long-standing problem, replacing S.R. 1 with a tunnel could have been addressed through the state’s regular federal-aid highway program. Congressional action has increased the amount of Emergency Relief program funding provided to certain disasters and projects. The devastation caused by Hurricane Katrina at the end of August 2005 included the destruction of the 1.6-mile U.S. Highway 90 Biloxi Bay Bridge in Mississippi (see fig. 8). The bridge provided essential emergency, commercial, and residential traffic between the city of Biloxi, Mississippi, and the city of Ocean Springs across Biloxi Bay. The original bridge was a four-lane bascule bridge. Mississippi Department of Transportation (DOT) proposed to replace it with a six-lane high-rise fixed structure bridge. Mississippi DOT justified the increased capacity, from four lanes to six lanes, based on a prehurricane traffic model that was not updated to consider posthurricane projections. An environmental assessment for the replacement bridge project was completed in November 2005 with a finding of no significant impact, but other issues were raised in the course of Mississippi DOT working with the communities through the NEPA process. These included accommodations for pedestrian and bicycle traffic and protection of existing trees, but a more significant concern was raised by a local shipbuilder about the proposed height of the new bridge. According to Mississippi DOT officials, DOT initially proposed a bridge that would provide an 85-foot clearance above Biloxi Bay. 
During a public comment period on the proposed bridge design, a local shipbuilder expressed concern that the height was not sufficient to allow for future ships to pass under the bridge. Mississippi DOT revised its proposed bridge design to provide a 95-foot clearance, which increased the cost of the bridge from an estimated $275 million to the current cost of $339 million. As noted in the November 2005 final environmental assessment document, the plan was to limit Emergency Relief program funding to the portion of the project required to reestablish the function of the original bridge, widen the structure to six lanes, and construct it to current standards—other work would be eligible for funding with normal federal-aid program funds. However, in December 2005, Congress passed an emergency supplemental appropriation that addressed the Gulf Coast hurricanes of 2005, and authorized 100 percent federal funding for the repair or reconstruction of hurricane-damaged highways, roads, and bridges. This effectively included the Biloxi Bay Bridge. As of December 2006, construction of the new bridge has begun, with completion expected in May 2008. Another instance of Congress increasing the Emergency Relief program’s funding to a project followed Hurricane Ivan striking the Florida panhandle near Pensacola in September 2004, causing severe structural damage to both spans of the I-10 Bridge over Escambia Bay. In the aftermath of the hurricane, the Florida DOT decided it would replace rather than repair the bridge, because of the age and the extent of damage to the old bridge. Like the old bridge, the new bridge would also have two spans, but built to a higher elevation to better protect against storm surge damage, with three lanes on each span—increasing the capacity of the old bridge. Under FHWA’s Emergency Relief Manual, program participation in project funding can be limited depending on the circumstances involved. 
Specifically, when repair and restoration of a damaged facility are possible, but the state prefers to build a replacement facility, Emergency Relief funding can be limited to what the program would have contributed to the cost of repairing the damaged facility. FHWA estimated the cost to repair the original bridge to be $179 million. FHWA division officials were in discussions with the Florida DOT about the level of Emergency Relief program funding for the project when, in December 2004, passage of the Consolidated Appropriations Act of 2005 directed that replacement of the Escambia Bay Bridge be federally funded. The program would fund 90 percent of the project cost, the federal share for work on interstate highways. As of December 2006, the bridge is under construction, with one of the spans nearing completion, and FHWA officials informed us the entire bridge project is expected to be finished ahead of the scheduled December 2007 completion date at an estimated cost of $245 million. Although FHWA could have limited the Emergency Relief program’s participation to 90 percent of the prospective repair cost of the Escambia Bay Bridge, congressional action ensured that the Emergency Relief program would have a larger financial commitment in the project. In the Emergency Relief program regulations, a natural disaster is described as a sudden and unusual natural occurrence, and a catastrophic failure is described as the sudden failure of a segment of the highway system due to an external cause. In one circumstance, Congress and FHWA have decided that a gradual and predictable basin flooding event, which was not a sudden occurrence, warranted treatment as a disaster for Emergency Relief program eligibility, and have defined its eligibility in legislation, regulation, and revisions to the Emergency Relief Manual. Devils Lake in North Dakota lies in a large natural basin and lacks a natural outlet for rising water to flow out of the lake. 
Starting in the early 1990s, the lake level has risen dramatically—over 25 feet from 1993 to the present. The volume of water in the lake has quadrupled in that time, flooding or threatening nearby communities, farms, reservation lands, and roads (see fig. 9). According to North Dakota DOT officials, many roads in the Devils Lake area were built in the 1930s and 1940s, when the lake’s water levels were near their historic low point. Initially, the approach to preserve roads from being inundated was to build up the grade of roads that were threatened by the rising waters of Devils Lake. FHWA amended its Emergency Relief program regulation in December 1996 to explicitly provide that raising road grades in response to an unprecedented rise in basin water levels was an Emergency Relief-eligible activity. FHWA’s next Emergency Relief Manual revision in 1998 identified basin flooding as an Emergency Relief-eligible disaster. In April 2000, FHWA also issued a memorandum that provided authorization for grade raises in basin flooding situations based on forecasted rising water levels—a unique provision for the Emergency Relief program, which otherwise funds only postdisaster repair or restoration. Some roads have already had their grades raised more than once, and according to North Dakota DOT officials, one bridge had been built up three times in 4 years. As of September 2006, North Dakota DOT officials informed us that they had essentially completed raising the road grades to the levels currently allowed, based on existing forecasts for lake levels, but further grade raises might be necessary in the future if lake levels continue to rise. As of September 2006, the Emergency Relief program has funded over $145 million for projects related to Devils Lake flooding. Additional problems at Devils Lake led to Congress authorizing FHWA to fund an additional type of project through the Emergency Relief program. 
According to North Dakota DOT officials, grade raises to roads in the Devils Lake area begun in the mid-1990s were constructed with culverts embedded in the roadway embankments to allow water to flow through the embankment, in order to equalize water pressure on each side of the raised roadway. According to North Dakota DOT and FHWA division office officials, in 1997 some communities and the local Indian reservation plugged some of these culverts, without FHWA’s or the state DOT’s knowledge, to prevent water from flowing through and onto their land. As a result, in these areas, the raised roadways were now acting as dams, which increased their risk of failure. As additional grade raises to these roads became necessary, FHWA was prohibited by regulation from authorizing additional work on such roads unless their safety could be certified by the agency responsible for the safety of dams—in this case the Army Corps of Engineers. However, the Corps of Engineers determined that it could not certify the safety of the existing roads acting as dams without major modifications, such as the construction of additional embankments. In 2005, the passage of SAFETEA-LU reauthorized the FHWA highway program, and authorized up to $10 million of Emergency Relief program funds to be expended annually, up to a total of $70 million, for work in the Devils Lake region of North Dakota to address roads acting as dams, which were not previously eligible for Emergency Relief funds. This $10 million comes out of the $100 million annual authorization of contract authority that funds the Emergency Relief program, effectively reducing Emergency Relief funding available to other states to $90 million. SAFETEA-LU also included language authorizing FHWA to carry out necessary work in connection with Devils Lake roads acting as dams, and it exempts the work in the Devils Lake area from the need for further emergency declarations to qualify for Emergency Relief funding. 
As of September 2006, FHWA has been working with the Bureau of Indian Affairs to address high-priority sites on the Indian reservation adjacent to the lake where roads were acting as dams, and it has been meeting with North Dakota DOT and the Corps of Engineers to develop solutions for other sites. These solutions may include building dams or dikes to control lake flooding or protect the raised roadways. While the damage and financial loss caused by this flooding are very real, defining a gradual and predictable event—which is not a sudden occurrence—as an eligible disaster represents a broadening of the definition of what is a disaster for purposes of the Emergency Relief program, and places an additional claim on limited program funding. The North Dakota DOT estimates the cost of all of the additional work at Devils Lake may well exceed $200 million. In its first fiscal year 2006 supplemental appropriation for the Emergency Relief program, Congress directed that the Emergency Relief program shall fund 100 percent of all repair and reconstruction of highways, roads, and bridges necessitated by Hurricanes Katrina, Rita, and Wilma, because the states’ resources were inadequate to deal with the string of disasters. For example, Mississippi’s allotment for Hurricane Katrina damage was about $1 billion, and a 20 percent local share would have cost the state about $200 million. To put the level of damage in perspective, total prior 2005 Federal-aid Highway Program funding for Mississippi was about $402 million. This can especially affect large replacement projects. For example, the original Biloxi Bay Bridge was on a noninterstate federal-aid highway and the Emergency Relief program would ordinarily fund 80 percent of the project cost. However, as a result of the supplemental appropriation, the Emergency Relief program will fund the full cost of the Biloxi Bay Bridge project rather than the 80 percent that would normally be funded under the program criteria. 
Also, as noted earlier, Congress authorized program funding for the replacement of the I-10 Escambia Bay Bridge. In the absence of congressional direction, the Emergency Relief program may have funded only 90 percent of the prospective repair cost of $179 million. There have also been other instances where Congress has waived the requirement for state matching funds or waived the limit on funding provided to any one state, to support states that have been overwhelmed by the costs of terrorist attacks or natural disasters. However, this has added to the costs borne by the Emergency Relief program. Congress authorized 100 percent federal funding for Emergency Relief program highway projects in its 2002 supplemental appropriation to fund recovery from the September 11, 2001, terrorist attacks. Congress has also acted to waive the $100 million maximum limit on the Emergency Relief program funding that could be provided to a single state for a disaster eight times since 1989—in the two supplemental appropriations cited above, and in six other supplemental appropriation acts. FHWA’s division offices have been inconsistent in how they identify eligible damage sites, which has a potential impact on program funding. The Emergency Relief Manual states that, generally, a site is an individual location where damage has occurred. However, a site could also incorporate several adjoining locations within a reasonable distance where similar damage has occurred, such as damage to traffic signs over an area. The manual cautions, however, that aggregating damage locations to form a site should be done with care, as it is not the intent of the Emergency Relief program to pay for damage that a transportation agency would normally perform as maintenance. We found that different FHWA division offices accepted differing definitions of what constituted a site. 
For example, in Florida, where hurricanes and storms have leveled signs and signals over a wide area, whole counties have been designated as sites. In California, where wildfires have destroyed signs and guardrails over a wide area, state DOT officials told us that 20- to 30-mile stretches of highway have been treated as single sites. On the other hand, an official in the Ohio division office said that he generally limits the scope of a site to the distance a person could see in both directions, although that is not an absolute rule. The physical size of a site that an FHWA division office will accept has implications for the Emergency Relief program, because a site must have at least $5,000 worth of damage to qualify for Emergency Relief funds. When a major disaster covers a large area, and there is clearly sufficient damage to qualify for Emergency Relief funding, treating widespread damage as a limited number of damage sites can simplify program administration. In addition, in the case of a more limited disaster—with damage around the $700,000 level needed to qualify for Emergency Relief funding—allowing sites to incorporate large areas, with a higher dollar amount of damage, might allow a state to qualify for Emergency Relief program funds, while a state held to a narrower site definition might not. There is a continuing need for a federal role to assist states in responding to and recovering from natural disasters. The long history of federal support to states to repair highway infrastructure in the wake of disasters, and the potential for states to be financially overwhelmed by the burden of the resulting costs, especially after extraordinary events, argue strongly for a continued Emergency Relief program. 
However, even where a continued federal role is warranted, the nation’s pending fiscal crisis requires reexamining whether the program’s current mission is fully consistent with its initial or updated statutory mission, whether significant expansion of scope has occurred, and whether the program is affordable and financially sustainable over the long term, given known cost trends, risks, and future fiscal imbalances. From this perspective, the Emergency Relief program faces future sustainability concerns that should be addressed, concerns exacerbated by the gradual expansion of its eligibility criteria. While predicting the future financial requirements of disasters is not possible in any precise way, on the basis of past demands on the program, it is reasonable to expect a continuing fiscal imbalance if the program remains at the current funding level. Thus, Congress has the opportunity to establish a more sustainable funding level and to identify a stable long-term source of funding consistent with future demands. Given current projections on the status of the Highway Trust Fund and the recent history of large costs incurred by the states responding to disasters, the program does not appear to be sustainable in the long term if funding is derived from the Highway Trust Fund, as currently structured. In fact, the current authorization from the Highway Trust Fund does not cover the ordinary events states experience, and the supplemental appropriations from the General Fund are funding both extraordinary and ordinary events. The National Surface Transportation Policy and Revenue Commission can help: it will be examining alternatives to replace or supplement the fuel tax as the principal revenue source for the Highway Trust Fund and ways to put the Highway Trust Fund on a sustainable basis. 
In theory, sufficient revenues could allow all Emergency Relief funding, including funding for extraordinary events, to be financed by the Highway Trust Fund, the approach taken when the Highway Trust Fund held large balances. This would have the advantage of relying on a predictable source of revenue intended for highway projects. Alternatively, Congress could, as it also has done in the past, provide some or all emergency funding from the General Fund. This might be particularly appropriate for extraordinary events because such events are comparatively rare, can occur on a large multistate level, can overwhelm all levels of government, and cannot be reasonably planned and budgeted for. This would also place the Emergency Relief program on the same footing as FEMA’s disaster relief programs, which are financed through the General Fund. While this approach would help the short-term sustainability of the Highway Trust Fund, because the nation faces a long-term fiscal crisis, relying solely or heavily on the General Fund is a limited option. In order to put the program on a sound financial footing, additional alternatives to address the fiscal imbalance need to be considered. Revising the program’s criteria to place limitations on the use of Emergency Relief funds to fully finance projects with scope and costs that have grown as a result of environmental and community concerns is one possibility. Looking for alternative funding for projects designed to solve chronic problems, as opposed to immediate road opening needs, is another. These changes would place a greater burden on the states, which would have to pay for project expansion driven by nonemergency factors and for projects to address chronic, predictable conditions, while saving federal funds for larger disasters. The funding imbalance makes FHWA’s fiscal stewardship of the Emergency Relief program especially important. 
Although improved stewardship by FHWA alone cannot close the program’s fiscal imbalance, FHWA is not routinely recapturing all unused program funds once a project is complete. In fact, states with immediate disaster needs experience reimbursement backlogs, while unused program funds are maintained by states with no current disaster needs. Furthermore, the lack of a standard definition of what constitutes a damage site opens the door for many smaller costs to be charged against the program, and may result in higher federal reimbursements. In order to put the Emergency Relief program on a sound financial footing, Congress should consider the expected future demands on the program and reexamine the appropriate level and sources of funding—including whether to increase the $100 million annual authorized funding and whether the Highway Trust Fund, the General Fund, or some combination would allow the program to accomplish its purpose in a fiscally sustainable manner. Congress should also consider tightening the eligibility criteria for Emergency Relief funding, either through amending the purpose of the Emergency Relief program or by directing FHWA to revise its program regulations. Revised criteria could include limitations on the use of Emergency Relief funds to fully finance projects with scope and costs that have grown as a result of environmental and community concerns. In order to help put the Emergency Relief program on a more sound financial footing, we recommend that the Secretary of Transportation direct the Administrator, FHWA, to revise its emergency relief regulations to tighten the eligibility criteria for Emergency Relief funding, to the extent possible within the scope of FHWA’s authority. Revised criteria could include limitations on the use of Emergency Relief funds to fully finance projects with scope and costs that have grown as a result of environmental and community concerns. 
In order to improve FHWA’s financial oversight of Emergency Relief funds, FHWA should require division offices to annually coordinate with states to identify unexpended obligated and unused unobligated Emergency Relief funds that will not be needed for projects, withdraw the unneeded amounts, and determine if they are needed for other eligible projects. In the event these funds are not needed for other eligible projects, FHWA should identify these funds to Congress for rescission or to offset future appropriations. FHWA also should identify for rescission unexpended funds that have been directed to specific disasters when those funds are no longer needed. Finally, in order to ensure that similar types of events result in consistent determinations of eligibility, FHWA should clarify its Emergency Relief Manual to better specify the definition of a site, and whether under certain circumstances variations from the basic definition are permitted. We provided copies of a draft of this report to DOT for its review and comment. DOT provided its comments in an e-mail message on February 5, 2007. DOT generally agreed with the facts presented but took no position on our recommendations. DOT also provided technical comments, which we incorporated into this report as appropriate. We are sending copies of this report to congressional committees and subcommittees with responsibilities for DOT. We will also make copies available to others upon request. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix VI. 
The objectives of this report were to review (1) the total amount of Emergency Relief program funding allocated to the states in recent years, how this funding was distributed among the states, and the events for which it was allocated; (2) the sources of funding used to finance these emergency relief allocations and the financial challenges facing the program; and (3) the scope of activities eligible for funding and the extent to which the scope of eligible activities has changed in recent years. To examine the total amount of Emergency Relief funding allocated to states in recent years, we interviewed and obtained documentation from the Federal Highway Administration’s (FHWA) Office of Financial Management and analyzed Fiscal Management Information System (FMIS) data on program trends, including allocations by state, total program allocations, unexpended balances of “inactive” projects, and unobligated balances. We assessed the reliability of the information system extracts and queries by reviewing relevant system documentation, interviewing agency officials who worked with the information system, and conducting manual data testing. We found that state allocations data were available only for two specific time periods—cumulatively beginning at program inception or the last 10 fiscal years—rather than from fiscal year 1985 to the present, as requested. We determined the data to be sufficiently reliable for analysis of state allocation data from fiscal year 1997 through fiscal year 2006, the last 10 years. We also found that Emergency Relief projects may be initially funded through other federal-aid programs and converted to Emergency Relief funding in FMIS once funds are available. Consequently, the Emergency Relief program obligations in FMIS may not be complete, as they may not include funds that will be converted to Emergency Relief. We determined that FMIS Emergency Relief program obligations data were not sufficiently complete for analysis of project obligations by state. 
These data were not used in any of our analyses and therefore had no impact on our findings. To examine the value of the annual $100 million authorization over time in constant dollars, we adjusted the $100 million authorization using the annual values of the gross domestic product (GDP) price index for fiscal years 1972 through 2005. Fiscal year 2005 is the most recent year for which accurate GDP price index annual values were available. To examine the purposes for which Emergency Relief funds were allocated, we interviewed the Emergency Relief program manager and obtained data on program allocations by event from him, rather than using FMIS data. While FMIS contains fields that document disaster sequence number and fiscal year, there is not a simple way to calculate total obligations by event. Event cost data have been maintained by the Emergency Relief program manager for fiscal years 1998 through 2006. We assessed the reliability of program data on allocations by event by interviewing the program manager and manually testing program data against congressional appropriations legislation, and we found the data to be sufficiently reliable for analysis of event costs for fiscal years 1998 through 2006. To examine the sources of funding used to finance the Emergency Relief allocations, we analyzed supplemental and annual authorizations using the legislative history of the Emergency Relief program from fiscal years 1990 through 2006. We also used the legislative history of the program from fiscal years 1985 through 2006 to obtain information on program reimbursement backlogs. Because FHWA officials do not maintain historical reimbursement backlog data, we relied on periodic references to reimbursement backlogs in the legislative history. 
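The constant-dollar adjustment described above amounts to deflating each nominal amount by the ratio of GDP price index values. A minimal sketch follows; the index values are placeholders for illustration, not the actual GDP price index series used in the analysis.

```python
# Sketch of a constant-dollar adjustment using a GDP price index.
# The index values below are placeholders for illustration only,
# not the actual GDP price index series used in this report.

def to_base_year_dollars(nominal, index_current, index_base):
    """Express a nominal amount in base-year (constant) dollars."""
    return nominal * index_base / index_current

AUTHORIZATION = 100_000_000  # annual ER authorization, fixed since 1972

# With hypothetical index values of 25.0 for 1972 and 100.0 for 2005,
# the unchanged $100 million authorization is worth $25 million in
# 1972 dollars, i.e., about one-fourth of its original value, which
# is consistent with the erosion this report describes.
real_2005 = to_base_year_dollars(AUTHORIZATION, 100.0, 25.0)
```

The same function, applied year by year with the actual index series, produces the constant-dollar trend discussed in the report.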
To examine the scope of activities eligible for Emergency Relief program funding and the extent to which the scope of eligible activities has changed in recent years, we obtained and reviewed program manuals, guidance, and documentation for program eligibility criteria, policies, and requirements. We also interviewed FHWA officials at U.S. Department of Transportation headquarters who are responsible for providing guidance and policies for the Emergency Relief program and the Emergency Relief program for federally owned roads. In addition, we conducted site visits to five states (California, Florida, Mississippi, North Dakota, and Ohio) and interviewed state department of transportation and FHWA officials, including managers, team leaders, and engineers, who are responsible for administering the Emergency Relief program, as well as other FHWA highway programs. We also interviewed officials from FHWA’s Eastern Federal Land Highway Division Office in Virginia. We gathered relevant program documentation from each site visit, including project Detailed Damage Inspection Reports, environmental assessments, and cost analyses. To capture a variety of disaster events and projects, we selected the five states considering (1) the dollar amount of program allocations from program inception through fiscal year 2005, (2) the dollar amount of program allocations from fiscal years 2001 through 2005, (3) geographical dispersion, (4) whether the state sustained damage from the 2005 Gulf hurricanes, and (5) whether the state had Emergency Relief projects costing more than $1 million within the last 10 years. To examine the extent to which the scope of eligible activities has changed in recent years, we reviewed the legislative history of the program from fiscal years 1985 through 2006. 
We identified congressional waivers of program requirements such as the requirement for state matching funds and the $100 million maximum limit on program funding that could be provided to a single state per fiscal year. We conducted our work in California, Florida, Mississippi, North Dakota, Ohio, Virginia, and Washington, D.C., between April 2006 and December 2006 in accordance with generally accepted government auditing standards. In addition to the individual named above, other key contributors to this report were Steve Cohen, Assistant Director, and Ashley Alley, Robert Ciszewski, Colin Fallon, Don Kittler, and Amber Yancey-Carroll.
|
Since 1972, Congress has authorized $100 million a year for highway disaster recovery needs through the Federal Highway Administration's (FHWA) Emergency Relief (ER) program. Increasingly, the program's actual costs have exceeded this amount, and Congress has provided additional funding. Because of this fiscal imbalance between program funding and program needs, we reviewed ER under the Comptroller General's authority to determine (1) the total funding, distribution of funds among the states, and disaster events funded; (2) the sources of funding provided and financial challenges facing the program; and (3) the scope of activities eligible for funding and how the scope of eligible activities has changed in recent years. GAO's study is based on financial data, document analysis, stakeholder interviews, and site visits, among other methods. During the 10-year period of 1997 to 2006, ER provided about $8 billion to states, the District of Columbia, Puerto Rico, American territories, and federal agencies, a total of 56 states and other jurisdictions. About 70 percent of these funds have gone to 5 states--California, Florida, Louisiana, Mississippi, and New York--that have been especially affected by major disaster events, such as Hurricane Katrina. Since 1990, 86 percent of the ER program has been funded through supplemental appropriations as the program's annual demands have exceeded the $100 million annual authorization. Even excluding extraordinary disasters, those exceeding $100 million in eligible damage per event, the program still needed $271 million per year for smaller eligible events. Meanwhile, the program has been authorized at a constant $100 million level since 1972, resulting in the current authorization being worth about one-fourth the authorization level of 1972. 
Until Hurricane Katrina, Congress funded extraordinary disasters through the Highway Trust Fund, but with Trust Fund balances dwindling, in 2005, Congress designated the General Fund as the source of future ER supplemental funding. But the nation faces a pending fiscal crisis, raising concerns about future use of the General Fund and financial sustainability of the ER program. Despite funding concerns, FHWA does not routinely recapture unused program funds by reviewing the program's state balances to identify potentially unneeded funds. GAO also identified $62 million in potentially unneeded statutory allocations from past disasters that could be recaptured. Activities eligible for ER funding include the repair or reconstruction of highways and roads that are supported by the Federal-aid Highway program, and of roads on federal lands that have suffered serious damage from natural disasters or catastrophic failures due to external causes. ER funds are not intended to replace other federal-aid, state, or local funds to increase capacity, correct nondisaster-related deficiencies, or make other improvements. However, contributing to future financial sustainability concerns is the fact that the scope of eligible activities funded by the ER program has expanded in recent years with congressional or FHWA waivers of eligibility criteria or changes in definitions. As a result, some projects have been funded that go beyond repairing or restoring highways to predisaster conditions--such as the $441 million Devil's Slide project and $811 million I-880 project in California--projects that grew in scope and cost to address environmental and community concerns. Also, Congress and FHWA have expanded eligibility to allow additional types of work, such as a gradual flooding of a lake basin, to be funded. Congress has also directed that in some cases the program fully fund projects rather than requiring a state match. 
Finally, varying interpretations of what constitutes a damage site have led to inconsistencies across states in FHWA's application of ER eligibility standards.
|
The U.S. Information Agency’s (USIA) missions are to explain and advocate U.S. policy, provide information about the United States, build lasting relationships and mutual understanding among the peoples of the world, and advise U.S. decisionmakers on foreign public opinion and its implications for the United States. Over the years, its programs have shifted in emphasis from one part of the world to another in response to foreign policy initiatives and direction from the administration as well as to congressional mandates. Its budget for fiscal year 1995 was about $1.4 billion. Like other government agencies, USIA has faced the prospect of uncertain, but likely reduced, budgets as the administration and the Congress grapple with balancing the federal budget. In fiscal year 1996, USIA received about $1.1 billion, 17 percent less than it requested. The United States has engaged in foreign information programs, international broadcasting, and publicly funded educational and cultural exchanges for nearly 60 years. In 1938, the Congress began funding the first international educational exchange program in the sciences. U.S. funding of broadcasting began in 1942 with a 15-minute broadcast in German, which was soon followed by broadcasts in Italian, French, and English. After World War II, the Congress decided to introduce additional information and cultural programs overseas to promote U.S. policies and interests. In 1946, it mandated a peacetime international exchange program, beginning with the establishment of the Fulbright Exchange Program. This mandate was expanded to include numerous exchanges under academic, artistic, visitor, citizen, youth, and speaker programs. In 1948, the Congress permanently established a U.S. international information and cultural exchange program, which included the Voice of America (VOA). VOA’s objectives are to provide accurate, objective, and comprehensive news; portray a balanced view of American society; and explain U.S. 
government policy. VOA was followed by Radio Free Europe/Radio Liberty (RFE/RL) in the 1950s as a private nonprofit company to provide uncensored news to the Soviet Union and the Eastern bloc. In 1985 and 1990, Radio Marti and TV Marti, respectively, were established to fill a void in the news and information Cuban citizens received due to censorship by the Cuban government. Between 1981 and 1994, appropriations for public diplomacy programs increased as the Congress funded new missions such as additional exchanges for the newly independent states of the former Soviet Union. In fiscal year 1995, funding began a downward trend as a result of the consolidation of international broadcasting activities and reductions in exchanges and in salaries and expenses (see fig. 1.1). As shown in the figure, USIA received $1.077 billion in fiscal year 1996, of which about $405 million (38 percent) is being spent for international broadcasting and radio construction: VOA broadcasts in 47 languages, RFE/RL broadcasts in 21 languages, Radio and TV Marti broadcasts to Cuba, and Worldnet television broadcasts; $310 million (29 percent) is for personnel, infrastructure, outreach activities, and headquarters support for USIA’s overseas posts; $210 million (19 percent) is allocated for educational and cultural exchanges and related salaries and management expenses; $109 million (10 percent) is for headquarters salaries, operating expenses, a technology fund, and information programs; and $44 million (4 percent) is for grants to the National Endowment for Democracy, the East-West Center, and the North/South Center. In fiscal year 1996, USIA is operating its programs at 199 overseas posts in 143 countries, as compared to 200 posts in 125 countries in fiscal year 1981. The growth in country operations largely resulted from opening posts in newly independent countries. 
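The budget shares above follow directly from the component amounts. A quick sketch, using the figures as cited in this chapter (in millions of dollars), with percentages rounded to whole numbers:

```python
# Recompute the fiscal year 1996 USIA budget shares cited above from
# the component amounts (in millions of dollars). Percentages are
# rounded to whole numbers, so components may not sum exactly to the
# stated total.

TOTAL = 1_077  # USIA fiscal year 1996 appropriation, in millions

components = {
    "international broadcasting and radio construction": 405,
    "overseas post support": 310,
    "educational and cultural exchanges": 210,
    "headquarters, technology fund, information programs": 109,
    "grants (NED, East-West Center, North/South Center)": 44,
}

shares = {name: round(100 * amount / TOTAL)
          for name, amount in components.items()}
# shares values come out to 38, 29, 19, 10, and 4 percent,
# matching the percentages cited in the text
```

This is illustrative bookkeeping only; the category labels are shorthand for the fuller descriptions in the text.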
In 1981, USIA maintained 16 posts in Eastern European countries and the former Soviet Union, but it now has 31 posts in this region. At the same time, the level of its U.S. and foreign service national (FSN) personnel decreased; in 1981, USIA employed 642 Americans and 2,960 FSNs overseas, and by the end of 1996 it will have 523 American and 2,318 FSN employees. Additionally, RFE/RL employs approximately 400 staff at its offices in Prague in the Czech Republic, VOA employs about 600 personnel to manage transmission sites, and both fund correspondents. Since the end of the Cold War, the Congress has appropriated funds to establish new exchange programs. For example, in 1992, the Congress authorized the Edmund S. Muskie Fellowship Program to provide grants for graduate study to educators, government officials, business leaders, and scholars from Soviet bloc countries for study in law, economics, business, and public administration. In 1994, the Congress established the Mike Mansfield Fellowship Program for U.S. citizens to study the Japanese language and to serve 1 year in an office of the government of Japan. The U.S. Agency for International Development (USAID) transferred funds for USIA’s exchange programs budget for Eastern and Central Europe (fiscal year 1991 to present) and the former Soviet Union (fiscal year 1993 to present). Table 1.1 shows USIA appropriations by major account for fiscal years 1981-96. In fiscal year 1995, USIA spent at least $3 million on programs in more than 40 countries and more than $11 million on programs for 7 countries (see table 1.2). Cuba was the largest single target of public diplomacy programs; in fact, expenditures for Radio and TV Marti exceeded the total costs for all activities targeted for any other country. Further budget restrictions are likely given the degree of budgetary stringency that the federal government is expected to face in the next few years. 
If total discretionary spending is held to levels envisioned in this year’s congressional budget resolution, it will fall by almost 6 percent by 2002. It will be difficult to exempt USIA from bearing its share of this decrease. Various funding scenarios indicate the degree to which USIA’s budget could be reduced. While the fiscal year 1997 request is slightly higher than the $1.1 billion appropriated for fiscal year 1996, the Office of Management and Budget (OMB) has projected that funding for USIA will be reduced thereafter, falling below $900 million by fiscal year 2000. Under the fiscal year 1996 7-year concurrent budget resolution, USIA could receive less than OMB projected. If USIA were to take a proportional share of the proposed reductions in the international affairs account, the congressional plan would call for reductions in funding for USIA to $865 million by 2000. Because of inflation, costs for current services would increase by about 4 percent. Additionally, the fiscal year 1997 House budget resolution assumes that much of USIA will gradually be privatized or eliminated. Therefore, there is a substantial gap between the costs of maintaining the status quo and potential funding levels. Because of these fiscal constraints, USIA may not be able to continue all of its current functions and operations. Three funding scenarios for USIA are shown in figure 1.2. The Chairman of the House Committee on the Budget asked us to examine USIA’s reform and cost-cutting efforts and identify options that could enable USIA to adjust to reduced budgets. Our review focused on the primary components of USIA activities, namely overseas posts, educational and cultural exchange programs, and international broadcasting. For each component we looked at the reinvention and cost-cutting activities to date, plans for the future, and potential options for additional cuts should funding be significantly reduced. 
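The "substantial gap" reasoning above can be illustrated with a short sketch. Treating the roughly 4 percent cost growth as an annual compounding rate is an assumption made here for illustration, not OMB's or the Committee's actual methodology, and the dollar amounts are the approximate figures cited in this chapter.

```python
# Illustrative sketch of the funding-gap reasoning above. Amounts are
# the approximate figures cited in this chapter (in millions of
# dollars); compounding cost growth at 4 percent a year is an
# assumption made purely for illustration.

FY1996_FUNDING = 1_077       # fiscal year 1996 appropriation
ANNUAL_COST_GROWTH = 0.04    # assumed annual current-services growth
PLAN_2000 = 865              # congressional plan level for 2000

# Current-services cost in 2000 if costs grow 4 percent a year
# over the four intervening years.
current_services_2000 = FY1996_FUNDING * (1 + ANNUAL_COST_GROWTH) ** 4

gap = current_services_2000 - PLAN_2000
# gap comes to several hundred million dollars, illustrating the
# substantial gap between status quo costs and potential funding
```

Under these assumptions the gap is on the order of $400 million, which is why the chapter concludes that USIA may not be able to continue all of its current functions and operations.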
We identified options for lowering costs within the context of specific programs and activities by reviewing Agency studies and USIA Inspector General reports, meeting with a wide variety of Agency officials, and analyzing whether current conditions and program objectives remained relevant when compared to conditions when the programs first began. We did not attempt to evaluate the importance and relevancy of public diplomacy as a foreign policy tool or assess the amount of funding that would be appropriate. We met with officials and reviewed documents at USIA and State Department headquarters in Washington, D.C., and at offices in Germany and the Czech Republic. We also met with officials from private organizations that sponsor exchanges, U.S. private companies, and foreign government officials to gain their perspective on USIA programs and used the results of a 2-day GAO symposium on foreign affairs issues. We conducted work at RFE/RL in the Czech Republic. We also relied on work previously conducted at USIA headquarters and at posts in Mexico and Guatemala and our prior reports on international broadcasting and exchange programs. We conducted our review between August 1995 and August 1996 in accordance with generally accepted government auditing standards. USIA has been involved in a multiyear restructuring plan to become a leaner, more efficient, and more technologically advanced agency. USIA has reduced staff, eliminated some activities, and reengineered some of the ways it operates. Both the Congress and the administration have been instrumental in this change. The House Conference Report on Fiscal Year 1993 Appropriations for the Department of State indicated that the amount of funds provided to USIA for fiscal year 1993 would be substantially below the amounts needed to maintain then-current program levels and allow USIA to open new posts in the countries of the former Soviet Union. 
As a result, the report directed USIA to review all programs, including the library programs at USIA reference and research centers, and submit proposals for reductions in lower priority programs. In 1993, Vice President Gore’s National Performance Review and OMB targeted many agencies for cuts. The President directed USIA to submit a streamlining plan to the Director of OMB by December 1993. The plan was to address the goal of reducing by half the number of supervisory or managerial positions, reducing micromanagement and red tape, realizing cost savings, and improving the quality of service and productivity. Furthermore, the National Performance Review recommended the consolidation of all nonmilitary international broadcasting to help meet the President’s deficit reduction plan. It was anticipated that the consolidation would achieve savings of $400 million over a 4-year period. In response, the Agency developed a multiyear strategy to downsize and reinvent its operations. The consolidation of VOA and RFE/RL eliminated RFE/RL’s engineering and transmission function and overlapping broadcast hours to the same target nations, which resulted in a 32-percent reduction in total broadcast hours. Both VOA and RFE/RL have reduced the size of their workforce since fiscal year 1994 in response to these changes. The International Broadcasting Bureau eliminated over 300 positions, and RFE/RL eliminated over 1,150 positions, reducing costs significantly. Table 2.1 compares fiscal year 1994 funding with the fiscal year 1997 request to indicate the annual savings achieved through the consolidation. In addition to broadcasting consolidation, USIA has taken a variety of measures to reinvent itself. USIA reduced its headquarters staff, targeting duplication, bureaucratic layering, outmoded activities, and the inefficient distribution of its workforce, and began a reorganization. 
It replaced its Policy and Programs Bureau with a smaller Information Bureau, eliminating more than 150 positions, removing management layers, and developing systems and information to better support the needs of the posts. The Information Bureau is now using advanced communications technology, such as digital video conferencing, to perform traditional agency functions in a more cost-effective manner. Furthermore, USIA established a technology modernization fund and created a technology steering committee to enhance technology planning and decision-making. In response to congressional concerns, USIA also began converting walk-in libraries in developed countries to technologically advanced information centers. USIA also surveyed its overseas posts to determine the relative impact and priority of its activities and programs. In response, USIA eliminated the production of regional magazines and exhibits that Agency officials determined were no longer the most effective means of reaching foreign decisionmakers. In fiscal year 1996, USIA is streamlining its overseas operations by eliminating about 115 U.S. and 435 FSN positions. USIA also plans to review the structure and staffing of its Educational and Cultural Affairs Bureau and to work with other foreign affairs agencies to establish common administrative services and eliminate unnecessary and duplicative practices. Reengineering and new automated administrative processes are expected to save $8 million and further reduce related staffing. Furthermore, USIA is participating in the review by the President's Management Council to increase the effectiveness of the U.S. overseas presence while reducing its cost. The review will focus on streamlining overseas operations, sharing administrative support, and making better use of information systems and communications technology.
In recognition of some of its achievements, the Vice President presented USIA a National Performance Review "Hammer Award." This award symbolizes the federal government's commitment to tearing down bureaucracy and rebuilding a new government that works better and costs less. In commenting on a draft of this report, USIA said that it is instituting changes to cut costs while preserving essential missions. USIA said that its core functions remain valid and that "preventive diplomacy" can help spare America from more expensive crisis-driven international engagements. USIA stated that it understands that the intrinsic value of many traditional programs is no longer enough to justify their continuation; there must be a direct benefit to U.S. policy interests. USIA conducts a wide variety of activities at its 199 overseas posts to inform the public in 143 countries and to encourage relationships with it. These activities range from disseminating information on U.S. policy to managing academic exchange programs. (These activities are discussed in app. I.) In fiscal year 1996, USIA is spending $310 million, or about 29 percent of its budget, on personnel, infrastructure, programs, and headquarters activities to support overseas operations. If appropriations are substantially reduced, USIA may wish to consider reducing (1) the number of countries in which it has posts, (2) the staff size of posts and the level of overseas activities, and (3) the infrastructure it maintains overseas. When weighing these options, consideration must be given to the extent to which the reductions will diminish the U.S. ability to influence foreign publics. USIA's philosophy has been to maintain a public diplomacy presence wherever a State Department mission is located. Closing posts can yield significant cost reductions, especially if posts have large infrastructure or personnel costs. Costs of overseas posts range from $43,000 in Suriname to $19.2 million in Japan. USIA has closed some posts.
However, the Agency has maintained posts in countries that, by its own criteria, are relatively less important to U.S. interests. In fiscal year 1995, USIA operated posts in 67 countries where it believed the United States had limited public diplomacy goals, at a cost of more than $36 million (plus $23 million in exchanges). One post, for example, had four U.S. foreign service officers and 21 FSNs assigned and cost $1.4 million a year to operate. Exchanges to that country cost an additional $643,000. In another country, three U.S. and five foreign nationals were employed, with post and exchange costs totaling $2.5 million. In April 1996, USIA announced that it was abandoning its principle of universality because of budget constraints. USIA states that, as a result, its overseas presence now includes 13 fewer posts, in 4 fewer countries, than in fiscal year 1995. Developing baseline requirements for activities and the number of people needed overseas could help USIA accommodate budget reductions. Overseas staffing levels are particularly important because personnel costs consumed more than half the cost of USIA's overseas operations in fiscal year 1995. In some countries, these costs were even higher: personnel costs in Germany and Japan consumed 65 percent and 68 percent, respectively, of the total USIA country budget. In Germany, USIA spent $9.5 million for the salaries and benefits of 126 FSNs and $2.4 million for 26 U.S. employees. USIA is now downsizing its overseas personnel levels, which cost approximately $168 million in fiscal year 1995. Individual posts may lose one or more positions based on a 1993 USIA system that ranks countries by importance and by other characteristics, such as population, size, and USIA's potential to affect U.S. interests there. For each rank, USIA established a benchmark in 1994 for personnel levels and programs; posts with staffing levels above this benchmark would lose staff.
However, this benchmark is based on historic staffing and program levels and therefore may be above actual staffing requirements. For example, one USIA official stated that USIA management recognizes that the post in Germany will be too large even after downsizing because it reflects the structure established after World War II. A more comprehensive approach would be required if there were a major funding reduction. In June 1995, the USIA Director decided to design a strategic vision for the Agency's overseas presence, assuming that funding could be reduced by as much as 40 percent. The model acknowledged that the Agency could no longer afford to do some things and that many posts might have to drop entire products, services, and programs. In response, Agency managers developed a building-block approach under which all USIA overseas posts would include a core advocacy mission, and USIA would add information activities, exchanges, and cultural programs only if a country were important to U.S. interests and if USIA's efforts could be effective. The model called for deemphasizing media and public opinion reporting because other agencies and the private sector already provide such reports, eliminating programs in the arts because they do not add to what is being done privately, and reducing grants to highly developed countries. Top management concluded that without such a redesign, Agency inertia would result in merely a downsized status quo, which would not yield an effective overseas structure. However, we were told that top management could not reach a consensus on the model, so as of August 1996, no actions had been taken. USIA has not conducted the assessments required to determine whether programs that were developed in a different era have outlived their usefulness or whether the value they add is worth the investment. Agency officials believe it is difficult to link a program to a desired result, and existing evidence of impact is largely anecdotal.
Such assessments are critical to USIA's ability to sustain a major reduction and remain viable. One activity that we believe merits such review is USIA's student advising operation. USIA spends about $2.6 million annually to subsidize more than 400 educational advisory centers worldwide that provide information about the U.S. system of education. Some of these centers are housed in USIA offices and are fully funded by the U.S. government. Others are operated by host country universities or U.S. nonprofit organizations and are partially funded by USIA. An additional $1.4 million is spent on training, materials, and other activities. USIA believes that it is in the best interests of the United States to support student advising because international students spend nearly $7 billion a year in the United States, contributing substantially to the U.S. economy, and American students are introduced to different cultures, enhancing diversity. However, the USIA Inspector General recently concluded that new worldwide trends to internationalize higher education, advancements in communication technology, and the increased sophistication of non-U.S. government-sponsored educational advising institutions indicate that a guidance and oversight role for USIA is more appropriate than an operational one. Overall, the USIA Inspector General believes that the growth of private sector counseling services, coupled with dwindling USIA resources, suggests that it is now an appropriate time for USIA to turn its educational advising role over to the private sector. USIA's English language training program, which originated in 1941, is also a candidate for review.
The purpose of this program is to encourage the teaching of English; develop host country institutional capabilities to teach English; directly teach English in Africa, the Near East, and Asia; and design materials to supplement host countries' classroom texts in more than 140 countries, including Western countries such as Brazil, France, and Mexico. USIA supports the program in part because it allows the United States access to people and institutions that it might otherwise be denied, such as universities in Islamic countries, and promotes a better understanding of and appreciation for American culture and values in the curricula of these institutions. Other programs, specific to one or more posts, should also be considered for review if budgets are significantly reduced. For example, a large post like Germany may employ translators, operate printing facilities, and issue post-generated materials whose value would need to be assessed in a time of declining budgets. One example is a publication, developed and distributed by the USIA post in Germany, that contains statements made by U.S. officials on foreign policy questions. USIA data indicate that it maintains more than 70 cultural centers, libraries, and branch offices overseas. Because they may not be co-located with an embassy and require staff to deal directly with the public, they are often expensive to operate. In Germany, for example, the fiscal year 1995 cost to operate six cultural centers (called America Houses) was nearly $9 million, covering 77 staff and activities such as reference centers with online databases, student counseling, and cultural events. Lease costs for USIA's three branch offices in Japan exceed $350,000 a year, and the cost for offices in Milan, Italy, was approximately $137,000.
USIA stated that between fiscal years 1992 and 1996 it closed 30 cultural centers, 12 of them in conjunction with decisions to close posts in countries such as Iraq, Somalia, Lesotho, and Guyana. USIA could cut costs by finding alternatives to its cultural centers. For example, the USIA Inspector General believes that binational centers are a cost-effective alternative to cultural centers. Binational centers are private, autonomous institutions established to promote mutual understanding between the United States and host countries. USIA may have minimal or no funds invested in the centers and may or may not assign staff. USIA successfully encouraged the formation of a binational center when funding limitations forced it to close an America House in Germany. USIA collaborated with private industry and the local German government to establish a German-American Institute to further relations through cultural and educational events. This decision could cut USIA's annual operating costs in Germany by as much as half a million dollars. Of the 126 existing binational centers, 101 are located in Latin America, 16 in Europe, 4 in the Far East, and 5 in the Middle East. USIA officials said that the Agency has been forced to incur costs for office space when an embassy or a consulate does not have sufficient room available to house USIA. Nevertheless, when space is available, colocating with the State Department offers a more cost-effective approach. In 1996, USIA plans to terminate a $366,000 annual lease in Seoul, Korea, and a $455,000 annual lease in Singapore and move into embassy facilities in both countries. We were also informed that when USAID leaves the Czech Republic, space may become available for USIA to move onto embassy property and relinquish its lease, which costs $100,000 a year. If, after assessing the impact of its overseas programs, USIA believes they remain critical to U.S. foreign policy goals, it would still be in a position to reduce costs.
USIA would be better able to accommodate reduced funding levels by (1) obtaining further financial support from the private sector, (2) charging fees for more of its products, and (3) ensuring that improved communication technologies are effectively integrated into how USIA delivers information. Historically, USIA has had a close relationship with the private sector, particularly with educational and nonprofit institutions that support international exchanges. Recently, USIA headquarters and posts have sought to interest the private sector in supporting other USIA programs to lessen government costs. For example, the Vice President instructed USIA to seek creative funding arrangements for its overseas counseling activities. In response, USIA entered into an agreement with a private company to assume control of the previously USIA-managed advising center in Singapore. The company will pay all costs, but the center will still carry the USIA name and logo. USIA has sought to maximize its funding by charging for some of its activities, and the Congress authorized USIA to supplement direct appropriations with the proceeds. In fiscal year 1995, USIA earned $3.5 million from customers of its worldwide English language training program. Posts sell materials to clients at prices that cover shipping, administrative, and markup costs. Posts retain 95 percent of the proceeds to spend on programs, such as their English training specialists, or to help host countries develop institutions to teach English. Five percent of the proceeds is returned to headquarters for support. To a much lesser degree, USIA can recoup some of the costs associated with student advising. All USIA-supported student advisory centers can charge for their services. According to USIA officials, however, USIA posts cannot charge for services or headquarters-supplied products. The only exception is independently developed materials, which in Germany are prepared in German, published, and sold directly to students.
USIA determined that, as a global information agency in the information age, it needed to modernize its communication technology, and in fiscal year 1996 it embarked on a modernization plan. Its headquarters reorganization supported the integration of new technologies. It was envisioned that by exploiting electronic technology, USIA could close down high-overhead public access libraries in Europe, replace them with information centers, and open home pages on the Internet for easy access by users. Home pages may include speeches and other statements of U.S. officials, information on country post activities, clips of U.S. news articles, and other information for researchers. While not explicitly stated, the underlying assumption appears to be that technology would significantly reduce and change USIA's involvement in supplying information. We observed, however, that some USIA officials in the field appear to view technology as an addition to, rather than a replacement for, their long-standing programs. One official noted, for example, that information centers were opened at the cost of closing traditional libraries that lent out books and other materials. Furthermore, USIA believes that technology does not negate the need to target individuals and institutions it believes are influential in shaping opinions or the need to ensure that the public receives accurate, comprehensive coverage of U.S. policies. USIA officials with whom we spoke believe that technology can never replace the personal contact of USIA officials and may in fact create an even greater need for USIA. They note that the proliferation of information will require USIA to play an even greater role in explaining and sifting through all the available data. USIA said that no one model can shape all overseas posts, and it is recasting the size, scope, and focus of some of its operations and phasing out separate cultural center operations whenever feasible.
Regarding the privatization of its student advising program, USIA stated that it will continue to cultivate private support of student advising but commented that it is not in the U.S. government's interest to fully privatize student counseling. USIA also noted that the goals of its English language program (orienting foreign teachers toward American English and materials so that students will be more likely to study in the United States rather than in other English-speaking countries, and developing ties with the United States) are not generally shared by commercial programs. USIA manages a variety of exchange programs to foster mutual understanding between the people of the United States and other countries. Appendix II provides detailed information about these programs. In fiscal year 1996, these exchanges cost about $210 million, plus approximately $29 million to manage them. In recent years, funding levels have not permitted USIA to maintain the same number of exchanges it supported in the past. Should funding be further reduced, options to cut costs include (1) eliminating certain exchanges entirely, (2) reducing the amount of funds USIA allocates to each program, or (3) obtaining more financial support from the private sector or foreign governments. The advisability of implementing any or all of these options would need to be evaluated along with the impact such actions might have on U.S. bilateral relationships and on the promotion of ties between private citizens and organizations in the United States and abroad. USIA currently manages some U.S. government exchange programs that have existed for more than 50 years. In 1994, USIA academic exchanges accounted for less than 24 percent of all U.S. government-funded international exchange and training activities. The Agency has evaluated these programs, but the evaluations fell short of assessing the usefulness of the programs for promoting U.S. foreign policy objectives.
Whether the federal government still needs to fund each exchange, whether the exchange is targeted at the most appropriate countries, whether it is unique and unavailable from the private sector, and whether it is effective are questions requiring review should the budget for exchanges be significantly cut. USIA manages a variety of academic scholarships. The best known is the Fulbright Academic Exchange Program, which involves the exchange of about 4,700 U.S. and foreign students, research scholars, lecturers, and teachers annually. Additionally, USIA manages other academic exchanges, such as the Hubert H. Humphrey Program, under which mid-career professionals from developing countries receive a year of specially designed academic study and professional internships; an undergraduate exchange for economically disadvantaged Central American students; and the Edmund S. Muskie Fellowship Program, under which mid-career professionals and qualified students in the newly independent states pursue graduate study in the United States in the fields of business administration, economics, law, and public administration. When the Fulbright Program was established, few foreign students came to the United States for studies. Clearly, one objective of the program was to provide funds to spur such interchange, and this was borne out in subsequent years. According to the Fulbright Foreign Scholarship Board, in 1948, 500 British graduate students were pursuing graduate studies in the United States, and of these, 200 were under the Fulbright Program. Since the United States first began funding scholarships, conditions have changed. The number of foreign students in the United States reporting personal or private resources as their primary source of funding has increased markedly. In 1950, 7.7 percent of foreign students in the United States reported the U.S. federal government as their primary source of funding. In 1994, about 453,000 foreign students were studying in U.S.
colleges and universities. In the 1994-95 academic year, only about 1.2 percent, or about 5,400 students, received funding from the U.S. government as their primary source of support. The Fulbright Program accounted for about 1,600 of the students receiving U.S. government funds. Similarly, the number of U.S. students abroad has increased dramatically. In 1969, approximately 18,000 U.S. students were studying abroad. During the 1993-94 school year, more than 76,000 U.S. students attended foreign educational institutions. Some officials believe USIA could decrease funding for exchanges with western industrialized countries. One USIA official argued that in times of severe budget reductions, the Agency should direct more of its resources to those countries with the greatest potential to disrupt the world order. A State Department official asserted that USIA should concentrate more of its programs in Eastern Europe and the former Soviet Union, where information on the United States is lacking. Another State official asserted that USIA should operate in poorer countries where social, political, and cultural development is in the beginning stages. Although Western Europe accounts for approximately 20 percent of all exchange participants, some European and other industrialized countries receive relatively large programs. The concentration of exchange programs in some of these countries began shortly after World War II. For example, the U.S. government began exchanges with Germany in 1945 as part of a larger effort to assist Germans in creating a new society modeled on western democratic concepts. The U.S. government also engaged in democratization efforts in Japan. The exchanges initiated under these efforts evolved into the Fulbright Program and other USIA exchanges. Germany and Japan have become modern democracies with large economies, and many students from these countries study in the United States.
In 1994, Japan, with over 45,000 students studying in the United States, was the leading country of origin for foreign students. Germany was the leading country of origin in Europe for foreign students studying in the United States. In the 1994-95 school year there were 8,500 such students. Despite the large number of German and Japanese students studying in the United States on their own, USIA still maintains a large program of exchanges in both countries. USIA’s program with Germany ranks second of 187 countries in which USIA has programs. In 1994, 2,481 Germans traveled to the United States and 861 Americans traveled to Germany on USIA exchange programs. USIA’s program with Japan is the 14th largest in terms of participants. In 1994, 305 Japanese traveled to the United States and 125 Americans traveled to Japan on USIA exchanges. USIA and U.S. embassy officials asserted that maintaining a large exchange program with Germany is still important. According to USIA officials, exchanges with Germany help the United States maintain the strong relationship between the United States and Germany and reach former East German citizens who have not had any experience with democratic principles. Both the U.S. Ambassador and public affairs officer in Germany said it was important to maintain a good relationship with Germany because it is emerging as the leading economic power in Europe. The Ambassador also asserted that the United States cannot rely on past relationships to maintain its influence in the region. USIA has made similar assertions about the need to maintain and nurture its relationship with Japan. Nevertheless, there is disagreement within USIA over the level and appropriate share of exchanges with such countries. One USIA official, for example, believes that the number of exchanges with Germany is unnecessary because of the flow of exchanges funded by other sources. 
Another official offered that if private sector exchanges are occurring without USIA, USIA exchanges could be merely duplicating private sector efforts. USIA and embassy officials with whom we met believe that the international visitor and citizens exchange programs are particularly important because they are directly related to U.S. mission goals. These programs have few or no parallels in the private sector. As a result, some have suggested that these should be the last exchanges targeted for cuts. Furthermore, USIA officials informed us that academic exchanges involving government-to-government agreements may also directly support U.S. objectives and should be maintained. For example, the officials told us that the International Visitors Program can be linked to U.S. foreign policy goals more readily than long-term academic programs because the visitors' exchanges are directly tied to USIA goals as described in the posts' country plans. Furthermore, participants are selected by post officials, including the ambassador and the public affairs officer. To illustrate, USIA planned an International Visitor Program for a group of German government and private sector officials around the theme of foreign policy challenges facing western nations. The USIA post especially wanted to target persons from former East Germany who had had little contact with the United States but whose views would be important in shaping future German policy. In commenting on the program, one participant, who was an official of the youth wing of a political party in the eastern states, informed USIA that the trip gave him a much more favorable attitude toward the United States.
His activities included a week in Washington, D.C., to meet with staff and representatives from the federal government, academia, lobbying organizations, and think tanks; another week in Miami, Florida, to explore the foreign policy implications of Cuban and Haitian refugees and immigration through meetings with journalists, local business people, and others; an academic seminar in Lincoln, Nebraska, to discuss a variety of topics; a stop in San Jose, California, to obtain information on current trade issues and military downsizing; and a few days in New York City, New York, to meet with individuals from a variety of organizations to discuss human rights issues. USIA officials in Bonn, Germany, estimated that this particular exchange cost $12,658. Officials in Washington and the U.S. Ambassador to Germany described the International Visitors Program as a vital tool in achieving U.S. foreign policy goals. A private sector official who manages private as well as USIA exchanges said the private sector rarely offers exchanges to political leaders. Though the private sector could and sometimes does conduct professional visitor exchanges, these exchanges are based on economic needs, not U.S. foreign policy considerations. USIA can also use the Citizens Exchange Program to meet more immediate foreign policy needs. For example, USIA funded a number of citizens exchange grants focusing on public administration and local government development, business administration/management training, economic and educational reform, rule of law, and elections. These exchanges have taken place with citizens from a variety of countries such as those in Central and Eastern Europe, the newly independent states, and South Africa. On the other hand, the U.S. Ambassador to Germany placed more importance on the Fulbright and the Congress-Bundestag youth exchange programs than on other USIA exchange programs because they are bilateral.
Members of the German parliament with whom we met stated that U.S. government support of the exchanges proves its commitment to Germany. A senior advisor to the Parliament stated that Germany contributes more to these two exchange programs than the United States does and would view any decrease in U.S. support as a symbol of disengagement. In fiscal year 1995, Germany contributed $5.4 million to the Fulbright Program, while the U.S. government contributed $2.9 million. Table 4.1 shows U.S. and partner contributions to the Fulbright Program. In 1995, USIA had active bilateral agreements with 50 of the 148 countries participating in the Fulbright Program. These executive agreements establish binational commissions to administer the exchanges and commit both parties to support the program. USIA and others assert that the bilateral nature of the Fulbright Program makes it relevant to foreign policy and distinguishes it from other private and U.S.-funded academic exchange programs, despite the small number of students it supports relative to the total studying in the United States. For example, one of the binational commissions stated that the unique binational structure, governed and financed by representatives of both contracting countries, makes the program special among private and government programs. One option to reduce costs is to eliminate or reduce the exchanges with the least impact. For example, some officials believe that high school exchanges should not be funded when budgets are declining because they are more time-consuming and expensive and have less immediate impact than other exchanges. However, USIA has little data on the impact of exchanges that could be used to identify less effective programs.
As indicated in our June 1993 report, the Agency had devoted few resources to evaluating the effectiveness and relative importance of its programs because USIA believes exchange programs are inherently beneficial and achieve foreign policy goals by promoting mutual understanding, as stated in their enabling legislation. Past USIA evaluations were based mainly on anecdotal information, according to a USIA official in the Office of Policy and Evaluation. In recent years, officials involved in advising on or managing exchange programs have expressed the need for more evaluation. Furthermore, in 1995, the U.S. Advisory Commission on Public Diplomacy concluded that the United States lacks a strategic justification for federally funded exchanges. USIA acknowledged the need for more evaluations and in 1992 established the Office of Policy and Evaluation. However, the office's evaluations do not measure impact in terms of foreign policy goals. Instead, the more than 60 studies conducted attempted to measure the skills or knowledge the participants acquire as a result of the exchange and their use of those skills after they return to their countries. USIA officials have asserted that evaluating the impact of exchange programs is difficult. The Agency's present approach of assessing the skills and knowledge the participants acquire as a result of the exchanges is not useful for deciding which programs are the most successful in promoting U.S. foreign policy goals. According to an Office of Policy and Evaluation official, USIA is developing criteria and a methodology to better measure the degree to which exchanges are linked to foreign policy goals and objectives at a broad level, for example, to U.S. political and economic security. He said the office's goal is to develop evaluations that will support efforts to prioritize programs.
In addition to scaling back or eliminating specific exchanges, USIA can seek increased private sector and foreign government support to offset possible budget cuts. In fiscal year 1995, USIA received approximately $109 million in direct financial and other support from the private sector for exchanges. For example, private sector support for Fulbright students ranges from tuition waivers to endowments to airline tickets. The Institute of International Education, one of the organizations that manages the Fulbright student competitions for USIA, also solicits private sector support and in 1994 raised about $10 million for Fulbright students. An Institute official stated that the organization conducts these fund-raising efforts on its own initiative. He believes his organization could double the contributions from the private sector if USIA provided a small amount of resources for fund-raising activities. Furthermore, the Fulbright Foreign Scholarship Board has urged the binational commissions to increase their efforts to raise funds from the private sector within the countries they represent. The International Visitors Program, the Citizens Exchange Program, and the Arts America Program also receive support from the private sector. Support for the International Visitors Program is provided by 102 community-based voluntary organizations in 42 states, generally referred to as Councils of International Visitors. Although USIA has a budget of about $1.5 million to support the councils, the councils raise most of the money they need to fund their activities. Staff and volunteers from the councils arrange the local appointments for the visitors, accompany them, and arrange for home visits. During the home visits, volunteers arrange social events for the visitors and take them sightseeing. The home visits are often the highlight of the program for the visitors. 
For example, a visitor from eastern Germany stayed for 3 days with a farm family in Nebraska, where he said he encountered the genuine America. Requests for proposals for the Citizens Exchange and Arts America Programs contain cost-sharing provisions requiring that organizations seeking grants solicit other private sector support. The Arts America Program also solicits funds from foundations and corporations. Further, the program incorporates free commercial cultural activities whenever possible. For example, if a private organization funds a musical or theatrical group's trip to Brazil, USIA may solicit a free performance for its invited guests or fund the group's performance in a neighboring country. At the same time, USIA officials caution that opportunities for the private sector to assume total responsibility for USIA exchange programs may be limited. For example, USIA officials in Germany assert that an increased role for the private sector is not viable if a program, such as the International Visitors Program, is designed to meet immediate foreign policy goals or if a program is conducted under a bilateral agreement. They explained that the private sector cannot make commitments for the U.S. government. Further, they said foreign governments often prefer government-to-government relationships. Some former German international visitors said they did not believe the private sector would be able to arrange the same high-level meetings as the government. One participant from eastern Germany said he would not have participated in a program sponsored by the private sector because it might have commercial purposes. USIA could also reduce its costs by requiring more support from partner foreign governments. One of the goals of the Fulbright Program is that each binational partner provide an equal amount of support, but this goal has not been met. 
Although most of the 50 partner nations in the Fulbright Program provide program support, their contributions fall far short of matching that of the United States. In fiscal year 1995, for every dollar USIA spent on the Fulbright Program, the foreign governments spent 19 cents. Several binational partners (Austria, Belgium/Luxembourg, Denmark, Finland, Germany, Japan, Morocco, the Netherlands, Norway, Spain, and Sweden) contributed more to the program than the United States in fiscal year 1995. Table 4.2 shows fiscal year 1995 funding and foreign government and private sector support for each of the exchange programs. A USIA official stated that the Agency's efforts to encourage the partner nations to increase their contributions have resulted in increased contributions. For example, he said Italy recently made a commitment to double its contribution. The Philippines and Turkey have also made commitments to increase their support. The USIA official further stated that, although parity is the general goal, some binational agreements make no reference to equal funding; it depends on the resources of the country. He cautioned that it could be difficult for USIA to demand increases from the partner nations when the Agency is facing a substantial budget reduction and will have to decrease its own contribution. USIA agreed that exchange programs should not be concentrated in areas such as Western Europe, where non-USIA exchange opportunities are plentiful. As such, USIA indicated that it had been shifting resources to exchange programs in regions that are not as fully represented by other U.S. government agencies or the private sector. 
Additionally, USIA stated that (1) it continues to curtail or eliminate programs that must be sacrificed to address foreign policy priorities; (2) it has moved resources from Central America to Eastern Europe and the newly independent states of the former Soviet Union; and (3) it encourages host government and private sector cost-sharing of academic exchange programs and has been able to raise the level of funding from partner countries. The United States broadcasts over 1,600 hours of radio programming in 53 languages and over 400 hours of television in several languages worldwide each week to support U.S. foreign policy objectives. Appendix III explains each service and provides detailed information on cost, staff, broadcast hours, and audience. U.S. international broadcasting has undergone significant downsizing and restructuring. Nevertheless, a number of issues, such as language priorities, the proper mix of television and radio, and the role of the private sector, must be fully addressed to ensure the best use of limited, and possibly declining, funds. We offer some examples of the type of cuts the Broadcasting Board of Governors (BBG) could choose to implement if funding is reduced significantly. In the U.S. International Broadcasting Act of 1994, the Congress clearly tied international broadcasting to U.S. foreign policy objectives and reaffirmed the importance of continuing U.S. broadcasts to further U.S. interests. This legislation also directed the creation of a broadcasting service, in addition to VOA, to the People's Republic of China and other Asian countries that lack adequate sources of free information. To better coordinate programs and ensure adequate oversight, the legislation consolidated all nonmilitary international broadcasting under the bipartisan BBG, which was established within USIA; the Board's members were confirmed on August 11, 1995, and the Board met for the first time on September 6, 1995. 
The BBG is responsible for ensuring that broadcasts are consistent with broad U.S. foreign policy objectives as well as international telecommunications policies and treaty obligations; do not duplicate U.S. private or other democratic nations' broadcasts; reflect the highest professional standards of broadcast journalism, including providing reliable information; and are designed to reach a significant audience. BBG staff told us that they will review all U.S.-sponsored international broadcast entities and their various services to ensure that they meet these standards and principles. Eliminating a language service or reducing broadcast hours offers an immediate reduction in costs. For example, terminating RFE/RL's broadcasts in Romanian could cut costs by over $2 million; terminating RFE/RL's or VOA's Russian service could reduce annual funding by $7.8 million or $5 million, respectively. However, making decisions to cut languages or services is difficult because the consolidation of VOA and RFE/RL did not resolve questions regarding the relative importance and priority of the various languages and the appropriate mix of television and radio. Furthermore, the BBG has yet to develop a worldwide broadcasting strategy to address these issues. The United States broadcasts in many languages for a variety of reasons. USIA agrees that some languages are clearly more important to immediate U.S. interests than others. Some language services are maintained because of the interests of the Congress or the National Security Council. For example, in 1990 VOA proposed eliminating six language services (Greek, Uzbek, Turkish, Slovene, Swahili, and Lao) but decided to continue them because of congressional and other interest. Also, there is a belief that maintaining low-priority services is "insurance" against a time when those languages may become high priority. For example, VOA eliminated its Farsi language service in 1966, when Iran was a strong regional ally of the United States. 
After the fall of the Shah, VOA reinitiated a Farsi service in April 1979, but according to VOA officials it took years to develop a high-quality service and to rebuild a listening audience. More recently, Creole was deemed a low-priority language service. However, with U.S. intervention in Haiti, the language took on a very high priority. Therefore, the cost savings from eliminating lower priority language services must be measured against the risk of needing those language services in the future. The BBG is developing a plan to extensively review all language services and broadcast entities to determine their continued need and effectiveness. However, the BBG has not addressed the problem we noted in 1992 concerning the lack of timely and specific research data. We concluded that the lack of data hampered VOA's ability to make effective decisions on program content and resource allocations. Although extensive audience research is being conducted in Eastern Europe and the former Soviet Union, research is still lacking in other parts of the world. For example, an August 1994 report by the Office of Strategic Planning on the media climate in the Middle East and South Asia concluded that the "data on VOA listenership in the Middle East are uneven and largely out of date." To date, the BBG has reviewed the Amharic service to Ethiopia after receiving congressional complaints regarding the service. It then replaced the Amharic service with a new "Horn of Africa" service, which includes two other languages in addition to Amharic to reach more people in that region of Africa. The BBG considered input from both the National Security Council and congressional staff, as well as the best available audience research data, which came from the British Broadcasting Corporation. With tight budget constraints, difficult decisions may have to be made regarding television and radio broadcasts. 
Television is significantly more expensive than radio but is rapidly becoming the dominant broadcast medium throughout the world. The number of television sets worldwide is growing not only in industrialized western countries but also in China and developing countries. For this reason, the United States Advisory Commission on Public Diplomacy has recommended changing the mix of radio and television, with more funding for television. USIA research also supports reducing shortwave broadcasting and increasing television programming in many countries in Eastern Europe and the former Soviet Union. BBG staff informed us that they will be addressing this issue. VOA is experimenting with providing some television programming in cooperation with Worldnet. VOA's Mandarin service has initiated a new program, China Forum, a weekly panel discussion simultaneously broadcast live by radio and television. The quality of the television is not up to Worldnet's standards; nonetheless, judging by the call-in segment of the program, it is popular, since over 20 percent of the callers are watching the program. Worldnet, using VOA broadcasters, produces a half-hour weekly television program in Ukrainian, Window on America. In a nationwide survey conducted in Ukraine in the spring of 1995, about 66 percent of the respondents said they had heard of Window on America, one-half said they had watched the program at least once during the previous 2 months, and 6 percent had watched it every week during the last 2 months. This is comparable to weekly and occasional listening to VOA or RFE/RL radio programming. High-quality television, such as Worldnet's programming and TV Marti, is significantly more expensive than radio. Furthermore, USIA does not have as direct control over television transmission as it does over its shortwave radio. 
Regarding the cost issue, Window on America, a half-hour weekly program, costs about $790,000 annually to produce, whereas 2 hours of daily VOA Ukrainian radio broadcasting costs $1.3 million annually to produce. In the case of Radio and TV Marti, 24 daily hours of radio cost about $15 million, and 4-1/2 daily hours of television cost $13.3 million in 1995. Regarding control, most viewers access U.S. television programs through local television stations or local cable systems. Whoever controls the local cable system or station can censor or terminate the broadcasts. Few potential viewers have, or have access to, satellite dishes to receive the Worldnet signal directly. In the U.S. International Broadcasting Act of 1994, the Congress expressed the sense that RFE/RL would no longer receive federal funding after December 31, 1999. VOA is attempting to get private sector support for its programming and in some cases to divest itself of programs, such as VOA Europe and the Latin American Service. At this time, however, the broadcasters have met with limited success in interesting the private sector in assuming the costs and responsibilities of international broadcasting. RFE/RL determined that Poland and the Czech Republic were the two countries least in need of a U.S. surrogate radio station and offered the best potential for private support. At this time, the prospects that these spin-offs will actually occur are uncertain. RFE/RL had hoped that it could discontinue funding for the services by the end of calendar year 1995. However, RFE/RL estimated that for fiscal year 1996, it would still have to fund the services at a cost of $248,000 and $986,000, respectively. VOA has also had little success privatizing existing programs or language services. One potential partner for the Latin American service cited, among other concerns, the lack of an adequate market assessment of listeners. 
VOA officials acknowledge shortfalls in audience research but cite the cost associated with worldwide research. VOA and RFE/RL officials still hope that greater private sector support is attainable, but it does not appear that, at least in the near term, the private sector can replace the U.S. government as the prime supporter of this type of international broadcasting. Broadcast officials are concerned that several of their services, such as those services aimed at closed societies, will not be commercially profitable or in any way able to attract private sector support for some time. For example, VOA’s attempts to persuade a large corporation to fund broadcasts to China were unsuccessful, partly because the corporation is concerned about the Chinese government’s reaction to its funding VOA broadcasts. VOA has received some prizes and other promotional items from private businesses, but to date, such private support totals less than 1 percent of its appropriation. We identified areas where the study of existing overlap could yield management improvements and cost reductions. Further consolidation of broadcasting assets, such as newsrooms, overseas bureaus, and other offices, may result in additional cost reductions. For example, in fiscal year 1995 RFE/RL spent $919,000 on its Moscow bureau and just over $1.5 million on freelancers for the Russian service, while VOA spent about $725,000 on its Moscow bureau and $45,000 on freelancers in Russia. Also, program review functions could be consolidated. Currently, one office reviews all VOA, Worldnet, and Marti programs, while RFE/RL is developing its own program review capability to replace an office that was eliminated as part of its downsizing. Greater emphasis on placement of radio programs on local AM and FM radio stations has already reduced transmission costs while increasing audience levels. 
For example, VOA no longer broadcasts to the Baltic states by shortwave; rather, broadcasts are sent by satellite for placement on the respective national radio networks and some private radio stations. Elimination of shortwave broadcasting significantly reduces transmission costs, and audience survey data revealed that placement on local AM and FM stations increases the number of listeners because the quality and accessibility of the signal are better. For example, a listener survey in three cities in Senegal indicates a 4 to 7 percent listener rate for shortwave broadcasts of VOA French to Africa but 22 to 34 percent listener rates for the affiliate broadcasts. Terminating VOA’s English broadcasts to Asia on shortwave and mediumwave could reduce expenditures up to $2.5 million for transmission services. The trade-off to eliminating shortwave broadcasts is that in times of crisis, hostile forces could terminate the local broadcasts since the signal emanates from a location over which the United States has no control. Therefore, in many countries, such as Russia, VOA and RFE/RL have local affiliates that broadcast their programs, but they also continue to broadcast the programs directly over shortwave radio. VOA’s 6 daily hours of direct broadcasting to Russia cost almost $1 million annually to transmit. The Broadcasting Board of Governors characterized the report as welcome and timely and noted that GAO’s observations as well as options to reduce costs have been and will continue to be reviewed by the Board. The Congress uses the USIA budget as a means to transfer federal funds to certain grantees. Any USIA effort to curb spending cannot unilaterally include reductions to these grants. About $43.7 million, or 4 percent, of the fiscal year 1996 budget is for grants to the National Endowment for Democracy, the East-West Center, and the North/South Center. 
The National Endowment for Democracy was established in 1983 as a private, nonprofit organization to encourage free and democratic institutions throughout the world and promote U.S. nongovernmental participation through private sector initiatives. The Endowment received $34 million in fiscal year 1995 and $30 million in fiscal year 1996. By law, the Endowment is a grant-making organization only and cannot carry out programs directly. It provides grants primarily to the Center for International Private Enterprise, the Free Trade Union Institute, the International Republican Institute, and the National Democratic Institute. An example of how the Endowment's grant funds are used is the Free Trade Union Institute's trade union development activities in Russia, Hungary, and Romania. The East-West Center is an international educational institution established in 1960 to promote better relations and understanding among the nations of Asia, the Pacific, and the United States through cooperative study, training, and research. The Center conducts research, seminars, and workshops and supports undergraduate and graduate education. For fiscal year 1995, the Center received $24.5 million, but the Congress reduced funding to $11.7 million for fiscal year 1996. The North/South Center is a national educational institute closely affiliated with the University of Miami. Its mission is to promote better relations, commerce, and understanding among the nations of North America, South America, and the Caribbean. The Center began receiving a federal grant in 1991. Since that time, funding has been sharply reduced. The Center received $10 million in fiscal year 1991, $4 million in fiscal year 1995, and $2 million for fiscal year 1996.
Pursuant to a congressional request, GAO reviewed the U.S. Information Agency's (USIA) reform and cost-cutting efforts and options that could enable USIA to adjust to reduced budgets. GAO found that: (1) USIA believes that reaching out to foreign publics and telling America's story remains critical to U.S. foreign policy goals, and agency officials believe that further significant reductions could greatly hamper USIA's mission; (2) USIA believes it has undergone an extensive reorganization and downsizing, responsive to both U.S. foreign policy priorities and needs, as well as budget constraints; (3) new fiscal realities may force USIA to make additional choices about resource priorities and eliminate certain programs or locations of activities; (4) GAO believes that USIA could take steps to further reduce its costs, while continuing to protect U.S. interests, if fiscal conditions require; (5) to sustain a major reduction, USIA may have to consider closing more posts than it presently plans in countries where USIA has determined that the United States has limited public diplomacy goals; (6) another option would be to reconfigure USIA's overseas presence, which is currently based on a structure established after World War II; (7) Congress has already scaled back funding for some exchanges, but eliminating one or more exchanges, which would require Congress' approval, is also an option to reduce costs; (8) USIA exchanges permit the U.S. 
government to target potential leaders overseas, and consideration should be given to the potential impact that cutting exchanges would have on bilateral relationships with foreign countries; (9) soliciting increased foreign government and private-sector support is also an option to lessen USIA's costs; (10) Congress has reduced funding for all nonmilitary international broadcasting activities and mandated their consolidation; (11) modest economies are possible by eliminating overlap among broadcasters; and (12) any substantial funding cuts, however, would require major changes to the number of language services and broadcast hours, and past experience has shown that eliminating even one language is a difficult process, requiring concurrence from a wide range of interest groups and members of Congress.
As the primary federal agency that is responsible for protecting and securing GSA facilities and federal employees across the country, FPS has the authority to enforce federal laws and regulations aimed at protecting federally owned and leased properties and the persons on such property, and, among other things, to conduct investigations related to offenses against the property and persons on the property. To protect the over one million federal employees and about 9,000 GSA facilities from the risk of terrorist and criminal attacks, in fiscal year 2007, FPS had about 1,100 employees, of which 541, or almost 50 percent, were inspectors. FPS inspectors are primarily responsible for responding to incidents and demonstrations, overseeing contract guards, completing BSAs for numerous buildings, and participating in tenant agencies’ BSC meetings. About 215, or 19 percent, of FPS’s employees are police officers who are primarily responsible for patrolling GSA facilities, responding to criminal incidents, assisting in the monitoring of contract guards, responding to demonstrations at GSA facilities, and conducting basic criminal investigations. About 104, or 9 percent, of FPS’s 1,100 employees are special agents who are the lead entity within FPS for gathering intelligence for criminal and anti-terrorist activities, and planning and conducting investigations relating to alleged or suspected violations of criminal laws against GSA facilities and their occupants. FPS also has about 15,000 contract guards that are used primarily to monitor facilities through fixed post assignments and access control. According to FPS policy documents, contract guards may detain individuals who are being seriously disruptive, violent, or suspected of committing a crime at a GSA facility, but do not have arrest authority. 
The level of law enforcement and physical protection services FPS provides at each of the approximately 9,000 GSA facilities varies depending on the facility's security level. To determine a facility's security level, FPS uses the Department of Justice's (DOJ) Vulnerability Assessment Guidelines, which are summarized below. A level I facility has 10 or fewer federal employees; 2,500 or fewer square feet of office space; and a low volume of public contact or contact with only a small segment of the population. A typical level I facility is a small storefront-type operation, such as a military recruiting office. A level II facility has between 11 and 150 federal employees; more than 2,500 to 80,000 square feet; a moderate volume of public contact; and federal activities that are routine in nature, similar to commercial activities. A level III facility has between 151 and 450 federal employees; more than 80,000 to 150,000 square feet; and a moderate to high volume of public contact. A level IV facility has over 450 federal employees; more than 150,000 square feet; a high volume of public contact; and tenant agencies that may include high-risk law enforcement and intelligence agencies, courts, judicial offices, and highly sensitive government records. A level V facility is similar to a level IV facility in terms of the number of employees and square footage but contains mission functions critical to national security. FPS does not have responsibility for protecting any level V buildings. FPS is a reimbursable organization and is funded by collecting security fees from tenant agencies, referred to as a fee-based system. To fund its operations, FPS charges each tenant agency a basic security fee per square foot of space occupied in a GSA facility. 
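The size thresholds in the DOJ guidelines summarized above amount to a simple decision procedure. The sketch below is our own illustration, not an FPS or DOJ tool; it uses only the employee and square-footage thresholds and ignores the qualitative factors the guidelines also weigh, such as the volume of public contact and the national-security missions that distinguish a level V facility from a level IV facility.

```python
def facility_security_level(employees: int, square_feet: int) -> int:
    """Classify a facility using only the DOJ size thresholds.

    Illustrative sketch: the actual Vulnerability Assessment
    Guidelines also consider public contact and tenant mission,
    and level V cannot be inferred from size alone, so this
    sketch stops at level IV.
    """
    if employees <= 10 and square_feet <= 2500:
        return 1  # e.g., a storefront military recruiting office
    if employees <= 150 and square_feet <= 80000:
        return 2
    if employees <= 450 and square_feet <= 150000:
        return 3
    return 4  # over 450 employees or more than 150,000 square feet


# A small storefront office versus a large multi-tenant building:
print(facility_security_level(8, 2000))
print(facility_security_level(600, 200000))
```

A facility that exceeds either threshold for a level is pushed to the next level, which mirrors how the guidelines treat employees and square footage as joint criteria.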
In 2008, the basic security fee is 62 cents per square foot and covers services such as patrol, monitoring of building perimeter alarms and dispatching of law enforcement response through its control centers, criminal investigations, and BSAs. FPS also collects an administrative fee from tenant agencies for building-specific security services, such as access control at facilities' entrances and exits; employee and visitor checks; and the purchase, installation, and maintenance of security equipment, including cameras, alarms, magnetometers, and x-ray machines. In addition to these security services, FPS provides agencies with additional services upon request, which are funded through reimbursable Security Work Authorizations (SWA), for which FPS charges an administrative fee. For example, agencies may request additional magnetometers or more advanced perimeter surveillance capabilities. FPS faces several operational challenges, including decreasing staff levels, which has led to reductions in the law enforcement services that FPS provides. FPS also faces challenges in overseeing its contract guards, completing its BSAs in a timely manner, and maintaining security countermeasures. While FPS has taken steps to address these challenges, it has not fully resolved them. Providing law enforcement and physical security services to GSA facilities is inherently labor intensive and requires effective management of available staffing resources. However, since transferring from GSA to DHS, FPS's staff has declined, and the agency has managed its staffing resources in a manner that has reduced security at GSA facilities and may increase the risk of crime or terrorist attacks at many GSA facilities. Specifically, FPS's staff has decreased by about 20 percent, from almost 1,400 employees at the end of fiscal year 2004 to about 1,100 employees at the end of fiscal year 2007, as shown in figure 1. In fiscal year 2008, FPS initially planned to reduce its staff further. 
However, a provision in the 2008 Consolidated Appropriations Act requires FPS to increase its staff to 1,200 by July 31, 2008. In fiscal year 2010, FPS plans to increase its staff to 1,450, according to its Director. From fiscal year 2004 to 2007, the number of employees in each position also decreased, with the largest decrease occurring in the police officer position. For example, the number of police officers decreased from 359 in fiscal year 2004 to 215 in fiscal year 2007, and the number of inspectors decreased from 600 in fiscal year 2004 to 541 at the end of fiscal year 2007, as shown in figure 2. At many GSA facilities, FPS has eliminated the proactive patrols used to prevent or detect criminal violations. The FPS Policy Handbook states that patrol should be used to prevent crime and terrorist attacks. The elimination of proactive patrol has a negative effect on security at GSA facilities because law enforcement personnel cannot effectively monitor individuals who might be surveilling federal buildings, inspect suspicious vehicles (including potential vehicles for bombing federal buildings), and detect and deter criminal activity in and around federal buildings. While the number of contract guards employed at GSA facilities will not decrease and, according to an FPS policy document, the guards are authorized to detain individuals, most guards are stationed at fixed posts that they are not permitted to leave, and they do not have arrest authority. According to some regional officials, some contract guards do not exercise their detention authority because of liability concerns. According to several inspectors and police officers in one FPS region, proactive patrol is important in their region because, in the span of one year, there were 72 homicides within 3 blocks of a major federal office building and because most of the crime in their area takes place after hours, when there are no FPS personnel on duty. 
In addition, FPS officials at several regions we visited said that proactive patrol has, in the past, allowed FPS police officers and inspectors to identify and apprehend individuals who were surveilling GSA facilities. In contrast, when FPS is not able to patrol federal buildings, there is increased potential for illegal entry and other criminal activity at federal buildings. For example, in one city we visited, a deceased individual had been found in a vacant GSA facility that was not regularly patrolled by FPS. FPS officials stated that the deceased individual had been inside the building for approximately three months. In addition, more recently, at this same facility, two individuals who fled into the facility after being pursued by the local police department for an armed robbery were subsequently apprehended and arrested by the local police. While the local police department contacted FPS for assistance with responding to the incident at the federal facility, FPS inspectors were advised by senior FPS supervisors not to assist the local police department in their search for the suspects because GSA had not paid the security fee for the facility. In addition to eliminating proactive patrol, many FPS regions have reduced their hours of operation for providing law enforcement services in multiple locations, which has resulted in a lack of coverage when most federal employees are either entering or leaving federal buildings or on weekends when some facilities remain open to the public. Moreover, FPS police officers and inspectors in two cities explained that this lack of coverage has left some federal day care facilities vulnerable to loitering by homeless individuals and drug users. The decrease in FPS's duty hours has also jeopardized police officer and inspector safety, as well as building security. 
Some FPS police officers and inspectors said that they are frequently in dangerous situations without any FPS backup because many FPS regions have reduced their hours of operation and overtime. Contract guard inspections are important for several reasons, including ensuring that guards comply with contract requirements, hold up-to-date certifications for required training such as firearms and cardiopulmonary resuscitation, and perform their assigned duties. While FPS policy does not specify how frequently guard posts should be inspected, we found that some posts are inspected less than once per year, in part because contract guards are often posted in buildings hours or days away from the nearest FPS inspector. For example, one area supervisor reported guard posts that had not been inspected in 18 months, while another reported posts that had not been inspected in over a year. In another region, FPS inspectors and police officers reported that managers told them to complete guard inspections over the telephone instead of in person. In addition, when inspectors do perform guard inspections, they do not visit the post during each shift; consequently, some guard shifts may never be inspected by an FPS official. As a result, some guards may be supervised exclusively by a representative of the contract guard company. Moreover, in one area we visited with a large FPS presence, officials reported difficulty in getting to every post within that region's required one-month period. We obtained a copy of a contract guard inspection schedule in one metropolitan city showing that only 20 of 68 post inspections had been completed for the month. Some tenant agencies have also noticed a decline in the level of guard oversight in recent years and believe this has led to poor performance on the part of some contract guards. 
For example, according to Federal Bureau of Investigation (FBI) and GSA officials in one of the regions we visited, contract guards failed to report the theft of an FBI surveillance trailer worth over $500,000, even though security cameras captured the trailer being stolen while guards were on duty. The FBI did not realize the trailer was missing until three days later, and only after the FBI started making inquiries did the guards report the theft to FPS and the FBI. In another incident, FPS officials reported that armed contract guards took no action as a shirtless suspect wearing handcuffs on one arm ran through the lobby of a major federal building while being chased by an FPS inspector. In addition, one official reported that, when responding to an off-hours alarm call at a federal building, the official arrived to find the front guard post empty and the guard's loaded firearm left unattended in the unlocked post. We also personally witnessed an incident in which an individual attempted to enter a level IV facility with illegal weapons. According to FPS policies, contract guards are required to confiscate illegal weapons, detain and question the individual, and notify FPS. In this instance, the weapons were not confiscated, the individual was not detained or questioned, FPS was not notified, and the individual was allowed to leave with the weapons. We will shortly begin a comprehensive review of FPS's contract guard program for this Subcommittee and other congressional committees. Building security assessments (BSA), which are completed by both inspectors and physical security specialists, are the core component of FPS's physical security mission. However, ensuring their quality and timeliness is an area in which FPS continues to face challenges. The majority of inspectors in the seven regions we visited stated that they are not provided sufficient time to complete BSAs. 
For example, while FPS officials have stated that BSAs for level IV facilities should take between two and four weeks to complete, several inspectors reported having only one or two days to complete assessments for their buildings. They reported that this was due to pressure from supervisors to complete BSAs as quickly as possible. For example, one region is attempting to complete more than 100 BSAs by June 30, 2008, three months earlier than required, because staff will be needed to assist with a large political event in the region. In addition, one inspector in this region reported having one day to complete site work for six BSAs in a rural state in the region. Some regional supervisors have also found problems with the accuracy of BSAs. One regional supervisor reported that an inspector was repeatedly counseled and required to redo BSAs when supervisors found he was copying and pasting from previous BSAs. Similarly, one regional supervisor stated that, in the course of reviewing a BSA for an address he had personally visited, he realized that the inspector completing the BSA had falsified information and had not actually visited the site, because the inspector referred to a large building when the actual site was a vacant plot of land owned by GSA. In December 2007, the Director of FPS issued a memorandum emphasizing the importance of conducting BSAs in an ethical manner. FPS's ability to ensure the quality and timeliness of BSAs is also complicated by challenges with the current risk assessment tool it uses to conduct BSAs, the Federal Security Risk Manager (FSRM) system. We have previously reported three primary concerns with this system. First, it does not allow FPS to compare risks from building to building so that security improvements can be prioritized. Second, current risk assessments need to be categorized more precisely. 
According to FPS, too many BSAs are categorized as high or low risk, which does not allow for a refined prioritization of security improvements. Third, the system does not allow for tracking the implementation status of security recommendations based on assessments. According to FPS, GSA, and tenant agency officials in the regions we visited, some security countermeasures, such as security cameras, magnetometers, and X-ray machines at some facilities, as well as some FPS radios and BSA equipment, have been broken for months or years and are poorly maintained. At one level IV facility, FPS and GSA officials stated that only 11 of 150 security cameras were fully functional and able to record images. Similarly, at another level IV facility, a large camera project designed to expand and enhance an existing camera system was put on hold because FPS did not have the funds to complete it. FPS officials stated that broken cameras and other security equipment can negate the deterrent effect of these countermeasures as well as eliminate their usefulness as an investigative tool. For example, according to FPS, it has investigated significant crimes at multiple level IV facilities, but some of the security cameras installed in those buildings were not working properly, preventing FPS investigators from identifying the suspects. Complicating this issue, FPS officials, GSA officials, and tenant representatives stated that additional countermeasures are difficult to implement because they require approval from building security committees (BSC), which are composed of representatives from each tenant agency who generally are not security professionals. In some of the buildings we visited, security countermeasures were not implemented because BSC members could not agree on which countermeasures to implement or were unable to obtain funding from their agencies. 
For example, an FPS official in a major metropolitan city stated that, over the last 4 years, inspectors have repeatedly recommended 24-hour contract guard coverage at one high-risk building located in a high-crime area; however, the BSC has not been able to obtain approval from all of its members. In addition, several FPS inspectors stated that their regional managers have instructed them not to recommend security countermeasures in BSAs if FPS would be responsible for funding the measures, because there is not sufficient money in regional budgets to purchase and maintain the security equipment. According to FPS, it has a number of ongoing efforts designed to address some of its longstanding challenges. For example, in 2007, FPS decided to adopt an inspector-based workforce approach to protect GSA facilities. Under this approach, the composition of FPS's workforce will change from a combination of inspectors and police officers to mainly inspectors. Inspectors will be required to complete law enforcement activities, such as patrolling and responding to incidents at GSA facilities, concurrently with their physical security activities. FPS will also place more emphasis on physical security activities, such as BSAs, and less emphasis on the law enforcement part of its mission; contract guards will continue to be the front-line defense at GSA facilities; and there will be a continued reliance on local law enforcement. According to FPS, an inspector-based workforce will help it achieve its strategic goals, such as ensuring that its staff has the right mix of technical skills and training needed to accomplish its mission and building effective relationships with its stakeholders. However, the inspector-based workforce approach presents some additional challenges for FPS. For example, the approach de-emphasizes law enforcement responsibilities such as proactive patrol. 
Reports issued by multiple government entities acknowledge the importance of proactive patrol in detecting and deterring terrorist surveillance teams, which use information such as the placement of armed guards and proximity to law enforcement stations when choosing targets and planning attacks. Active law enforcement patrols in and around federal facilities can potentially disrupt these sophisticated surveillance and research techniques. In addition, having inspectors perform both law enforcement and physical security duties simultaneously may prevent some inspectors from responding to criminal incidents in a timely manner and from patrolling federal buildings. FPS stated that entering into memorandums of agreement with local law enforcement agencies was an integral part of the inspector-based workforce approach because the agreements would ensure law enforcement response capabilities at facilities when needed. According to FPS's Director, however, the agency recently decided not to pursue memorandums of agreement with local law enforcement agencies, in part because of reluctance on the part of local law enforcement officials to sign them. In addition, FPS believes that the agreements are not necessary because 96 percent of the properties in its inventory are listed as concurrent jurisdiction facilities, where both the federal and state governments have jurisdiction over the property. Nevertheless, the agreements would clarify the roles and responsibilities of local law enforcement agencies when responding to crime or other incidents. Moreover, FPS also provides facility protection at approximately 400 properties where the federal government maintains exclusive federal jurisdiction. Under exclusive federal jurisdiction, the federal government has all of the legislative authority within the land area in question, and the state has no residual police powers. 
Furthermore, state and local law enforcement officials are not authorized to enforce state and local laws, or federal laws and regulations, at exclusive federal jurisdiction facilities. According to ICE's legal counsel, if the Secretary of Homeland Security utilized the facilities and services of state and local law enforcement agencies, state and local officials would only be able to assist FPS with functions such as crowd and traffic control, monitoring law enforcement communications and dispatch, and training. Memorandums of agreement between FPS and local law enforcement agencies would help address the jurisdictional issues that prevent local agencies from providing assistance at facilities with exclusive federal jurisdiction. As an alternative to memorandums of agreement, according to FPS's Director, the agency will rely on the informal relationships that exist between local law enforcement agencies and FPS. However, whether these informal relationships will provide FPS with the type of assistance it will need under the inspector-based workforce is unknown. Officials from five of the eight local law enforcement agencies we interviewed stated that their agencies did not have the capacity to take on the additional job of responding to incidents at federal buildings and that their departments were already strained for resources. FPS and local law enforcement officials in the regions we visited also stated that jurisdictional authority would pose a significant barrier to gaining the assistance of local law enforcement agencies. Representatives of local law enforcement agencies also expressed concerns about being prohibited from entering GSA facilities, especially courthouses, with service weapons. Similarly, local law enforcement officials in a major city stated that they cannot make an arrest or initiate a complaint on federal property, so they have to wait until an FPS officer or inspector arrives. 
FPS has also begun recruiting an additional 150 inspectors to address its operational challenges and reach the staffing level mandated in the fiscal year 2008 Consolidated Appropriations Act. According to the Director of FPS, adding 150 inspectors to its current workforce will allow FPS to resume providing proactive patrol and a 24-hour presence at some facilities, based on risk and threat levels. However, these additional 150 inspectors will be assigned to eight of FPS's 11 regions and thus will have no impact on the three regions that will not receive them. In addition, while this increase will help FPS achieve its mission, the resulting staffing level is still below the 1,279 employees FPS had at the end of fiscal year 2006, when, according to FPS officials, tenant agencies experienced a decrease in service. FPS's Risk Management Division is also developing a new tool, the Risk Assessment Management Program (RAMP), to replace its current system (FSRM) for completing BSAs. According to FPS, a pilot version of RAMP is expected to be rolled out in fiscal year 2009. RAMP will be accessible to inspectors via a secure wireless connection anywhere in the United States and will guide them through the process of completing a BSA to ensure that standardized information is collected on all GSA facilities. According to FPS, once implemented, RAMP will allow inspectors to obtain information from one source, generate reports automatically, enable the agency to track selected countermeasures throughout their life cycle, address some issues with the subjectivity of BSAs, and reduce the amount of time inspectors and managers spend on administrative work. FPS funds its operations through the collection of security fees charged to tenant agencies for security services. However, until recently these fees have not been sufficient to cover its projected operational costs. FPS has addressed this gap in a variety of ways. 
When FPS was located in GSA, it received additional funding from the Federal Buildings Fund to cover the gap between collections and costs. Since transferring to DHS, FPS has instituted a number of cost-saving measures, including restricted hiring and travel, limited training and overtime, and no employee performance awards, to make up for projected shortfalls, ensure that security at GSA facilities would not be jeopardized, and avoid a potential Antideficiency Act violation in fiscal year 2005. In addition, in fiscal year 2006, DHS had to transfer $29 million in emergency supplemental funding to FPS. FPS also increased the basic security fee charged to tenant agencies from 35 cents per square foot in fiscal year 2005 to 62 cents per square foot in fiscal year 2008. Because of these actions, fiscal year 2007 was the first year in which FPS's collections were sufficient to cover its costs, and FPS projects that collections will also cover its costs in fiscal year 2008. In fiscal year 2009, FPS's basic security fee will increase to 66 cents per square foot, the fourth increase in the basic security fee since FPS transferred to DHS. However, according to FPS, its cost-saving measures have had adverse consequences, including low morale among staff, increased attrition and the loss of institutional knowledge, and difficulties in recruiting new staff. In addition, several FPS police officers and inspectors said that overwhelming workloads, uncertainty surrounding their job security, and a lack of equipment have diminished morale within the agency. These working conditions could affect the performance and safety of FPS personnel. FPS officials said the agency has lost many of its most experienced law enforcement staff in recent years, and several police officers and inspectors said they were actively looking for new jobs outside FPS. 
For example, FPS reports that 73 inspectors, police officers, and physical security specialists left the agency in fiscal year 2006, representing about 65 percent of the agency's total attrition for that year. Attrition rates increased steadily from fiscal year 2004 through fiscal year 2007, as shown in figure 3. For example, FPS's overall attrition rate rose from about 2 percent in fiscal year 2004 to about 14 percent in fiscal year 2007. The attrition rate for the inspector position has also increased, despite FPS's plan to move to an inspector-based workforce. FPS officials said the agency's cost-saving measures helped it address projected revenue shortfalls; the measures were eliminated in fiscal year 2008, and, according to FPS, they will not be necessary in fiscal year 2009 because the basic security fee has been increased and staffing has decreased. FPS's primary means of funding its operations is the fee it charges tenant agencies for basic security services, as shown in figure 4. The basic security services covered by this fee include, among others, law enforcement activities at GSA facilities, preliminary investigations, the capture and detention of suspects, and BSAs; the fee does not cover contract guard services. However, this fee does not fully account for the risk faced by particular buildings or the varying levels of basic security services provided, and it does not reflect the actual cost of providing services. In fiscal year 2008, FPS charged 62 cents per square foot for basic security and has been authorized to increase the rate to 66 cents per square foot in fiscal year 2009. FPS charges federal agencies the same basic security fee regardless of the perceived threat to a particular building or agency. Although FPS categorizes buildings into security levels based on its assessment of each building's risk and size, this categorization does not affect the security fee FPS charges. 
For example, level I facilities typically face less risk because they are generally small storefront-type operations with a low level of public contact, such as a small post office or Social Security Administration office. Yet these facilities are charged the same basic security fee of 62 cents per square foot as a level IV facility that has a high volume of public contact and may contain high-risk law enforcement and intelligence agencies and highly sensitive government records. In addition, FPS's basic security rate has raised questions about equity because federal agencies are required to pay the fee regardless of the level of service FPS provides or the cost of providing it. For instance, in some of the regions we visited, FPS officials described situations in which staff are stationed hundreds of miles from buildings under their responsibility. Many of these buildings rarely receive services from FPS staff and rely mostly on local police for law enforcement services. Nonetheless, FPS charges these tenant agencies the same basic security fee as buildings in major metropolitan areas in which numerous FPS police officers and inspectors are stationed and available to provide security services. FPS's cost of providing services is also not reflected in its basic security charges. For instance, a June 2006 FPS workload study that estimated the amount of time spent on various security services showed differences in the amount of resources dedicated to buildings at various security levels: FPS staff spend approximately six times more hours providing security services to higher-risk buildings (levels III and IV) than to lower-risk buildings (levels I and II). In addition, a 2007 Booz Allen Hamilton report on FPS's operational costs found that FPS does not link the actual cost of providing basic security services with the security fees it charges tenant agencies. 
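The flat-fee arithmetic described above can be sketched in a few lines. This is an illustrative sketch only: the 62-cents-per-square-foot rate and the six-times workload finding come from the testimony, while the building sizes and the function name are assumptions introduced for the example.

```python
# Illustrative sketch of FPS's flat basic security fee (FY2008).
# Rate from the testimony: 62 cents per square foot, charged to every
# building regardless of its security level. Building sizes below are
# hypothetical, chosen only to show the equity issue.

FY2008_RATE = 0.62  # dollars per square foot, flat for all security levels

def basic_security_fee(square_feet, rate=FY2008_RATE):
    """Fee charged to a tenant agency; independent of building risk level."""
    return square_feet * rate

# Two hypothetical buildings of equal size but very different risk:
low_risk_fee = basic_security_fee(100_000)   # level I storefront office
high_risk_fee = basic_security_fee(100_000)  # level IV high-contact facility

# The fees are identical, even though the 2006 workload study found that
# higher-risk buildings consume roughly six times the service hours.
assert low_risk_fee == high_risk_fee
assert abs(low_risk_fee - 62_000) < 1e-6
```

The point of the sketch is that the fee is a function of square footage alone; neither the building's security level nor the hours of service actually delivered appears anywhere in the calculation, which is the gap the Booz Allen Hamilton report criticizes.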
The report recommends incorporating a security fee that takes into account the complexity or level of effort of the services performed for higher-security facilities, and it states that FPS's failure to consider the costs of protecting buildings at varying risk levels could result in some tenants being overcharged. We have also reported that basing government fees on the cost of providing a service promotes equity, especially when the cost of providing the service differs significantly among users, as is the case with FPS. Several stakeholders have raised questions about whether FPS has an accurate understanding of the cost of providing security at GSA facilities. An official from ICE's chief financial office said FPS has experienced difficulty in estimating its costs because of inaccurate cost data. In addition, OMB officials said they have asked FPS in past years to develop a better cost accounting system. The 2007 Booz Allen Hamilton report found that FPS does not have a methodology for assigning costs to its different security activities and that it should begin capturing the cost of providing various security services in order to better plan, manage, and budget its resources. We have also previously cited problems with ICE's and FPS's financial systems, including problems associated with tracking expenditures. Moreover, we have previously reported on the importance of having accurate cost information for budgetary purposes and for setting fees and prices for services; without accurate cost information, it is difficult for agencies to determine whether fees need to be increased or decreased, accurately measure performance, and improve efficiency. 
To determine how well it is accomplishing its mission to protect GSA facilities, FPS has identified some output measures, such as whether security countermeasures have been deployed and are fully operational, the amount of time it takes to respond to an incident, and the percentage of BSAs completed on time. Output measures assess activities, not the results of those activities. FPS has not, however, developed outcome measures to evaluate the results and net effect of its efforts to protect GSA facilities. While output measures are helpful, outcome measures are also important because they can provide FPS with broader information on program results, such as the extent to which its decision to move to an inspector-based workforce will enhance security at GSA facilities, or help identify the security gaps that remain at GSA facilities and determine what action may be needed to address them. The Government Performance and Results Act requires federal agencies to, among other things, measure agency performance in achieving outcome-oriented goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving performance. In addition, we and other federal agencies have maintained that adequate and reliable performance measures are a necessary component of effective management. We have also found that performance measures should provide agency managers with timely, action-oriented information in a format conducive to helping them make decisions that improve program performance, including decisions to adjust policies and priorities. FPS is also limited in its ability to assess the effectiveness of its efforts to protect GSA facilities, in part because it does not have a data management system that can provide complete and accurate information on its security program. 
Without a reliable data management system, it is difficult for FPS and others to determine the effectiveness of FPS's efforts to protect GSA facilities, or for FPS to accurately track and monitor incident response times, the effectiveness of security countermeasures, and whether BSAs are completed on time. Currently, FPS primarily uses the Web Records Management System (WebRMS) and the Security Tracking System to track and monitor output measures. However, FPS acknowledged that there are weaknesses in these systems that make it difficult to accurately track and monitor its performance. In addition, according to many FPS officials at the seven regions we visited, the data maintained in WebRMS may not be a reliable and accurate indicator of crimes and other incidents because FPS does not write an incident report for every incident, not all incidents are entered into WebRMS, and the types and definitions of items prohibited in buildings vary not only region by region but also building by building. For example, a can of pepper spray may be prohibited in one building but allowed in another building in the same region. According to FPS, having fewer police officers has also decreased the total number of crime and incident reports entered in WebRMS because less time is spent on law enforcement activities. Officials in one FPS region we visited stated that two years ago 25,000 reports were filed through WebRMS, but this year they project about 10,000 reports because there are fewer FPS police officers to respond to incidents and write reports when necessary. In conclusion, Madam Chair, our work shows that FPS has faced and continues to face multiple challenges in ensuring that GSA facilities, their occupants, and visitors are protected from crime and the risk of terrorist attack. 
In the report we issued last week, we recommended that the Secretary of Homeland Security direct the Director of FPS to develop and implement a strategic approach to managing its staffing resources; clarify the roles and responsibilities of local law enforcement agencies with regard to responding to incidents at GSA facilities; improve FPS's use of the fee-based system by developing a method to accurately account for the cost of providing security services to tenant agencies; assess whether FPS's current fee-based system or an alternative funding mechanism is the most appropriate way to fund the agency; and develop and implement specific guidelines and standards for measuring its performance, including the collection and analysis of data. DHS concurred with these recommendations, and we are encouraged that FPS is in the process of addressing them. This concludes our testimony. We are pleased to answer any questions you might have. For further information on this testimony, please contact Mark Goldstein at 202-512-2834 or by email at [email protected]. Individuals making key contributions to this testimony include Daniel Cain, Tammy Conquest, Colin Fallon, Katie Hamer, Daniel Hoy, and Susan Michal-Smith. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
The Federal Protective Service (FPS) is responsible for providing physical security and law enforcement services to about 9,000 General Services Administration (GSA) facilities. To accomplish its mission of protecting GSA facilities, FPS currently has an annual budget of about $1 billion, about 1,100 employees, and 15,000 contract guards located throughout the country. GAO was asked to provide information and analysis on the challenges FPS faces, including ensuring that it has sufficient staffing and funding resources to protect GSA facilities and the over one million federal employees and members of the public who work in and visit them each year. GAO discusses (1) FPS's operational challenges and the actions it has taken to address them, (2) funding challenges, and (3) how FPS measures the effectiveness of its efforts to protect GSA facilities. This testimony is based on our recently issued report (GAO-08-683) to this Subcommittee. FPS faces several operational challenges that hamper its ability to accomplish its mission, and the actions it has taken may not fully resolve them. FPS's staff decreased by about 20 percent from fiscal years 2004 through 2007. FPS has also decreased or eliminated law enforcement services, such as proactive patrol, in many locations. Moreover, FPS has not resolved longstanding challenges, such as improving oversight of its contract guard program, maintaining security countermeasures, and ensuring the quality and timeliness of building security assessments (BSA). For example, one regional supervisor stated that, while reviewing a BSA for an address he had personally visited, he realized that the inspector completing the BSA had falsified the information, because the inspector referred to a large building when the actual site was a vacant plot of land owned by GSA. 
To address some of these operational challenges, FPS is currently changing to an inspector-based workforce, which seeks to eliminate the police officer position and rely primarily on FPS inspectors for both law enforcement and physical security activities. FPS is also hiring an additional 150 inspectors. However, these actions may not fully resolve the challenges FPS faces, in part because the approach de-emphasizes law enforcement responsibilities. Until recently, the security fees FPS charged to agencies were not sufficient to cover its costs, and the actions it has taken to address the shortfalls have had adverse implications. For example, the Department of Homeland Security (DHS) transferred emergency supplemental funding to FPS, and FPS restricted hiring and limited training and overtime. According to FPS officials, these measures have had a negative effect on staff morale and are partially responsible for FPS's high attrition rates. FPS has been authorized to increase the basic security fee four times since it transferred to DHS in 2003 and currently charges tenant agencies 62 cents per square foot for basic security services. Because of these actions, FPS's collections in fiscal year 2007 were sufficient to cover costs, and FPS projects that collections will also cover costs in fiscal year 2008. However, FPS's primary means of funding its operations--the basic security fee--does not account for the risk faced by buildings, the level of service provided, or the cost of providing services, raising questions about equity. Stakeholders also expressed concern about whether FPS has an accurate understanding of its security costs. FPS has developed output measures, but lacks outcome measures to assess the effectiveness of its efforts to protect federal facilities. Its output measures include determining whether security countermeasures have been deployed and are fully operational. 
However, FPS does not have measures to evaluate its efforts to protect federal facilities that could provide FPS with broader information on program outcomes and results. FPS also lacks a reliable data management system for accurately tracking performance measures. Without such a system, it is difficult for FPS to evaluate and improve the effectiveness of its efforts, allocate its limited resources, or make informed risk management decisions.
Taxol is currently used to treat several types of cancer, including advanced ovarian and breast cancer, certain lung cancers (non-small cell) in patients who cannot have surgery or radiation therapy, and AIDS-related Kaposi’s sarcoma. The bioactive compound in Taxol was first extracted from the bark of the slow-growing Pacific yew tree Taxus brevifolia in the 1960s. Following this discovery, the drug was developed primarily through research funded by NIH, and then transferred to the private sector and successfully commercialized by BMS. The 1991 NIH-BMS CRADA was one of the first CRADAs to result in a breakthrough drug. The groundwork for the public-private partnership that fostered the success of Taxol was laid in 1980. Prior to that time, the government generally retained title to any inventions created under federal research grants and contracts. This situation became a source of dissatisfaction because of a general belief that the results of government-owned research were not being made widely available for the public’s benefit. For example, there were concerns that biomedical and other technological advances resulting from federally funded research at universities were not leading to new products because the universities had little incentive to seek uses for inventions to which the government held title. In 1980, the Congress passed two landmark pieces of legislation—the Stevenson-Wydler Technology Innovation Act of 1980 and the Bayh-Dole Act—with the intent of promoting economic development, enhancing U.S. competitiveness, and benefiting the public by encouraging the commercialization of technologies developed with federal funding. Although the acts have common objectives, the Stevenson-Wydler Act focuses on inventions owned by the federal government, while the Bayh-Dole Act focuses on inventions created under federal contracts, grants, and cooperative research and development agreements. 
Under the Stevenson-Wydler Act, inventions owned by the government remain the property of the agencies that produce them. However, the act as amended sets out guidelines and priorities that encourage commercialization of these inventions through the licensing of technology to U.S. business. In 1986, the Federal Technology Transfer Act amended the Stevenson-Wydler Act and enhanced the authority of federal agencies in this area, authorizing them to enter into CRADAs with nonfederal partners to conduct research. The Bayh-Dole Act authorizes federal agencies to execute license agreements with commercial entities to promote the development of federally owned inventions, and to collect royalties for such licenses. The act also gives small businesses, universities, and other nonprofit organizations the right to retain title to and profit from the inventions arising from their federally funded research, provided they adhere to certain requirements. In 1983, a presidential memorandum extended this patent policy to large businesses. The act also contains several provisions to protect the public’s interest in commercializing federally funded inventions, such as a requirement that a contractor or grantee that retains title to a federally funded invention file for patent protection and attempt commercialization. In return, the government retains the right to use the inventions without paying royalties. In general, most biomedical inventions are not final end products; therefore, the government’s rights would not extend to a final product. NIH, with a budget of over $23 billion in fiscal year 2002, is the principal federal agency that conducts and funds biomedical research, including research on drugs. Within NIH, the Office of Technology Transfer (OTT) is responsible for licensing the inventions of NIH employees to the private sector for development to benefit the public health. OTT oversees patent prosecution, negotiates and monitors licensing agreements, and provides oversight and central policy review of CRADAs. 
NIH’s stated goals with regard to the technology transfer process are, in order of priority, to foster scientific discoveries, to facilitate the rapid transfer of discoveries to the bedside, to make resulting products accessible to patients, and to earn income. NIH has broad authority under the statutes described above to negotiate agreements with outside partners in pursuit of its technology transfer goals. NIH scientists and laboratories, scientists and laboratories in academia or other research institutions that receive public funding, and industry researchers are often all involved in the development of pharmaceuticals. Usually, government and academic scientists conduct basic research on the biology of a disease and identify compounds, methods, and chemical reactions and pathways that may be of value in treating disease. They also conduct preclinical and clinical testing of drugs (phase 1 and 2 trials). Industry conducts more extensive clinical trials (phase 3 trials) and markets the drugs, although there is some overlap in these roles. NIH’s overall mission and authority, as well as the requirements of the Federal Food, Drug, and Cosmetic Act, suggest that NIH cannot sponsor a drug through FDA’s new drug application (NDA) process. This act requires those who submit NDAs to FDA to provide “a full description of the methods used in, and the facilities and controls used for, the manufacture, processing, and packing, of such drug.” While NIH conducts its own research and funds biomedical research at other institutions, it does not have a manufacturing, processing, or packing facility. NIH can, however, license inventions directly to pharmaceutical firms without the necessity of working through a CRADA. For example, NIH officials told us that of the 16 drugs and vaccines currently approved by FDA that contain an NIH technology, only 3 involved a CRADA. 
To attract private-sector partners, NIH publicizes the availability of technologies that it seeks to license directly. NIH officials told us that it has entered into CRADAs with private-sector partners in at least two other cases that were similar to paclitaxel—naturally occurring substances for which shortages had limited NIH’s ability to conduct research. The Public Health Service (PHS) created a model CRADA because the Federal Technology Transfer Act of 1986 provided few specifics about the CRADA process. In general, the model CRADA sets forth the policies of NIH and other PHS agencies on various aspects of cooperative research and intellectual property licensing that derive from the Federal Technology Transfer Act. The model CRADA has been updated several times over the years. The 1991 CRADA between NIH and BMS referred to a March 27, 1989, version of the model CRADA. The 1989 model CRADA stated that NIH would be willing to grant exclusive licenses to its CRADA collaborators. The 1989 model CRADA also contained a provision known as the “reasonable price clause.” It stated that PHS has “a concern that there be a reasonable relationship between the pricing of a licensed product, the public investment in that product, and the health and safety needs of the public. Accordingly, exclusive commercialization licenses granted for intellectual property rights may require that this relationship be supported by reasonable evidence.” NIH dropped the reasonable pricing clause in 1995, and the current version of the model CRADA no longer has any stipulation regarding the pricing of products that are developed under the CRADA. Under federal law and NIH policy, royalty income from license agreements is shared between the inventors and the institute or center within NIH in which the technology was developed. NIH uses the royalties for multiple purposes that contribute to the technology transfer program and the research of its laboratories. 
Specifically, the royalty payments can be used to (1) reward employees of the laboratory, (2) further scientific exchange among the laboratories of the agency, (3) educate and train employees of the agency or laboratory, (4) support other activities that increase the potential for transfer of the technology of the laboratories of the agency, (5) pay expenses incidental to the administration and licensing of intellectual property by the agency or laboratory, and (6) support scientific research and development consistent with the research and development missions and objectives of the laboratory. Federal laws also generally prohibit agencies from disclosing information that concerns or relates to trade secrets, processes, operations, statistical information, and related information. Therefore, the federal technology transfer process that NIH engages in with the private sector is not entirely transparent to the general public, nor are the details of the negotiations and agreements that NIH makes with industry partners publicly known. However, information may be disclosed to those who have oversight authority over the agencies that generate such information, such as the Congress and its oversight bodies. In this way, information about the details of the federal investment and return on investment in the commercialization of a drug like Taxol can be examined for policymaking purposes. NIH played a role in both basic and clinical research leading to the development and use of Taxol. In 1958, NCI, a component of NIH, initiated the Natural Products Program, which screened 35,000 plant species for anticancer activity. Researchers at the Research Triangle Institute found in 1963 that an extract from the bark of the Pacific yew tree had antitumor activity, and they isolated the compound paclitaxel from the bark in 1971. In 1979, scientists at Albert Einstein College of Medicine discovered how paclitaxel works to prevent cell division. 
In 1983, NCI filed an investigational new drug application (IND) with FDA to initiate clinical trials of paclitaxel. The IND was approved, and phase 1 trials began. In 1985, NCI began funding phase 2 clinical trials. By 1989, two studies of paclitaxel’s effect on ovarian cancer had demonstrated positive results. In August 1989, NIH announced in a Federal Register notice that it was seeking a pharmaceutical company that could develop paclitaxel to a marketable status. The notice stated that paclitaxel could not be patented. Instead, NIH offered a potential CRADA partner the exclusive rights to the source data from its clinical trials. Although 20 commercial firms replied to the announcement, only 4 companies, BMS among them, decided to apply for the CRADA opportunity. NIH chose BMS as its CRADA partner, and the CRADA, “Clinical Development of Taxol,” took effect on January 23, 1991. (For details on the CRADA partner selection process, see app. I.) Under the 1991 CRADA, NCI and BMS agreed to collaborate on ongoing and future clinical studies to obtain FDA approval for the marketing of paclitaxel, and NCI would make available exclusively to BMS the data and the results of all paclitaxel studies. As part of the CRADA, BMS was to supply NCI with sufficient amounts of paclitaxel for research and clinical trials. NCI could terminate the agreement if BMS “failed to exercise best efforts in the commercialization of taxol.” Following this first Taxol-related CRADA, NIH entered into another CRADA with BMS in 1998 and has had other paclitaxel-related CRADAs with two other companies (see app. II). In 1991, a phase 2 trial of paclitaxel demonstrated its effectiveness in treating breast cancer. In 1992, BMS filed and received approval for trademark protection for the name Taxol. Also in 1992, BMS filed an NDA for Taxol with FDA. 
On December 29, 1992, FDA approved Taxol for the treatment of ovarian cancer, an indication for which it had been shown to be effective in earlier studies. In January 1993, Taxol was introduced into the marketplace by BMS for the treatment of ovarian cancer. FDA’s approval of BMS’s NDA to market Taxol for the treatment of ovarian cancer triggered a provision in federal law granting BMS 5 years of marketing exclusivity for Taxol as a new chemical entity under the Drug Price Competition and Patent Term Restoration Act of 1984. The statute provides marketing protection for unpatentable pharmaceuticals, stating that during this 5-year period “no application…may be submitted” to FDA that “refers” to the approved drug, a provision that generally prohibits the introduction of a generic drug during the exclusivity period. Prior to the expiration of this period, in June 1997, BMS received two patents regarding the administration of Taxol. In July 1997, a number of generic drug manufacturers filed applications with FDA to market a generic version of paclitaxel, and notified BMS of their intent. BMS then filed suit in a federal district court alleging violations of its most recent patents. Under federal law, this granted BMS an additional 30 months of marketing exclusivity while the issues were being resolved in court. (See the chronology in app. III for more information on the research and development of Taxol.) The NIH-BMS collaboration provided BMS access to NIH research results that were critical for BMS’s quick commercialization of Taxol. It provided other benefits for both parties and for the health of the public as well. BMS supplied paclitaxel to NIH, enabling NCI to dramatically expand its paclitaxel research. BMS later licensed three NIH inventions that resulted from the CRADA; however, BMS ultimately decided not to use any of the inventions in its applications to FDA for approval to market Taxol for additional indications. 
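The layered exclusivity periods described above can be sketched with simple date arithmetic. The approval and filing dates and the 5-year and 30-month terms are from the report; the exact day of the July 1997 generic filings is not given, so the first of the month is assumed here.

```python
from datetime import date

# New-chemical-entity exclusivity: 5 years from the December 29, 1992 approval.
nda_approval = date(1992, 12, 29)
exclusivity_end = nda_approval.replace(year=nda_approval.year + 5)

# Generic applications filed in July 1997 (day assumed); BMS's patent suit
# triggered an additional 30-month period while the case was in court.
generic_filings = date(1997, 7, 1)
months = generic_filings.month - 1 + 30
stay_end = date(generic_filings.year + months // 12, months % 12 + 1,
                generic_filings.day)

print(exclusivity_end)  # 1997-12-29
print(stay_end)         # 2000-01-01
```

Under these assumptions, the 30-month period would have run into early 2000, consistent with the report's statement that 1999 was the last full year of marketing exclusivity.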
An NIH grant led to the important discovery of a method for the semisynthesis of paclitaxel by researchers at Florida State University (FSU). The NIH-BMS collaboration gave BMS unlimited access to NIH research results that were critical to BMS’s ability to quickly receive FDA approval to market Taxol. BMS submitted an NDA for paclitaxel to FDA on July 21, 1992, 18 months after the 1991 CRADA took effect, and FDA approved the drug for initial marketing on December 29, 1992. Paclitaxel was one of the first oncological compounds tested by NCI, and the public health community was highly interested in exploring its potential. The collaboration between NIH and BMS was beneficial to BMS because it gained access to the results of NIH’s basic, preclinical, and clinical research studies related to paclitaxel, including NIH studies conducted both prior to and during the term of the CRADA. Prior to the signing of the 1991 CRADA, and during the first 2 years of the CRADA, NCI conducted most of the clinical trials associated with paclitaxel. These studies were important for securing FDA’s initial approval to market Taxol for the treatment of advanced ovarian cancer. Five of the six studies submitted to FDA by BMS in support of its marketing application were either conducted or funded by NIH; one was conducted by BMS. BMS subsequently applied to FDA to market Taxol for other indications, including metastatic breast cancer and AIDS-related Kaposi’s sarcoma. BMS has received FDA approval to market Taxol for eight indications as of May 12, 2003. Under the terms of the 1991 CRADA, BMS supplied paclitaxel for NCI’s own studies as well as for NCI-funded trials at other institutions that were initiated pursuant to the CRADA. Three months after the CRADA was signed, BMS began shipments of paclitaxel to NIH. BMS reported that by the end of 1991, 1.35 kilograms of bulk drug, or 45,000 vials, had been delivered. 
In January 1992, shipments were increased from 5,000 vials per month to 25,000 vials per month, and by April, 50,000 vials per month were being provided at no charge to NIH. BMS’s shipments of paclitaxel overcame shortages that had limited NCI research. In 1989, before the CRADA, a cumulative total of fewer than 500 patients had been treated with paclitaxel. Because of BMS’s efforts to expand the collection and production of paclitaxel, NCI was able to establish more than 40 treatment referral centers for therapy of patients with refractory ovarian cancer (previously treated, unresponsive ovarian cancer) and breast cancer. According to NCI, 28,882 patients were treated in its clinical trials over the course of the CRADA, and the paclitaxel was supplied free of charge by BMS to NCI for use in both the clinical trials and the treatment centers. In 1996, NIH signed an agreement to license to BMS three patented paclitaxel-related inventions that resulted from the 1991 CRADA. While the compound itself was not patented, NIH patented three methods for using paclitaxel in cancer treatment. These inventions were (1) use of G-CSF (granulocyte colony-stimulating factor) to avoid the side effects of using Taxol in higher doses, (2) a 96-hour infusion method to overcome multidrug resistance, and (3) a method for using Taxol in combination with another drug (cisplatin). BMS licensed these three inventions because it thought they had potential to provide important contributions to treatment. BMS considered adding these methods as new indications to the Taxol product label, but ultimately decided not to use any of the inventions in its applications to FDA for approval to market the drug. The supply of natural paclitaxel was a continuing problem, since the bark of the Pacific yew was scarce and it took about 10,000 to 30,000 pounds of dried bark to produce about 1 kilogram of the compound. 
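The supply figures above can be checked with back-of-the-envelope arithmetic; this sketch uses only quantities stated in the report (the implied per-vial content is a derived figure, not one the report states).

```python
# 1.35 kilograms of bulk drug corresponded to 45,000 vials by the end of 1991.
bulk_mg = 1_350_000            # 1.35 kg expressed in milligrams
vials = 45_000

mg_per_vial = bulk_mg / vials
print(mg_per_vial)             # 30.0 -- implied milligrams of paclitaxel per vial

# The bark yield (10,000-30,000 lb of dried bark per kilogram of compound)
# implies that producing 1.35 kg from bark alone would have required roughly:
kg = 1.35
bark_lb_low = round(10_000 * kg)
bark_lb_high = round(30_000 * kg)
print(bark_lb_low, bark_lb_high)   # 13500 40500 pounds of Pacific yew bark
```

The scale of these bark requirements helps explain why the semisynthetic process discussed below mattered so much to the long-term supply of the drug.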
Under the terms of the 1991 CRADA, BMS agreed to initiate an aggressive search for alternative sources of paclitaxel to lessen or eliminate dependence on the Pacific yew. Prior to the signing of the CRADA, however, NCI had funded research at FSU that led to the development of a semisynthetic process for producing paclitaxel that begins with materials from another, more plentiful type of yew tree. NIH provided about $2 million in funding to FSU for this research. Researchers at FSU patented the semisynthesis process in 1989 and subsequently licensed the patent to BMS in 1990. Under the terms of the license agreement, BMS paid FSU substantial royalties for this patent in order to increase the supply of Taxol. BMS officials told us that BMS did not start using the FSU invention to manufacture Taxol until 1996. Although NIH estimates that it has invested heavily in research related to paclitaxel, its financial benefits from the collaboration with BMS have not been great in comparison to BMS’s revenue from the drug. NIH estimates that it has invested $183 million in research related to paclitaxel from 1977 through 1997, the end of the CRADA’s term, although not all of this was for research supporting the 1991 CRADA. For one portion of its investment in Taxol, NIH estimates that its net cost for conducting clinical trials that supported the development of Taxol through the 1991 CRADA was $80 million—NIH estimates that it spent $96 million on the studies, and this expense was offset by $16 million in financial support from BMS. We estimate that the paclitaxel BMS supplied NIH through the CRADA had a value of $92 million. In addition, NIH spent $301 million on paclitaxel-related research from 1998 through 2002, some of which supported cancer research, bringing NIH’s total investment in paclitaxel-related research from 1977 to 2002 to $484 million. Overall, BMS officials told us that the company spent $1 billion to develop Taxol. 
Worldwide sales of Taxol have totaled over $9 billion through 2002. As a result of its license agreement with BMS, NIH has received $35 million in royalty payments. The 1991 CRADA noted NIH’s concern that Taxol be fairly priced given the public investment in Taxol research and the health needs of the public, but it did not require that reasonable evidence be presented to show that this had occurred. The federal government has been a major payer for Taxol, primarily through Medicare. For example, Medicare payments for Taxol totaled $687 million from 1994 through 1999. Based on figures provided by NIH of its yearly expenditures for all research involving paclitaxel, we estimate that NIH spent $183 million on paclitaxel-related research from 1977 through 1997, the end of the CRADA’s term. NIH officials told us that these figures reflect all NIH research using paclitaxel—even when it is given to patients as the standard of care in studies of other remedies—not just research investigating paclitaxel and Taxol. This figure includes spending for research on the effectiveness of paclitaxel for conditions other than cancer as well as research to develop analogues or alternative compounds to paclitaxel to increase the number of available drugs. We estimate NIH spent an additional $301 million on paclitaxel-related research from 1998 through 2002, some of which supported cancer research, bringing NIH’s total investment in paclitaxel-related research from 1977 to 2002 to $484 million. (See fig. 1.) NIH estimates that its net expenditures to conduct clinical trials that supported the 1991 CRADA were $80 million. NIH estimates that it spent $96 million to conduct the clinical trials and BMS provided a reimbursement of $16 million to offset the costs of the studies. NIH’s estimate includes costs incurred during the CRADA and costs associated with clinical trials conducted prior to the CRADA, the results of which helped BMS obtain FDA approval to market Taxol. 
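The investment and royalty figures cited in this report reconcile with simple arithmetic; this sketch restates them (all dollar amounts in millions, taken from the report).

```python
# NIH's paclitaxel-related research spending.
spent_1977_1997 = 183      # through the end of the CRADA term
spent_1998_2002 = 301      # additional spending after the CRADA
print(spent_1977_1997 + spent_1998_2002)   # 484 -- the report's stated total

# NIH's net cost for clinical trials supporting the 1991 CRADA.
trial_gross = 96           # NIH spending on the studies
bms_offset = 16            # financial support provided by BMS
print(trial_gross - bms_offset)            # 80 -- NIH's estimated net trial cost

# Royalty check: 0.5 percent of the roughly $9,000M in 1993-2002 worldwide
# sales would be $45M; NIH actually collected about $35M, consistent with
# royalties accruing only after the 1996 license agreement took effect.
print(0.005 * 9_000)                       # 45.0
```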
Almost all ($15.6 million) of BMS’s financial support was paid to offset clinical trial costs during the last several years of the CRADA. In addition, we estimate the paclitaxel BMS supplied to NIH under the CRADA had a value of $92 million (based on Federal Supply Schedule (FSS) prices). NIH’s financial benefits from the collaboration with BMS have not been great in comparison with BMS’s revenue from the drug. In 1996, when BMS licensed from NIH three patents on methods for using Taxol in cancer treatment, it negotiated its first and only license agreement with NIH for Taxol, requiring BMS to pay royalties to NIH at a rate of 0.5 percent of its worldwide sales of Taxol. The NIH-BMS license agreement resulted in about $35 million in royalties for NIH through 2002. NIH reports that 10 individual inventors received 22 percent of the total $35.3 million in royalty payments, or an aggregated amount of $7.7 million, while NIH kept the remainder, $27.5 million. Worldwide Taxol sales totaled over $9 billion from 1993 through 2002. Sales exceeded $1 billion annually from 1998 through 2001 (see table 1). BMS officials told us that the company has invested over $1 billion in the development of Taxol since signing the CRADA in January 1991. Costs included supporting clinical trials (including its payments to NIH), preparing the NDA, and finding alternative sources of the compound through yew cultivation and research on the semisynthesis process and plant cell culture techniques. For example, BMS officials told us that the company’s clinical trials had enrolled over 21,000 patients by 1997. At the time the 1991 CRADA was negotiated, NIH had a reasonable pricing policy that there should be “a reasonable relationship between the pricing of a licensed product, the public investment in that product, and the health and safety needs of the public.” NIH’s standard reasonable pricing clause was modified in the 1991 CRADA. 
The CRADA noted NIH’s concern that “there be a reasonable relationship between the pricing of Taxol, the public investment in Taxol research and development, and the health and safety needs of the public.” BMS agreed in the 1991 CRADA that these factors would be taken into account in establishing a fair market price. However, the 1991 CRADA did not require that reasonable evidence be presented to show that this would occur. In its comments on a draft of this report, NIH stated it gathered other evidence to reach its conclusion that the price of Taxol was reasonable. NIH also entered into a CRADA with another company to develop a product that could provide competition for Taxol (see CRADA 148 in app. II). This alternative product, Taxotere (docetaxel), received its first marketing approval from FDA in 1996. The federal government, primarily through Medicare, has been a major payer for Taxol. Medicare payments for Taxol totaled $687 million from 1994 through 1999, the last full year of marketing exclusivity for Taxol. Medicare payments for Taxol were $202 million in 1999, accounting for more than one-fifth of Taxol’s total domestic sales. Medicare’s payments reflect, in part, the price it pays for Taxol. Compared to other federal programs, Medicare pays relatively more for Taxol than it does for other widely used cancer drugs. To assess the pricing of Taxol, we reviewed the price Medicare pays for Taxol and other cancer drugs compared to the prices paid by federal programs that directly procure these drugs. We found that in the fourth quarter of 2002, Medicare paid 6.6 times the price these other federal programs paid for Taxol, while it paid an average of 3.0 times the price these other federal programs paid for other widely used cancer drugs. 
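The price comparison described above can be expressed as a simple ratio; this sketch uses only the figures stated in the report (the implied bound on 1999 domestic sales is a derived figure, not one the report states).

```python
# Fourth quarter of 2002: ratio of Medicare's price to the price paid by
# federal programs that directly procure the drugs.
taxol_multiple = 6.6        # Taxol
other_avg_multiple = 3.0    # average for other widely used cancer drugs

premium = round(taxol_multiple / other_avg_multiple, 1)
print(premium)              # 2.2 -- Taxol's premium relative to the typical drug

# "More than one-fifth of Taxol's total domestic sales" in 1999 implies that
# domestic sales that year were less than:
medicare_1999 = 202         # Medicare payments for Taxol, $ millions
print(medicare_1999 * 5)    # 1010 ($ millions)
```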
Although NIH has broad authority under applicable statutes to negotiate CRADAs and license agreements with outside partners, several factors affected its exercise of that authority in the technology transfer activities related to the development of Taxol. Such negotiations involve a weighing of NIH’s goals and priorities with those of a potential partner, recognizing that tradeoffs may be necessary to reach an agreement. In the case of Taxol, NIH’s ability to exercise its authority was limited because it did not have a patent on paclitaxel and because its evaluation found that there was a shortage of available, qualified alternative CRADA partners. With regard to the license negotiations on the inventions resulting from the CRADA, the setting of royalties was affected by the criteria that both NIH and BMS used to help guide royalty negotiations. BMS officials told us that NIH’s inventions did not contribute to BMS’s successful marketing of Taxol. One factor affecting NIH’s CRADA negotiating position is its ability to offer a potential partner exclusive marketing rights to an invention. In its paclitaxel negotiations, NIH’s position was affected by the fact that it did not have a patent on paclitaxel. As NIH acknowledged in the 1991 CRADA, because of this NIH was unable to grant any potential partner an exclusive patent license to market paclitaxel. NIH was able to offer potential partners access to the findings of the research it conducted prior to the CRADA and to its research during the term of the CRADA. Another factor affecting the leverage that NIH has in negotiating a CRADA is the availability of other qualified applicants. If NIH were to be dissatisfied with the CRADA negotiations with an applicant, it theoretically could turn to another applicant and begin new negotiations, accepting the inherent delays. It also could seek multiple CRADA partners, recognizing that multiple partners may grant less favorable terms than one receiving an exclusive agreement. 
In the case of paclitaxel, it was advantageous for NIH to enter into a CRADA with an industry partner qualified to bring paclitaxel to the marketplace and to provide an adequate supply of paclitaxel for its work. NIH received four applications from potential CRADA partners. Using nine criteria to rank applications, including that an applicant have experience with both natural products and other drug development and be able to supply adequate amounts of the drug as needed for future clinical trials (see app. I), NIH reviewers scored the BMS application substantially higher than all of the others. While some concerns were raised about the BMS application, greater concerns were raised about other applications. For example, the applicant that received the second-highest score was cited as having no experience in the United States involving natural products and no experience in developing pharmaceutical agents in the United States and as providing incomplete responses, especially on how it would make Taxol available and how much it could supply annually. Applicable law does not restrict the royalty rate NIH can negotiate in a license agreement, although NIH’s model CRADA at the time of the Taxol negotiations suggested that a ceiling be set at 5 to 8 percent. This specification has since been removed, and the current model CRADA sets no ceiling. By law, NIH is required to offer its CRADA partners the option to choose an exclusive license for any inventions that arise from the CRADA work. NIH is not prohibited from specifying in the CRADA what the royalty rate will be, rather than waiting until a subsequent license agreement is negotiated. When NIH and BMS entered into the license agreement 5 years after the 1991 CRADA took effect, how the parties viewed the benefits of an agreement likely affected the royalty rate negotiations. NIH officials indicated that they generally take eight factors into account in negotiating royalty rates. 
These include the stage of product development, the type of product, the market value of the product, the uniqueness of the materials, the scope of the patent coverage, the market timing, NIH’s contribution to the product, and the public health benefit. An NIH OTT official reported that the ultimate determination of a royalty rate is not the result of a neat formula but is based on a balancing of these factors, with the public health benefit receiving the highest consideration. In contrast, BMS officials told us that the company considers three factors when negotiating royalty rates: scientific risk, coverage, and exclusivity. In the case of Taxol, a BMS official reported that the company determined it had high scientific risk (i.e., it did not know if the inventions would be successful), narrow coverage (i.e., the license was for very specific ways of treating a tumor), and a lack of exclusivity (i.e., the treatment regimens BMS licensed would not prevent other firms from marketing generic paclitaxel after BMS’s period of marketing exclusivity expired), all making the inventions less valuable. In general, NIH’s leverage in negotiating royalty rates is affected by the amount of competition for a license. In 2000, NIH’s director of OTT testified that the vast majority of NIH inventions require active marketing and that, more often than not, only one firm is interested in licensing any particular type of technology. In fiscal year 2000, there were 45 requests for exclusive licenses, and only 2 technologies had two applications for licenses each. There were 253 requests for nonexclusive licenses, and only 31 technologies had more than one application. NIH’s director of OTT reported that, at that time, OTT had approximately 2,000 technologies available for licensing, 30 percent of which had been available for more than 5 years. 
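The fiscal year 2000 licensing figures above imply low rates of competition. This sketch derives rough percentages under one simplifying assumption, labeled in the comments, that each contested technology drew exactly two applications (the report states this for exclusive licenses but not for nonexclusive ones).

```python
# Exclusive licenses: 45 requests, with 2 technologies drawing two
# applications each, implying 43 distinct technologies.
exclusive_requests = 45
exclusive_contested = 2
exclusive_techs = exclusive_requests - exclusive_contested

# Nonexclusive licenses: 253 requests, 31 technologies with more than one
# application. Assuming exactly two applications per contested technology
# (an assumption, not stated in the report) implies 222 distinct technologies.
nonexclusive_requests = 253
nonexclusive_contested = 31
nonexclusive_techs = nonexclusive_requests - nonexclusive_contested

print(round(exclusive_contested / exclusive_techs * 100))        # 5 (percent)
print(round(nonexclusive_contested / nonexclusive_techs * 100))  # 14 (percent)
```

Even under these rough assumptions, only a small minority of technologies attracted more than one prospective licensee, supporting the testimony quoted above.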
In the case of Taxol, it is not clear whether other companies would have been interested in the inventions developed out of the CRADA, as BMS had exclusive rights to market paclitaxel at that time. From the perspectives of NIH and BMS, the 1991 CRADA is an example of a successful collaboration between the public and private sectors in pharmaceutical technology transfer. Early studies supported by NIH on the clinical effectiveness of Taxol and made available to BMS under the CRADA were critical to BMS’s success in rapidly commercializing its brand-name drug Taxol for the treatment of cancer. The additional supplies of the scarce paclitaxel provided by BMS to NIH under the CRADA were critical for the expansion of NIH’s research. NIH’s goals in the technology transfer process emphasize public health benefits over financial considerations. In the case of Taxol, the benefit to public health was clearly demonstrated, as there were few treatments for women with ovarian or breast cancer when Taxol came on the market. However, the financial return to NIH was more limited. NIH made a substantial investment in the development of Taxol. In return, NIH received royalty payments of about $35 million from its license agreement with BMS, and received paclitaxel and financial support from BMS for the CRADA research. We noted that the federal government has spent over half a billion dollars in payments to health care providers for Taxol under the Medicare program. In light of the significant federal investment, questions remain regarding the extent to which NIH used its broad authority in its negotiations with BMS on the royalty payments and the price of the drug to obtain the best value for the government. We provided a draft of this report to NIH and BMS for their review. 
In its comments, NIH provided us with additional information about its expenditures related to the 1991 NIH-BMS CRADA and BMS's contributions to NIH research under the CRADA, and also presented the reasons that it did not patent paclitaxel. NIH acknowledged that the 1991 CRADA did not require that evidence be presented to assure that Taxol was reasonably priced; however, NIH stated that its analysis of other information led it to conclude that Taxol was fairly priced. In response, we have incorporated the new information from NIH into the report as appropriate. However, we were not able to evaluate the basis for NIH's judgment that Taxol was fairly priced. NIH's comments are included as appendix IV. NIH also provided technical comments, which we have incorporated as appropriate. In its comments, BMS expressed concern that our estimates of NIH's expenditures for the development of Taxol gave an exaggerated view of NIH's spending. We have revised our presentation of NIH's spending based on additional information contained in NIH's comments. BMS also expressed two concerns about our analysis of the price of Taxol to Medicare relative to other cancer drugs. First, BMS suggested that our analysis may include payments to physicians for administering the drugs in addition to the procurement price of the drugs. However, our analysis considered only the prices for drug procurement and did not include payments for physician services. Second, BMS suggested that our findings may change if our analysis excluded generic drugs and was restricted to brand-name drugs. However, only 2 of the 12 comparison drugs in our analysis are generic drugs, and our findings do not change if they are excluded. We found that, while Medicare generally pays more for cancer drugs than other federal programs that can directly procure pharmaceuticals, this price premium for Taxol is greater than average. BMS also made technical comments, which we incorporated as appropriate.
As we agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of the report. At that time, we will send it to the Secretary of Health and Human Services, the Director of NIH, and others who are interested. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7119. Another contact and key contributors are listed in appendix V. On August 1, 1989, NIH published a notice in the Federal Register seeking a pharmaceutical company that could effectively pursue the clinical development of paclitaxel for the treatment of cancer. The Federal Register announcement included nine criteria for the selection of the CRADA partner: (1) experience in the development of natural products for clinical use; (2) experience in preclinical and clinical drug development; (3) experience in and the ability to produce, package, market, and distribute pharmaceutical products in the United States and to provide the product at a reasonable price; (4) experience in the monitoring, evaluation, and interpretation of data from clinical studies of investigational agents under an investigational new drug application; (5) willingness to cooperate with the Public Health Service in the collection, evaluation, publication, and maintenance of data from clinical trials of investigational agents; (6) willingness to cost-share in the development of paclitaxel, including the acquisition of raw material and the isolation or synthesis of paclitaxel in adequate amounts for future clinical trials and marketing; (7) establishment of an aggressive development plan, including appropriate milestones and deadlines for preclinical and clinical development; (8) agreement to be bound by the HHS rules involving human and animal subjects; and (9) provision for equitable distribution of patent rights to any inventions. NIH's Taxol CRADA Review Committee met on October 10, 1989, to review the applications of the four potential CRADA partners. The committee scored BMS's application substantially higher than all of the others, with none of the other applications receiving a higher score than BMS on any of the individual criteria. Among the strengths of the BMS application discussed were its extensive experience with natural products, its impressive record in the production of anticancer agents, its substantial experience in preclinical drug development, and its bearing of financial responsibility for collection of the compound and preclinical toxicology studies. The weaknesses discussed were pricing and the estimates of available paclitaxel. The applicant receiving the second-highest score was cited as having no experience with natural products in the United States and no experience in developing drugs in the United States. NIH has had four CRADAs and one CRADA amendment related to paclitaxel (see table 2). Two of the CRADAs and the CRADA amendment were with BMS and concerned development of the drug Taxol. One CRADA was with Rhône-Poulenc Rorer (now Aventis) and involved research on Taxotere, part of the taxane class of chemotherapy drugs, whose original source is the yew tree; Taxotere is also a treatment that can help destroy cancer cells in the body after previous chemotherapy. An additional CRADA, which is ongoing, is with Angiotech and the Johns Hopkins University and involves the use of paclitaxel to coat stents used in angioplasty. Although paclitaxel itself has not been patented, methods of administration of the drug have been patented. There are a few patents pertaining to paclitaxel (see table 3). The government has an interest in three of these patents: 5496804, 5496846, and 6150398.
Patent 5496804 is for a method for treating paclitaxel side effects with G-CSF (granulocyte colony-stimulating factor); patent 5496846 is for a method of using paclitaxel in a 96-hour infusion for breast cancer; and patent 6150398 is for a method of treating cancer by administration of paclitaxel and a DNA cross-linking antineoplastic agent (cisplatin). Patents 5641803 and 5670537 are held solely by BMS. One is for a method of administering Taxol over 3 hours, and the other is for a method of effecting tumor regression with a low-dose, short-infusion Taxol regimen. NIH has one exclusive patent license agreement with BMS that resulted from CRADA 64, "Clinical Development of Taxol." This license agreement covers three patents: 5496804, 5496846, and 6150398. In addition, BMS and FSU established a major license agreement concerning the semisynthetic production of Taxol. Other NIH CRADAs involving the other industry partners (i.e., Rhône-Poulenc Rorer, Angiotech, and the Johns Hopkins University) did not result in any patented inventions or license agreements.

Appendix III: Chronology of the Research and Development of Taxol (Paclitaxel)

- The National Cancer Institute (NCI) initiates the Natural Products Program to screen 35,000 plant species for anticancer activity.
- Researchers at Research Triangle Institute in North Carolina find that an extract from the bark of the Pacific yew tree has antitumor activity.
- Researchers at Research Triangle Institute identify compound 17—paclitaxel—as the active ingredient in the Pacific yew tree.
- Researchers at Albert Einstein College of Medicine discover how paclitaxel works to prevent cell division, by means of a mechanism called tubulin stabilization.
- Stevenson-Wydler Technology Innovation Act and Bayh-Dole Act enacted.
- NCI files an investigational new drug application (IND) to initiate clinical trials of paclitaxel.
- IND is approved, and phase 1 clinical trials begin.
- NCI begins phase 2 clinical trials.
- Federal Technology Transfer Act enacted.
- Hauser Chemical becomes a contractor to NIH, collecting yew tree bark and manufacturing paclitaxel.
- Researchers at Florida State University (FSU), funded by NIH, patent a process for the semisynthesis of Taxol.
- NCI publishes a Federal Register announcement inviting pharmaceutical companies to compete for the right to develop paclitaxel. Four companies, including Bristol-Myers Squibb (BMS), apply.
- FSU and BMS sign a license agreement for BMS's use of the semisynthesis process.
- NCI signs CRADA with BMS for the clinical development of paclitaxel.
- U.S. Patent and Trademark Office approves BMS's application to trademark the name Taxol.
- BMS files a new drug application (NDA) with FDA for use of Taxol to treat ovarian cancer.
- BMS obtains FDA approval in December for treatment of patients with metastatic carcinoma of the ovary after failure of first-line or subsequent therapy.
- Pacific Yew Act enacted (Pub. L. No. 102-335, 106 Stat. 859).
- BMS introduces Taxol into the marketplace for treatment of ovarian cancer.
- BMS files supplemental NDAs with FDA, one further defining the optimal dose and schedule of administration of Taxol, another for use of paclitaxel as a secondary therapy for breast cancer.
- BMS obtains FDA approval in April for treatment of breast cancer after failure of combination chemotherapy for metastatic disease or relapse within 6 months of adjuvant chemotherapy; prior therapy should have included an anthracycline unless clinically contraindicated.
- BMS obtains FDA approval in June for a new dosing regimen for patients who have failed initial or subsequent chemotherapy for metastatic carcinoma of the ovary.
- FDA approves a supplemental NDA for semisynthetic production of Taxol using the process developed by FSU.
- NCI and BMS CRADA extended through December 1997.
- NIH is awarded patents for Taxol Treatment of Breast Cancer and Method for Treating Taxol Side Effects with G-CSF.
- NIH and BMS sign a license agreement whereby NIH provides BMS with exclusive rights to three NCI inventions involving Taxol; BMS is required to provide NIH with royalty payments and research support and to meet benchmarks for the clinical development of Taxol.
- NIH begins to receive royalty payments from BMS.
- BMS obtains FDA approval in August for second-line therapy for AIDS-related Kaposi's sarcoma.
- Other drug companies begin developing generic versions of paclitaxel and file NDAs and abbreviated new drug applications with FDA.
- BMS obtains FDA approval in April for first-line therapy for the treatment of advanced carcinoma of the ovary in combination with cisplatin.
- BMS obtains FDA approval in June for use of Taxol injection, in combination with cisplatin, for the first-line treatment of non-small-cell lung cancer in patients who are not candidates for potentially curative surgery and/or radiation therapy.
- BMS obtains FDA approval in October for adjuvant treatment of node-positive breast cancer administered sequentially to standard doxorubicin-containing combination chemotherapy.
- First generic version of paclitaxel approved in September.
- Generic versions of paclitaxel enter the marketplace.
- BMS obtains FDA approval in June for a new dosing regimen for the first-line treatment of advanced ovarian cancer: every 3 weeks at a dose of 175 milligrams per square meter (mg/m²) of body surface, followed by cisplatin at a dose of 75 mg/m².

Other key contributors to this report are Helen Desaulniers, Anne Dievler, Julian Klazkin, Carolyn Feis Korman, Carolina Morgan, and Roseanne Price.
|
The transfer of technology from government-funded medical research laboratories to the private sector aims to have new pharmaceuticals brought to market more efficiently than would be possible for a federal agency acting alone. Much of the pharmaceutical-related technology transfer originates with research funded by the National Institutes of Health (NIH). GAO was asked to examine the legal and financial issues involved in technology transfers as illustrated by the research, development, and commercialization of Taxol. Taxol was developed through a cooperative research and development agreement (CRADA) between NIH and the Bristol-Myers Squibb Company (BMS) and by 2001 had become the best-selling cancer drug in history. Specifically, GAO examined (1) how the technology transfer partnership affected the research and development of Taxol, (2) what NIH's financial investment was in Taxol-related research, and what the financial outcomes were of the technology transfer process related to Taxol, and (3) what factors influenced how NIH exercised its authority in Taxol-related technology transfer activities. GAO reviewed relevant materials and statutes governing technology transfer, reviewed the patent history of Taxol, interviewed NIH and BMS officials, and reviewed data on NIH's financial investment and drug pricing policies. The 1991 NIH-BMS CRADA was one of the first CRADAs to result in a major breakthrough drug. NIH's partnership with BMS provided the company with the research results that enabled Taxol to be commercialized quickly and made available as a treatment for cancer patients. Prior to the CRADA and during the first 2 years of the agreement, NIH conducted most of the clinical trials associated with the drug. The results of these trials were critical for BMS to secure FDA's approval in 1992 to market Taxol for the treatment of advanced ovarian cancer. As agreed in the CRADA, BMS supplied the drug to NIH researchers to overcome previous shortages.
The additional supplies from BMS allowed NIH to increase the number of patients enrolled in NIH clinical trials for this drug from 500 patients by 1989 to nearly 29,000 patients over the course of the CRADA. NIH made substantial investments in the research related to Taxol, but its financial benefits from the collaboration with BMS have not been great in comparison to BMS's revenue from the drug. NIH estimates that it spent $183 million on all Taxol-related research from 1977 through the end of the CRADA's term in 1997. For one portion of its spending, NIH estimates that it spent $96 million to conduct clinical trials supporting the CRADA; this was offset by a $16 million payment from BMS. In addition, BMS supplied Taxol to NIH, the value of which GAO estimates to be $92 million. NIH spent an additional $301 million on Taxol-related research from 1998 through 2002, some of which was for cancer research, making NIH's total Taxol-related spending $484 million through 2002. BMS's sales of Taxol totaled over $9 billion from 1993 through 2002. BMS agreed to pay NIH royalties at a rate equal to 0.5 percent of worldwide sales of Taxol as part of a 1996 agreement to license three NIH Taxol-related inventions developed during the CRADA. Royalty payments to NIH have totaled $35 million. The federal government has been a major payer for Taxol, primarily through Medicare. For example, Medicare payments for Taxol totaled $687 million from 1994 through 1999. Several factors affected NIH's exercise of its broad authority in negotiating its Taxol-related technology transfer activities. First, NIH did not have a patent on Taxol and thus could not grant an exclusive patent license to a CRADA partner. Second, in NIH's evaluation, it was limited by a shortage of available, qualified alternative partners. 
Finally, the negotiation of royalties for NIH's Taxol-related inventions was affected by multiple considerations, including the priorities that both NIH and BMS assigned to different factors in the setting of royalties. These factors include the stage of development, the potential market value of the license, and the contribution to public health of making the product available. In commenting on a draft of this report, NIH provided additional information about its expenditures and the contributions of BMS, which GAO incorporated, and also discussed its evaluation of whether BMS's pricing of Taxol was reasonable.
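The royalty and sales figures above lend themselves to a quick cross-check. The sketch below is illustrative arithmetic only, using the report's own numbers; the variable names and the rounding of sales to $9 billion are ours, and the full-period product is an upper bound because the license was not signed until 1996.

```python
# Back-of-the-envelope check of the royalty figures reported above.
# All inputs come from the report; variable names are illustrative.

ROYALTY_RATE = 0.005                 # 0.5 percent of worldwide Taxol sales
SALES_1993_2002 = 9_000_000_000      # "over $9 billion" in BMS Taxol sales
ROYALTIES_RECEIVED = 35_000_000      # royalties NIH actually received

# Applying the rate to the full 10-year sales figure gives an upper bound,
# since royalties began only after the 1996 license took effect:
upper_bound = ROYALTY_RATE * SALES_1993_2002        # $45 million

share_of_sales = ROYALTIES_RECEIVED / SALES_1993_2002
print(f"Upper bound at 0.5% of all sales: ${upper_bound:,.0f}")
print(f"Actual royalties as a share of sales: {share_of_sales:.2%}")
```

The gap between the $45 million upper bound and the $35 million received is consistent with the chronology, in which royalty payments start only after the 1996 license agreement.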
|
The Small Business Innovation Development Act of 1982, which authorized the SBIR Program, designated four major goals for the program: to stimulate technological innovation, to use small business to meet federal R&D needs, to foster and encourage participation by minority and disadvantaged persons in technological innovation, and to increase private sector commercialization of innovations derived from federal R&D. The Small Business Research and Development Enhancement Act of 1992 reauthorized the SBIR Program and established the STTR Program, closely modeled on the SBIR Program. Eleven federal agencies participate in the SBIR Program. Five major agencies—the Department of Defense (DOD); the National Aeronautics and Space Administration (NASA); the Department of Health and Human Services, particularly its National Institutes of Health (NIH); the Department of Energy (DOE); and the National Science Foundation (NSF)—also participate in the STTR Program. Each agency manages its own program while the Small Business Administration (SBA) plays a central administrative role and issues policy directives and annual reports for each program. The legislation establishing the SBIR Program required each agency with an external R&D budget in excess of $100 million to set aside a certain percentage of this amount for the program. The percentage was increased incrementally until it reached 1.25 percent in 1986. The 1992 reauthorization legislation increased program funding to not less than 1.5 percent for fiscal years 1993 and 1994, not less than 2 percent for fiscal years 1995 and 1996, and not less than 2.5 percent for fiscal year 1997 and thereafter. This increase will effectively double the funding for the program to nearly $1 billion in fiscal year 1997.
In establishing the STTR Program, the legislation required each agency with an external R&D budget in excess of $1 billion to set aside not less than 0.05 percent of that budget in fiscal year 1994, not less than 0.1 percent in fiscal year 1995, and not less than 0.15 percent in fiscal year 1996 for the STTR Program. In the first year of the program, the agencies expended about $20 million; they estimate that funding will triple to $60 million in the third and last year of the pilot program. SBIR and STTR funding is provided in two phases. Phase I is intended to determine the scientific and technical merit and feasibility of ideas; it generally lasts about 6 months for SBIR and 1 year for STTR. Phase II further develops the proposed ideas and generally lasts about 2 years. The 1992 reauthorization directed SBA to set the general limits on the size of SBIR phase I and II awards at $100,000 and $750,000, respectively, although awards may be for less than these amounts. It also set the general limits for STTR awards at $100,000 and $500,000, respectively. A third phase for SBIR and STTR projects, where appropriate, involves the continuation or commercial application of the R&D. Although the two programs have many points in common, they differ in one important respect. To be eligible for an STTR award, a small business must collaborate with a nonprofit research institution such as a university, a federally funded research and development center, or other entity. This collaboration is permitted under the SBIR Program but is not mandatory. This special STTR requirement, according to a 1992 report, was to provide a more effective mechanism for transferring new knowledge from research institutions to industry. In addition to the two reports we have already provided, the legislation directed GAO to report on SBIR in 1997; the upcoming report will be a detailed study covering all of the major issues affecting the program. 
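The set-aside rules described above are simple percentage floors over an agency's external R&D budget. The following is a minimal sketch of that computation, assuming a hypothetical agency budget; the thresholds and minimum rates come from the statutory schedules summarized above, while the function names and the schedule-lookup logic are our own simplification.

```python
# Simplified model of the SBIR/STTR set-aside floors described above.
# Thresholds and rates follow the statutory schedules in the text;
# the budget figure below is hypothetical.

SBIR_THRESHOLD = 100_000_000       # external R&D budget over $100 million
STTR_THRESHOLD = 1_000_000_000     # external R&D budget over $1 billion

# Minimum rates, keyed by the first fiscal year each rate applies.
SBIR_RATES = {1993: 0.015, 1995: 0.02, 1997: 0.025}
STTR_RATES = {1994: 0.0005, 1995: 0.001, 1996: 0.0015}

def rate_in_effect(fiscal_year, schedule):
    """Return the minimum rate in effect for the given fiscal year."""
    applicable = [rate for year, rate in schedule.items() if year <= fiscal_year]
    return max(applicable, default=0.0)

def minimum_set_aside(budget, fiscal_year, threshold, schedule):
    """Minimum dollars an agency must reserve; zero if below the threshold."""
    if budget <= threshold:
        return 0.0
    return budget * rate_in_effect(fiscal_year, schedule)

# A hypothetical agency with a $2 billion external R&D budget in FY 1996:
budget = 2_000_000_000
print(minimum_set_aside(budget, 1996, SBIR_THRESHOLD, SBIR_RATES))  # 2% of $2B
print(minimum_set_aside(budget, 1996, STTR_THRESHOLD, STTR_RATES))  # 0.15% of $2B
```

Taking the maximum of the applicable rates relies on each statutory schedule being nondecreasing over time, which holds for both programs as described.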
The quality of the proposed research in both programs was one of the principal issues discussed in our reports. In general, we believe the quality of the winning SBIR and STTR proposals is favorable. For the SBIR Program, the quality of research proposals appeared to have kept pace with the program’s initial expansion. However, at the time of our March 1995 report, it was too early to make a conclusive judgment about the effect of the funding increases on the quality of SBIR research proposals receiving awards because only the first (and smallest) of the three slated increases had occurred at the time of our report. In general, the level of competition for SBIR awards remained high following the initial increase in funding in fiscal year 1993. In all five major agencies during fiscal year 1993, the number of proposals rose between 9 and 30 percent. The increased numbers of proposals were important in maintaining the competitiveness of the program during the first year that the program’s funding percentage grew to 1.5 percent. In addition, the ratio of awards to proposals within each agency remained fairly constant, ranging from 8 percent (for DOE) to 28 percent (for NIH). Among all five agencies, the data for fiscal year 1993 showed virtually no change in the ratio from the previous 2 years, suggesting that the funding increase exerted no adverse effect on the competitiveness of the program. In addition, agencies deemed many more SBIR proposals worthy of award than they were able to fund. In some agencies, the large number of worthy but unfunded projects greatly exceeded the number of projects receiving awards; for example, the Air Force deemed 1,174 proposals worthy of award in fiscal year 1993 but funded only 470. In general, the data showed substantial reserves of projects deemed worthy of funding but receiving no award. 
In addition, SBIR program officials in the five major agencies stated that, in their view, the quality of research proposals was being maintained or even improved. They cited the level of competitiveness and the large reserves of unfunded but worthy projects as the principal reasons for their view. Technical evaluations of STTR proposals, which served as the basis for the selection of winning proposals, also showed favorable views of the quality of proposed research. Nevertheless, it was too early for us to make a conclusive judgment about the quality because of the newness of the program. We reviewed all of the evaluations for each of the 206 winning STTR proposals in fiscal year 1994, the first year in which awards were made. The evaluations (1) rated proposals among the top 10 percent of research in certain agencies, (2) awarded perfect scores to many proposals, (3) described some proposals as “cutting edge,” and (4) generally found the quality of proposed research to be excellent. For example, DOE rated the quality of research in all of its winning proposals as being among the top 10 percent of all research in the agency. Of the 48 winning proposals in NIH, 14 were judged outstanding, 31 excellent, 2 very good, and only 1 good. There were none in NIH’s “acceptable” (or lowest fundable) category. In general, DOD rated its 105 winning proposals highly. Of NASA’s 21 winning proposals, 11 were considered above average, and 8 were judged as being among the top 10 percent of all NASA proposals for comparable R&D. NSF regarded the quality of research for its winning proposals as excellent. As part of our review of the quality of STTR research proposals, we also examined the technical evaluations of their commercial potential. These evaluations were generally favorable but somewhat more cautious. For example, in some cases there were concerns about the cost of the product that might result or the limited size of its potential market. 
Such reservations were understandable in view of the newness of the program and the innovation or risk associated with many of the proposed projects. One of the issues we discussed in our SBIR report was the duplicate funding of research proposals. According to agency officials, a few SBIR companies received funding for the same proposals twice, three times, and even five times before agencies became aware of the duplication. Several factors were contributing to this problem, including (1) the evasion of certification procedures whereby companies fail to identify similar proposals to other agencies, (2) the lack of a consensus on what constitutes a duplicate proposal, and (3) the general lack of interagency access to and exchange of current information about recent awards by other agencies. Officials from several agencies told us that the duplicate funding problem should be viewed in the context of the 20,000 or more proposals being submitted annually. They agreed, however, that the problem should be addressed. Accordingly, we made several recommendations to the Administrator of SBA, who has taken steps to implement them. One important effort has involved the development of software to provide interagency access to current information regarding recent SBIR awards. SBA officials have recently told us that they expect to make the system operational in the near future. In our STTR report, we found that the five federal agencies with STTR Programs have taken steps to avoid potential problems relating to conflict of interest with federally funded research and development centers. Such conflicts could occur if a center formed a partnership with a company submitting an STTR proposal and then helped a federal agency judge the merits of its own and other proposals. DOD, DOE, and NIH have specific policies intended to prevent such conflicts while NASA and NSF have more general procedures to avoid them. 
Under DOD’s policy, for example, only two R&D centers are currently approved research partners for its STTR awardees. In fact, the Air Force had to rescind some awards because the proposed research partners (certain DOD laboratories) were ineligible to participate. According to DOD’s STTR Program director, future proposals will be evaluated on a case-by-case basis to ensure that conflicts of interest do not occur. DOD and DOE, which accounted for 29 of the 32 awards involving centers during the first year of the program, have also taken steps to prevent centers from using privileged information in preparing STTR proposals. For example, DOE’s policy prohibits agency staff members from requesting or receiving assistance from personnel in research institutions (that are eligible to participate in the STTR Program) in preparing technical topics for the STTR solicitation. This policy is intended to prevent research institutions from using their expertise to influence DOE’s choice of STTR research topics. Otherwise, research institutions could acquire a significant advantage by designing topics to match their expertise and then preparing a proposal in the same area. Agency officials expressed differing views regarding the effect of STTR on SBIR and other agency R&D. For example, SBA officials contended that STTR was too small and too new a program to have any real effect on SBIR or on the broader range of agency research at the present time. The officials pointed out that the program represented only 0.05 percent of each agency’s external R&D budget during its first year and that it was only 1 year old. In contrast to the view that STTR’s effect was very limited, the Army’s STTR Program manager said that STTR was influencing SBIR in a beneficial way. In his opinion, STTR is becoming known through national conferences and other means. 
Furthermore, small businesses are realizing that they have more credibility and chance of winning an award by collaborating with a university or other research institution. He believes that the STTR Program has also led to more collaboration in SBIR. In general, according to the program manager, STTR is a promising program that may be as successful as the SBIR Program. The similarity of the two programs, however, raises a broader issue about the need for the STTR Program. In the 1992 House report, the Committee on Small Business provided two basic arguments in favor of the program. First, the report stated that the program addresses a core problem in U.S. economic competitiveness, the inability to translate its worldwide leadership into technology and commercial applications that benefit the economy. Second, the report stated that, although SBIR has turned out to be remarkably effective at commercializing ideas in the small business community, it is less effective at fostering commercialization of ideas that originate in universities, federal laboratories, and nonprofit research institutions. The rationale for the program, which points to certain weaknesses in SBIR and potential strengths in STTR, suggests three questions that are relevant in evaluating the need for STTR. First, is the technology originating primarily in the research institution as envisioned in the rationale for the program or is it originating in the small business? The technology may originate with the research institution, the small business, or a combination of the two. In the STTR Program, the assumption is that the research institution will be the primary originator of the new concept. However, data to determine the extent to which research institutions are providing the technologies are not currently available. Neither SBA nor the agencies have collected this information. 
The relative roles of the research institution and the small business as the source of the technology bear directly on the need for the STTR Program. If a high percentage of the ideas are originating with small businesses rather than with research institutions, this finding would raise questions about the need for the program. On the other hand, if a high percentage of ideas are originating with research institutions, this finding would suggest that the program was achieving the first step in moving ideas from research institutions to small businesses. Second, if the program is effective in moving ideas from research institutions to small businesses, then the next logical question concerns whether their collaboration is effective in moving them to the marketplace. This question can be approached from two directions: (1) Short-term views of how well the collaboration is working in general and (2) long-term data on actual commercialization. Information on how well the collaboration is working can be obtained in the near future. Information on actual commercial outcomes will require a greater amount of time before it can be obtained. Generally, 5 to 9 years are needed to turn an initial concept into a marketable product. Third, because one important difference between the two programs is that STTR makes a small business/research institution collaboration mandatory, the question arises whether the SBIR Program could accomplish the objective of transferring technology from research institutions to the private sector without mandatory collaboration. The rationale for the STTR Program tends to assume that such collaborations were relatively rare in the SBIR Program. However, NIH’s Program manager told us that, in an SBIR survey undertaken by NIH several years ago, collaboration between small businesses and universities was already evident in well over half of NIH’s SBIR projects. 
By contrast, the Army's program manager believed that STTR's impact will be greater in the Army than in agencies such as NIH because the Army has had a lesser degree of involvement with universities and other research institutions in the past. Given the apparent variation from one agency to another and the lack of current data, no definite conclusion can be drawn at present concerning the need for STTR in forging new collaborations. In summary, the quality of both the SBIR and STTR Programs appeared favorable at the time of our reports, although it was too early in each case to make a conclusive judgment about the long-term quality of research. In addition, the agencies have taken steps to address other concerns such as duplicate funding of SBIR projects and potential conflicts of interest in the STTR Program. Overall, the indicators relating to STTR in its first year provide evidence of a potentially promising program. More time will be needed, however, to determine whether the program is meeting a unique need or duplicating the accomplishments of the SBIR Program. Several key questions relating to the transfer of technology from research institutions to the marketplace are relevant in determining the need for the STTR Program. This concludes my statement. I would be happy to respond to any questions you or the members of the Committee may have.
GAO discussed the Small Business Innovation Research (SBIR) Program and the Small Business Technology Transfer (STTR) Pilot Program. GAO noted that: (1) the quality of winning SBIR and STTR proposals appears to be good, but it is too early to assess the programs' actual results; (2) the increase in available SBIR funding did not appear to affect the program, since the competition for SBIR funding was high and the agencies' ratios of awards to proposals remained essentially constant; (3) despite the increased funding, many SBIR proposals that were deemed worthy did not receive funding; (4) the Small Business Administration is taking steps to implement recommendations to reduce duplicate funding of similar research; (5) the five agencies responsible for STTR programs are taking steps to prevent entities that submit proposals from evaluating their own and other proposals; (6) there is no consensus on the need for the STTR program and its effect on agency research and development, and it will take years before its effectiveness in transferring technology from research institutions to the marketplace can be assessed; and (7) assessments of STTR programs must determine whether innovative ideas are originating with the research institutions more than with the small businesses, whether the collaboration is effective in moving technology to the marketplace, and whether the SBIR program could accomplish the technology transfers without mandatory collaboration.
To achieve national objectives, the federal government relies on complex networks and partnerships across federal, state, and local governments. Grants are one tool the federal government uses to achieve national priorities through nonfederal parties, including state and local governments, educational institutions, and nonprofit organizations. Federal grant outlays to state and local governments have generally increased as measured in constant fiscal year 2015 dollars from $230 billion in fiscal year 1980 (or $91 billion in nominal dollars) to $624 billion in fiscal year 2015 (see fig. 1). Of the approximately $275 billion in non-Medicaid grants to state and local governments in fiscal year 2015, almost $186 billion in annual appropriations went to fund discretionary grant programs, a portion of which were competitively awarded. Grants to state and local governments represented 16 percent of federal spending in fiscal year 2015. Competitively awarded federal grants generally follow a life cycle comprising various stages: (1) pre-award (public notice and application); (2) award; (3) implementation; and (4) closeout. Once a grant program is established through legislation—which may specify particular objectives, eligibility, and other requirements—a grant-making agency may impose additional requirements on recipients. OMB's Uniform Guidance establishes several requirements for competitive grant awards, including that federal awarding agencies: (1) notify the public of the grant opportunity through an announcement, or public notice, which includes providing the applicant with sufficient information to help them make a decision about whether to submit an application and the criteria used to evaluate the application; (2) establish a merit-review process for competitive grants; and (3) develop a framework for risk assessment of applicants for competitive grants.
The pre-award process varies from grant to grant, but it generally involves preparing and posting the public notice on the federal government's web portal, Grants.gov; development and submission of applications by applicants; review of applications by the agency, an external panel, or both; and agency award decisions (see fig. 2). During the application review, the Uniform Guidance recommends that applications be rated against pre-established criteria found in the public notice used to evaluate merit. This rating can be either quantitative (e.g., percentages or points) or qualitative (e.g., identifying applications as highly recommended or not recommended). The Uniform Guidance directs that agencies disclose in their public notice the relative weights or point values assigned to the merit-based criteria, providing applicants with information about how the criteria will be applied. After applications are rated by agency officials, an external panel, or both, the applications may be ranked. Applications recommended for funding are forwarded to an awarding official within the agency. In the award stage, the agency identifies successful applicants and announces award funding. The implementation stage includes payment processing, agency monitoring, and recipient reporting, which may include collection of financial and performance information. The closeout phase includes preparation of final reports, financial reconciliation, and any required accounting for property. Audits may occur multiple times during the life cycle of the grant and after closeout. In addition to the requirements established by the Uniform Guidance, our prior work has identified practices federal awarding agencies should follow to ensure a fair and objective evaluation and selection of discretionary grant awards.
These practices include communicating with potential applicants before the competition begins by providing information on available funding, key dates, funding priorities, types of projects to be funded, competition rules such as eligibility, and technical reviews. In 2010, Congress included a provision in statute for GAO to identify programs, agencies, offices, and initiatives with duplicative goals and activities within departments and government-wide and report to Congress annually. Since March 2011, we have issued annual reports to Congress in response to this requirement. The annual reports describe areas in which we found evidence of fragmentation, overlap, or duplication among federal programs. In these reports we establish the following definitions: Fragmentation refers to circumstances in which more than one federal agency is involved in the same broad area of national need and opportunities exist to improve service delivery. Overlap exists when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. We have stated that overlap might not necessarily lead to actual duplication, and some degree of overlap and duplication may at times be justified. Although the grant programs reviewed for this report represent a diverse collection of federal funding opportunities, in our previous work on grants for scientific research we have also used the term duplication to mean research that is scientifically unnecessary to replicate or complement prior research results, or research inadvertently conducted or funded that is very similar to other research.
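As an illustrative aside, the three definitions above can be read as a decision rule over program attributes, checked from most to least specific. The sketch below encodes that reading in Python; the Program fields and example values are hypothetical, and GAO's actual analysis is qualitative rather than automated.

```python
from dataclasses import dataclass, field

@dataclass
class Program:
    """Hypothetical attributes of a federal program, for illustration only."""
    name: str
    need_area: str                      # broad area of national need
    goals: set = field(default_factory=set)
    activities: set = field(default_factory=set)
    beneficiaries: set = field(default_factory=set)

def classify(a: Program, b: Program) -> str:
    """Apply the report's definitions to a pair of programs."""
    # Duplication: same activities provided to the same beneficiaries.
    if a.activities & b.activities and a.beneficiaries & b.beneficiaries:
        return "duplication"
    # Overlap: similar goals, similar activities, or similar beneficiaries.
    if a.goals & b.goals or a.activities & b.activities or a.beneficiaries & b.beneficiaries:
        return "overlap"
    # Fragmentation: more than one program in the same broad area of need.
    if a.need_area == b.need_area:
        return "fragmentation"
    return "none"
```

Checking duplication before overlap matters, since any duplicative pair also satisfies the overlap test; the ordering mirrors the report's point that overlap does not necessarily rise to duplication.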
In 2011, OMB created COFAR, an interagency council charged with providing policy-level leadership for the grants community and implementing reforms to improve the effectiveness and efficiency of federal grants. COFAR's activities include providing recommendations to OMB on policies and actions necessary to effectively deliver, oversee, and report on grants and cooperative agreements, as well as sharing with executive departments and agencies best practices and innovative ideas for transforming the delivery of this assistance. COFAR is intended to identify emerging issues as well as challenges and opportunities in grants management and policy, including, as appropriate, improvements to the competitive grant-making process. COFAR is also to serve as a clearinghouse of information on innovations and best practices in grants management and, as appropriate, to sponsor and lead new efforts for innovation. The council includes the OMB Controller and officials from the eight executive agencies that provide the largest amounts of financial grants assistance: the Departments of Agriculture, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Labor, and Transportation. In addition, in order to represent the perspectives of other agencies that administer grants and cooperative agreements, COFAR includes a senior policy official from one other agency, selected by OMB, to serve a 2-year term (see fig. 3). In a 2013 report, we found that COFAR had not released to the public an implementation schedule that included key elements such as performance targets; mechanisms to monitor, evaluate, and report on progress made toward stated priorities; and council members who can be held accountable for those priorities.
We recommended that the director of OMB, in collaboration with the members of COFAR, develop and make publicly available an implementation schedule that includes performance targets, council members who can be held accountable for priorities, and mechanisms to monitor, evaluate, and report on results. The Uniform Guidance gives agencies flexibility to design their merit-review process but states that all criteria used to influence final award decisions should be clarified in the public notice for applicants, and public notices should include the relative weights the agency will apply to these criteria. The public notices for the programs we reviewed in four of six subagencies included merit-based criteria for evaluating applications and the related maximum point values to be applied to these criteria. We found that common review criteria at these four selected subagencies included the merit of the project design, previous organizational experience with the type of program, and whether the financial and budget support seemed appropriate. For example, at USDA, NIFA's Hispanic Serving Institutions Education Programs and the three FNS programs we reviewed clearly articulated the review criteria and related point values in the public notices. At NIH, all four research grant programs we reviewed included the same general criteria in their public notices and the same scoring values. In addition, we found that all three CDC grant programs included the same general review criteria in their public notices and clearly articulated the related point values. However, at the selected Interior subagencies (NPS and FWS), we found several public notices in which the criteria for evaluating applications or the related point values for each criterion were not clearly stated. One of the three public notices for grant programs we reviewed from NPS did not inform the applicants of the review criteria or their related weights.
Specifically, we found in our review that the public notice for NPS's Historic Preservation Fund Grants to Underrepresented Communities (HPF) requested standard grant application information from applicants, but it did not include the evaluation criteria or the related points. NPS officials explained, and we confirmed, that this had been corrected in the public notice for HPF fiscal year 2016 grants, which included both the criteria and scoring. Further, two of the three public notices from FWS did not inform applicants of weights related to each criterion used to evaluate grant applications. Specifically, FWS's Aquatic Invasive Species (AIS) grant program clearly identified the criteria by which applicants would be evaluated in the public notice. However, the AIS program public notice did not include maximum point values related to each criterion. Similarly, FWS's Conservation Program to Introduce Youth to Natural Resource Conservation included all review criteria within the public notice, but it did not include the related weights of the evaluation criteria. The program staff agreed that the criteria point values should have been explained to the applicants in the public notice, and they said the point values would be included in the grant public notice for subsequent years. Unless the criteria, and the values that will be assigned to those criteria, are made transparent to applicants before an application is submitted, applicants may not know whether their proposals will meet the review criteria or how best to focus their efforts. Additionally, applications that better align with review criteria can facilitate a more effective and efficient merit-review process for federal awarding agencies. The Uniform Guidance also states that if agencies will consider cost sharing and any other program policy factors that may be used to determine federal award decisions, these factors must be explicitly described in the public notice.
Under federal research proposals, voluntary committed cost sharing is not expected. However, cost sharing and other program policy factors may be considered during a merit review if they are explicitly described in the public notice and considered in accordance with agency regulations. We determined that four of the six subagencies we reviewed clearly articulated cost sharing requirements in their public notices for selected grant programs. For example, the public notices for all NIH and CDC grant programs in our review stated either that cost sharing was not required or that it would be used in the grant review process and described how it would be used. NIFA and FNS clearly explained cost sharing requirements and how they would be used in the grant review process. In contrast, the public notices for grant programs at the two selected Interior subagencies discussed cost sharing and matching but did not clarify when or how this factor would influence the subagency's final grant award decisions. For example, the public notice for FWS's AIS grant program stated that the use of matching grant funds was not a requirement. However, the AIS public notice also encouraged both matching funds and partnerships to augment project resources and said these factors would be considered in the applicant ranking process. AIS grant staff told us that they had planned to use cost sharing or matching only to break tie scores for applicants, but that it was never used in the evaluation of applicant proposals since no ties in the scoring occurred. The use of the cost sharing or matching statement in the public notice—without explanation of how this factor would be used to evaluate applicants—reduced the transparency of the grant review process because the way in which this information was to be assessed was unclear to applicants. The AIS grant staff told us that language regarding the consideration of cost sharing or matching had been removed from all subsequent AIS public notices.
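To make the mechanics concrete, the following minimal sketch shows a weighted merit rating in which a proposed match percentage is used only to break tie scores, as AIS staff described planning to do. The criteria, weights, and applicant data are hypothetical; this is an illustration of the general approach, not FWS's actual procedure.

```python
def merit_score(ratings: dict, weights: dict) -> float:
    """Weighted sum of 0-100 criterion ratings; the weights correspond to
    the relative point values the public notice is supposed to disclose."""
    return sum(ratings[criterion] * weight for criterion, weight in weights.items())

# Hypothetical criteria and relative weights (to be stated in the public notice).
weights = {"project design": 0.5, "experience": 0.3, "budget": 0.2}

# Hypothetical applicants: panel ratings plus a proposed match percentage.
applicants = {
    "Applicant 1": {"ratings": {"project design": 90, "experience": 80, "budget": 70}, "match": 0.25},
    "Applicant 2": {"ratings": {"project design": 80, "experience": 95, "budget": 95}, "match": 0.10},
}

# Rank primarily by merit score; the match percentage matters only when scores tie.
ranked = sorted(
    applicants.items(),
    key=lambda item: (merit_score(item[1]["ratings"], weights), item[1]["match"]),
    reverse=True,
)
```

Because the match percentage enters the sort key only after the merit score, two proposals with identical merit scores would be ordered by match, which is the tiebreaker role AIS staff described.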
Similarly, in our review of the public notice for NPS's HPF grant program, we noted that although the grant program did not require cost sharing or matching from applicants, information on cost sharing or matching funds was requested in the budget section of the applicants' grant proposals. However, no information was provided on how cost sharing or matching funds would affect the final award decisions. In addition, the HPF grant reviewer guidance available at the time we reviewed the grant program instructed the reviewers to include cost sharing as part of their consideration of the budget criterion, which is scored. Consideration of cost sharing or matching is also included in the final scoring spreadsheets, indicating that it was a quantitative factor in the evaluation process and affected final award decisions. Uncertainty and confusion about the impact of cost sharing and matching could discourage applicants from submitting proposals for the HPF grants. The new guidance NPS issued, which was not available at the time we selected the HPF program for review, addresses the issue. There are risks associated with grant applicants' ability to implement the proposed grant project while also having financial controls in place to appropriately account for federal funds. For competitive grants, federal awarding agencies must have a process for assessing the risk posed by applicants, and the process is to be conducted before applicants receive a grant award. The Uniform Guidance states that federal awarding agencies may consider the following as they assess applicant risk: financial stability; quality of management systems; history of performance; reports and findings from audits; or the applicant's ability to implement statutory, regulatory, or other requirements imposed on nonfederal entities.
To support this requirement, officials from the six selected subagencies in our review described various management tools they use to assess the risk level of their grant applicants, including the use of internal risk assessment review forms; risk assessment checklists; and review of OMB-designated repositories of government-wide eligibility qualification and financial integrity information. CDC officials said that the agency is updating its current risk assessment framework to implement a more systematic approach that follows the Uniform Guidance for assigning all grantees a risk score. CDC grant applicant risk assessments were conducted after award recommendations were made by the peer review panels. CDC officials said their grant risk assessments included reviewing internal CDC databases for grantee history, audit reports, financial data, Single Audit reports through the Federal Audit Clearinghouse, reports from the System for Award Management, and internal CDC and HHS agency-wide grant award history. CDC officials also explained that grant applicants receiving continuation funding from previously awarded grants receive a more streamlined risk review using only the Federal Audit Clearinghouse and System for Award Management reports. CDC officials also noted that new CDC grant applicants are more likely to be labeled as high- or medium-risk due to uncertainty regarding their financial and grant performance history. In addition, CDC grant application reviewers use a checklist that includes some of the factors for reviewing applicant risk among other grant award requirements. NIH grant staff explained that they use the System for Award Management and the Federal Awardee Performance and Integrity Information System to determine if potential exclusions, prior performance, or business ethics issues exist. 
In addition, NIH staff said they used the National External Audit Review to look for negative audit findings involving applicants, and they document the results of an applicant's risk assessment in NIH's grant management checklist. Similar to CDC, NIH grant staff review new applicants more closely and may require them to complete a separate form to assess their financial systems, according to the staff with whom we spoke. FNS has a pre-award risk assessment policy that requires applicants to submit a questionnaire that is entered into a pre-award assessment tool, which triggers a number of flags designating the applicant's risk level. FNS staff said that pre-award risk assessment is a two-part process that includes analysis of its risk assessment tool and a separate review of the federal repositories along with other information provided in applicant forms. According to NIFA officials, all grant programs at NIFA have risk assessments that are conducted through the subagency's Awards Management Division (AMD). AMD staff told us they use a risk assessment form to request an applicant's financial information for the previous 2 years, articles of incorporation, subsidiaries, and all other affiliations. AMD staff explained that they assess risk levels for all applicants; when they determine that an applicant poses a high risk, grant funds are restricted or may not be awarded to the applicant. FWS officials said that a risk assessment must be performed for every applicant that receives a grant award. The risk assessment results are categorized using a table with descriptions of low, medium, and high risk levels that determine related grant monitoring activities. According to FWS risk assessment guidance, if a grantee receives a final high risk level or there are other concerns, the grantee may receive more frequent monitoring, and site visits may be required if the program determines they should be a condition of the award.
FWS officials also explained that some FWS grant programs conduct the risk assessments in the initial screening process and other grant programs conduct them closer to the grant award decision; this flexibility allows each program to apply the assessment when it determines the assessment is necessary. FWS officials said that changes are being made to their fiscal year 2017 risk assessment process, which will require an enhanced assessment of recipient financial recordkeeping capabilities and additional review of Single Audit and other publicly available information. In early 2016, NPS established procedures for assessing an applicant's financial risk by making the grant program awarding officer responsible for risk assessments and giving that awarding officer the flexibility to use a variety of tools for assessing applicant risk. Based on the pre-award risk assessment, the awarding officer must assign the recipient a risk rating of high, moderate, or low. High risk designations made by the awarding officer must include the rationale and any specific conditions imposed on the grantee. According to officials, NPS must approve the high risk designation, and the awarding officer must send a written notice to the recipient stating the reason for the designation and the actions needed to remove the applicant from the high risk designation list. NPS officials explained that, like FWS, they are making the same changes to their fiscal year 2017 risk assessment process because the Department of the Interior will be revising its agency-wide risk assessment policy. Two of the selected subagencies in our review have guidance in place instructing grant management staff to review applicants' other funding for potential duplication and overlap prior to making a grant award. In contrast, the other four selected subagencies relied on informal mechanisms to identify potential duplication and overlap in applicants' grant funding.
While OMB’s Uniform Guidance does not direct agencies to review applicants for duplicative or overlapping funding, federal standards for internal control state that management should use quality information to achieve objectives, including relevant data from reliable internal and external sources. Internal control standards also state that management should document its policies. To the extent that grant-making agencies have as an objective that their awards are not overlapping or duplicative—or otherwise acknowledge that identifying unnecessary duplicative or overlapping grant funding supports effective stewardship of federal grant dollars—these agencies would benefit from consistent approaches to collecting this data and from documenting these approaches in agency policy. We did not identify any instances of overlap or duplication at the grant award level in our review of the 19 grant programs (that is the applicant receiving funding for the same or similar work). However, our previous work has pointed to potential risks that can arise—such as awarding duplicative grants—when agencies do not have written guidance in place to direct staff to check for duplication when making competitive award decisions. For the four subagencies that we found did not have formal approaches to identifying overlapping or duplicative grant funding, developing guidance for this purpose would help mitigate the potential risks we previously identified. We found that two of the selected subagencies in our review, NIH within HHS, and NIFA within USDA, had guidance in place instructing grant management staff to review applicants for potential duplication and overlap prior to making a grant award. 
NIH’s Grants Policy Statement and “just in time” review process and NIFA’s Federal Assistance Policy Guide instruct grant managers to review pending grant awards for scientific, budgetary, and commitment overlap to ensure prior to issuing an award that potential duplication of the award is addressed. These guidance documents clearly instruct grant-making staff that duplication and overlap in the pending grant award, whether scientific, budgetary, or commitment, is not permitted. NIH officials explained that they implement their guidance on identifying duplication and overlap through various review mechanisms. NIH officials said that prior to making a grant award decision they request information on active and pending “other support” for all key personnel who would be receiving the grant funding. “Other support” includes all financial resources, whether federal, nonfederal, commercial or institutional, that would be used in direct support of each individual’s research endeavors, including but not limited to research grants, cooperative agreements, contracts, and institutional awards. After listing all support, applicants must also summarize for each individual any potential overlap with active or pending projects and for the overall application any overlap in terms of the science, budget items covered, and any individual’s time commitment. According to NIH officials, the review of the applicant’s “other support” documentation is recorded in NIH’s grant management and program checklist, a tool NIH grant managers use to ensure grant applications meet all requirements. If the research plan in the application duplicates other pending applications or an active award, the grant applicant must negotiate with NIH staff concerning which grant will be funded. If NIH staff conclude partial duplication exists, then modification of the application, other pending applications, or the active award is necessary before NIH will make the grant award. 
Depending on the amount of scientific overlap, staff might choose not to fund the pending application. If scientific, budgetary, or commitment overlap has been identified, NIH staff are required to document concerns along with specific recommendations for resolution. NIFA’s grant programs have taken different approaches to implement the subagency’s guidance regarding identifying duplication and overlap in grant awards. For example, NIFA’s Agriculture and Food Research Initiative has its own grant policy manual that offers detailed instructions on identifying duplication and overlap prior to making a grant award. Grant managers for the Agriculture and Food Research Initiative review applicants’ self-identified current and pending support when they apply for funds. Additionally, NIFA’s grant managers review the Current Research Information System—USDA’s primary system containing project-level information on its ongoing and completed research projects—to identify any other NIFA grants that may have funded the same project. The program staff log the date of this search and the key words used to demonstrate how they looked for duplication. Finally, Agriculture and Food Research Initiative program staff verbally confirm with the grant applicant that there is no duplication or overlap when they call the successful applicant to notify them of the award decision. In contrast to the Agriculture and Food Research Initiative’s grant policy manual and its implementation of the guidance during the pre-award review process, officials from the two other selected NIFA grant programs used other techniques to identify potential duplication and overlap. For example, officials for NIFA’s Hispanic Serving Institutions grant program explained they ask whether a grant applicant has submitted a similar application to another federal agency. 
If so, the applicant must report it under "current and pending support," and prior to making an award decision, NIFA officials review the key personnel section of the application to determine if anyone affiliated with the grant application is being funded at greater than 100 percent (i.e., a review to identify commitment overlap). Four of the six selected subagencies we reviewed relied on informal mechanisms to identify duplication and overlap of grant funding rather than establishing guidance for a formal process, but officials from these subagencies acknowledged the importance of trying to identify potentially duplicative and overlapping grant funding. For example, officials from three selected CDC grant programs described informal approaches taken to look for duplicative funding at the grant award level. While CDC officials explained that their process used the same grant review tools and followed the same guidelines as NIH, they acknowledged they did not use a formal mechanism to identify potential duplication of awards. CDC officials explained that CDC did not have a methodology to ensure that its grant awards did not duplicate funding for the same or similar work at the individual grantee level. According to CDC officials, grant management staff are responsible for being knowledgeable of their programs and therefore should know what other funding sources their grantees receive. These officials explained that, for some programs, grant managers may address duplication and overlap by reviewing other funding sources listed in the application and trying to determine whether any duplication or overlap could exist. We found that both selected subagencies within Interior (NPS and FWS) also lacked formal guidance instructing grant management staff to review grant applications for potential duplication and overlap at the grant award level.
NPS officials said they lacked a consistent process to assess potential overlap in funding across different federal agencies or even across different subagencies within Interior (e.g., between NPS and FWS). NPS officials acknowledged that such a process could be helpful to identify potential duplication and overlap across NPS grant programs. FWS officials explained they did not have an official policy to review grant applications for duplication and overlap, but said it is a common topic at pre-award grant panel review meetings. Officials from NPS also pointed out that grants may intentionally fund the same type of project in order to provide grant funding to applicants in different geographical locations. While these officials said it is not likely that unintended duplication occurs, they acknowledged it is possible that grant applicants could receive unintended duplicate funding due to the lack of formal written guidance requiring grant award panels to look for duplication and overlap of funding sources as part of their reviews. Within USDA, we found that FNS lacked a formal guidance mechanism instructing grant management staff to review grant applications for duplication and overlap. However, grant management staff for the three selected FNS grant programs explained that an informal review did exist to identify potential duplication and overlap. Staff for one program described how they reached out to other grant programs with overlapping program goals through informal professional working groups in which staff from different programs meet and share information, including trying to identify potential duplicate or overlapping grant funding by sharing lists of potential grantees. COFAR has made limited progress in developing an implementation schedule for achieving its priorities, articulating roles and responsibilities for its council members, and developing a strategy for communicating with stakeholders, as we recommended in 2013.
In 2013, we also evaluated the extent to which COFAR reflected key features of interagency councils that effectively implement their programs, including (1) establishing implementation goals and tracking progress toward these goals, (2) identifying and agreeing on leadership roles and responsibilities for the council members, and (3) ensuring that all relevant participants are included. We recommended that the director of OMB, along with COFAR, develop and make publicly available an implementation schedule of the COFAR priorities, clarify roles and responsibilities for COFAR members, and improve efforts to develop an effective two-way communication strategy with the grant recipient community. OMB generally concurred with these recommendations. In our 2013 report, we found that COFAR needed to establish an implementation schedule and track progress toward priorities to help pinpoint performance shortfalls and suggest midcourse corrections, including any needed adjustments to future priorities and milestones. In February 2013, COFAR posted its original priorities for fiscal years 2013 through 2015 to the U.S. Chief Financial Officers Council website. These priorities were revised and reposted in December 2013. According to OMB staff, the priorities were developed through a series of COFAR meetings to ensure that the priorities reflected the way grants management issues should be framed. OMB staff told us that the priorities for fiscal years 2016 through 2017 remain largely unchanged from those for fiscal years 2013 through 2015 since COFAR has a multiyear focus. COFAR’s publicly stated priorities are shown in figure 4. The most significant changes were the elimination of “validated public financial data” from COFAR’s priorities for fiscal years 2013 through 2015 and the addition of “spending transparency” as a new priority for fiscal years 2016 through 2017. 
To address each priority, COFAR identified challenges, accomplishments, and short- and long-term deliverables to show the implementation status of each priority in its priority document. Although COFAR released its updated priorities for fiscal years 2016 through 2017, it continues to face the same challenges that we identified in our 2013 report. As of September 2016, COFAR had not yet released to the public an implementation schedule that includes key elements such as performance targets; mechanisms to monitor, evaluate, and report on progress made toward stated priorities; and council members who can be held accountable for those priorities. For example, in the workforce development priority, COFAR reported finalizing and publishing a “Grants 101” course outline and content of several modules for the federal workforce as an accomplishment and short-term deliverable. Although COFAR developed and implemented the first three online training course modules, a mechanism does not exist to determine the extent to which the courses are used. According to OMB staff, they have not conducted a survey for users of the training, although they reported having received favorable feedback from some users. We have found that agencies engaged in collaborative efforts need to create the means to monitor and evaluate their efforts to better identify areas for improvement. Reporting on these activities can help decision makers, as well as stakeholders, to obtain feedback for improving both policy and operational effectiveness. In our 2013 report, we recommended that the Director of OMB, in collaboration with the members of COFAR, develop and make publicly available an implementation schedule that includes performance targets, council members who can be held accountable for priorities, and mechanisms to monitor, evaluate, and report on results. OMB generally concurred with our recommendation.
We continue to believe that implementing our 2013 recommendation and developing a detailed implementation schedule can help ensure progress toward COFAR’s priorities. In 2012, we reported that when interagency councils clarify who will do what, identify how to organize their joint and individual efforts, and articulate steps for decision making, they enhance their ability to work together and achieve results. In our previous work, we have found that agencies involved in grants management reforms are not always clear on their roles and responsibilities, which may cause such initiatives to languish. In 2013, we reported that COFAR lacked clearly articulated roles and responsibilities for its members. We recommended that the Director of OMB, in collaboration with the members of COFAR, clarify the roles and responsibilities for various streamlining initiatives and steps for decision making, in particular how COFAR would engage with relevant grant-making agency stakeholders and use agency resources. In response, OMB staff acknowledged that more needed to be done to clarify roles and responsibilities. As of September 2016, OMB could not provide us with any statement identifying or documentation supporting the various roles and responsibilities of the COFAR members. According to OMB staff, COFAR is an executive group designed to provide agencies with a forum to make recommendations to OMB to guide federal grant-making policy. Further, OMB staff told us that OMB does not prescribe what agencies’ roles and responsibilities will be as COFAR members. We continue to believe that implementing our 2013 recommendation and defining roles and responsibilities of COFAR members can help enhance cooperation. In 2012, we reported that failure to effectively engage with stakeholders to understand and address their views can undermine or derail an initiative.
To that end, it is critical that agencies identify who the relevant stakeholders are and develop a two-way strategy for communicating with stakeholders. According to OMB staff, COFAR and the Financial Assistance Committee for E-Gov (FACE) provide a two-way communication mechanism with the entire grant-making community to engage in interactions on policy, grant-making, operational, and technical issues. OMB staff said FACE was formed in 2011 after the creation of COFAR and allows grant-making agencies to raise issues to COFAR. However, limited information is publicly available about FACE. According to a General Services Administration website, FACE is a functional community group dedicated to addressing the needs of the federal financial assistance community as it relates to collection, usage, dissemination, and display of federal financial assistance data. We interviewed officials from associations representing the grantee community, state and local governments, universities, and nonprofit recipients about their two-way communication with COFAR or FACE. Selected association officials we interviewed reported that they had interactions with OMB but did not interact directly with COFAR or FACE and were generally not familiar with FACE as a viable option for these associations to communicate with COFAR. Officials from two of the associations we interviewed said that in meeting with OMB staff about their grant-related issues, they were told that concerns raised to OMB would be brought to COFAR to be addressed. This raised concern for some association officials because they did not believe issues that were unique to their members and the grantee community were being raised and adequately represented to COFAR. 
For example, an association official raised concerns that since the National Science Foundation rotated off COFAR, the research community’s perspective was no longer being represented in terms of grant policy, and the research community would be better served if it were able to communicate directly with COFAR rather than through OMB. In our 2013 report, we recommended that the Director of OMB, in collaboration with the members of COFAR, improve efforts to develop an effective two-way communication strategy that includes the grant recipient community, smaller grant-making agencies that are not members of COFAR, and other entities involved with grants management policy. OMB agreed with our recommendation that it needed to work with COFAR to develop an effective two-way communication strategy that includes the grant recipient community. According to OMB staff, they now communicate with nonfederal entities primarily through webcasts on best practices, by participating in conferences, and by making presentations at various nonfederal organizations about implementation of the Uniform Guidance. We continue to believe that fully implementing our 2013 recommendation can help improve effective two-way communication. The more than $600 billion in federal grants to state and local governments made in fiscal year 2015 address diverse national objectives. Achievement of these objectives is in part dependent on effective implementation of merit-based processes for grantee selection. A fair and transparent process to review grant applications and appropriately assess grantee risk is essential to making competitive award decisions. The merit-review process for competitive grants can take different forms, such as internal agency review or external peer review panels.
The Uniform Guidance establishes requirements and guidelines that offer some opportunities to standardize the review process as well as the risk assessment framework agencies apply to grantees prior to making an award. Our review determined that the selected subagencies all had merit-review processes in place for competitive awards and all had established risk assessment processes to identify potential grantee risks related to their ability both to implement the proposed grant project and to maintain appropriate financial controls to account for federal funds. However, we found that not all subagencies identified in their public notices the merit selection criteria they would use, the weighted values that would be applied to those criteria, or how cost sharing would be considered, limiting the transparency of the application and review process for both the applicant and the reviewing agency. In addition, review processes at only two subagencies routinely incorporated a check for duplication and overlap at the grant award level—a useful tool to promote stronger oversight of federal grant dollars. The other four subagencies took a less formal approach to identifying potential duplication or award overlap, although they acknowledged the importance of having information about applicants’ other funding sources, if any. Requiring reviews for duplicative or overlapping awards and establishing the requirement in agency guidance would promote stronger controls to help ensure that federal grant funds are efficiently awarded and that potential duplication or overlap is avoided. In 2013, we identified certain challenges related to COFAR’s priorities and its lack of an implementation schedule. We recommended that OMB make publicly available a detailed implementation schedule for COFAR, clarify the roles and responsibilities of COFAR members, and develop an effective two-way communication strategy with relevant stakeholders.
COFAR continues to lack a publicly available detailed implementation schedule and a method to evaluate and monitor its progress toward its priorities. The absence of assigned roles and responsibilities for COFAR members and a means to include all grantee stakeholder communities in grant policy development indicate that action is still needed to fully implement our prior recommendations. Implementing these recommendations would help ensure transparency and open communication with the public, federal agencies, and grantee stakeholders. 1. To improve transparency in the grant merit-review process, we recommend that the Secretary of the Department of the Interior direct the Fish and Wildlife Service to issue written guidance to require all competitive grant programs to clarify in the public notice of funding opportunity all review criteria, including cost sharing factors as relevant, and their related scores to be used to make final award decisions. 2. To reduce the risk of duplicative and overlapping funding at the grant award level, we recommend that the Secretary of the Department of the Interior direct the National Park Service and the Fish and Wildlife Service to issue written guidance that ensures their grant management staff review grant applications for potential duplication and overlap before awarding their competitive grants and cooperative agreements. 3. To reduce the risk of duplicative and overlapping funding at the grant award level, we recommend that the Secretary of Agriculture direct the Food and Nutrition Service to issue written guidance that ensures its grant management staff review grant applications for potential duplication and overlap before awarding competitive grants and cooperative agreements. 4. 
To reduce the risk of duplicative and overlapping funding at the grant award level, we recommend that the Secretary of Health and Human Services direct the Centers for Disease Control and Prevention to issue written guidance that ensures its grant management staff review grant applications for potential duplication and overlap before awarding competitive grants and cooperative agreements. We provided a draft of this report to the Office of Management and Budget and the Departments of Health and Human Services, the Interior, and Agriculture for review and comment. HHS provided a written response, and its letter is reprinted in appendix II. The other agencies provided comments by email or orally. All the agencies agreed with the recommendations made to them. Specifically: In its written comments, HHS stated that CDC will draft guidance to reduce the potential for duplication or overlap before awarding a grant or cooperative agreement. After reviewing the draft report, Interior provided oral comments and additional documentation, dated after the period covered by our sample of grant programs and our related document collection, showing that it had established mandatory templates for all NPS grant public notices. The templates, as updated in July 2016, require that the review criteria specify the scoring to be used and clarify whether and how cost sharing would be considered in evaluating applications. Consequently, we removed a recommendation regarding NPS’s lack of written guidance on these matters. An email from Interior’s Audit Liaison Office also states that Interior agreed to take actions to address the recommendation we made to NPS and FWS to issue written guidance that ensures their grant management staff review grant applications for potential duplication and overlap before awarding their competitive grants and cooperative agreements.
In an email from the audit coordinator, USDA responded that it agreed with the recommendation made to FNS and will prepare a statement of action to address the recommendation when our report is issued. OMB staff stated in oral comments and in an email that the recommendations that we made related to COFAR in 2013, and restate in this report, are not legally required, but they agreed that to drive accountability it is important to promote transparency of interagency councils and for COFAR to continue to provide the public information about its priorities and progress made. We continue to believe that fully implementing our 2013 recommendations—by developing and making publicly available an implementation schedule of priorities; defining roles and responsibilities of COFAR members; and improving effective two-way communication—will enhance the transparency and accountability of an interagency council. We are sending copies of this report to the heads of the Departments of Health and Human Services, the Interior, and Agriculture; the Director of OMB; and interested congressional committees and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix III.

Program objectives

- To improve the capacity of state and local health departments, U.S. Territories, and Native American Tribal health agencies to identify, address, and close domestic drinking water program performance gaps using performance improvement activities that align with the 10 Essential Environmental Public Health Services; improve efficiency and effectiveness of drinking water programs; and identify and reduce exposures from waterborne contaminants.
- To bring public health and epidemiologic principles to the aid of populations affected by complex humanitarian emergencies.
- To promote optimal and equitable health in women and infants through public health surveillance, research, leadership, and partnership to move science to practice.
- To assist public and private nonprofit institutions and individuals to establish, expand, and improve biomedical research and research training in infectious diseases and related areas; to conduct developmental research; to produce and test research materials.
- To conduct and support laboratory research, clinical trials, and studies with people that explore health processes.
- To conduct and support laboratory research, clinical trials, and studies with people that explore health processes.
- To support hypothesis-, design-, technology-, or device-driven research related to the discovery, design, development, validation, and application of technologies for biomedical imaging and bioengineering; to support extramural research funded by the National Institute of Neurological Disorders and Stroke; to expand and improve the Small Business Innovation Research program; to utilize the Small Business Technology Transfer program; and to conduct and support laboratory research, clinical trials, and studies with people that explore health processes.

Program objectives

- To provide technical and financial assistance to other federal agencies, states, local governments, Native American tribes, nongovernmental organizations, citizen groups, and landowners on the conservation and management of fish and wildlife resources, including minimizing the establishment, spread, and impact of aquatic invasive species.
- Interior/Fish and Wildlife Service: To provide experiential, education, and employment program opportunities for youth of all ages to participate in conservation activities conducted by the Fish and Wildlife Service or in collaboration with other Interior bureaus.
- To support projects complementary to National Park Service program efforts in resource conservation and protection, historical preservation, and environmental sustainability.
- To provide matching grants to states for the identification, evaluation, and protection of historic properties.
- To utilize qualified youth or conservation corps to carry out appropriate conservation projects that the Secretary is authorized to carry out under other authority of law on public lands.
- To enhance the nutritional knowledge of Food Distribution Program on Indian Reservations participants underserved by the Supplemental Nutrition Assistance Program-Nutrition Education.
- USDA/Food and Nutrition Service: To assist eligible entities, through grants and technical assistance, in implementing farm to school programs that improve access to local foods in eligible schools.
- USDA/Food and Nutrition Service: To ensure that school nutrition personnel have the training and tools they need to plan, purchase, and prepare safe, nutritious, and enjoyable school meals.
- To create 30 mathematical model units (5 per grade level for grades K-5) needed for teacher use in Department of Defense Education Activity schools during the remainder of the 2015-16 school year.
- To promote and strengthen the ability of Hispanic-Serving Institutions to carry out higher education programs in the food and agricultural sciences.
- To establish a competitive grants program to provide funding for fundamental and applied research, extension, and education to address food and agricultural sciences. This grant is also funded under Catalog of Federal Domestic Assistance 93.286, 93.853, and 93.865.

In addition to the contact named above, Thomas M. James (Assistant Director), Keith O’Brien (Analyst-in-Charge), Sandra L. Beattie, Crystal Bernard, Amy Bowser, Steven Flint, Joseph Fread, and Michelle Sager made major contributions to this report.
Other key contributors include Joseph Cook, Donna Miller, John Neumann, Cynthia Saunders, and Travis Schwartz.
To improve the effectiveness and efficiency of grant-making in the federal government, in 2011 OMB created COFAR, and in 2014 OMB's Uniform Guidance came into effect. This included requirements for federal agencies to establish a merit-based review process for competitive grants and to assess grant applicants' risk. GAO was asked to review the design and implementation of merit-based grant award selection. GAO reviewed the extent to which (1) selected subagencies followed certain required and recommended practices for evaluating competitive awards; (2) selected subagencies had processes to identify duplicative grant funding; and (3) COFAR has made progress in developing an implementation schedule for achieving its priorities. GAO assessed OMB and agency grant guidance for 19 grant programs at 6 subagencies—selected in part based on grant outlays in fiscal year 2014—and interviewed officials from these agencies and OMB as well as from associations representing different types of grantees. GAO's findings are not generalizable. GAO found that all 6 selected subagencies in the Departments of the Interior (Interior), Health and Human Services (HHS), and Agriculture (USDA) applied a risk assessment review before making final grant award decisions for the 19 grant programs examined, as required by the Office of Management and Budget's (OMB) Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (Uniform Guidance). GAO also found that while selected HHS and USDA subagencies generally followed certain other required and recommended practices established in the Uniform Guidance for providing specific information in their notices of public funding opportunity (public notices) announcing grants, selected Interior subagencies did not. 
Specifically, the Uniform Guidance recommends that public notices include (1) the merit-based criteria that will be used to assess grant applications and (2) the relative weights that will be applied to those assessment criteria, and it requires that notices state (3) whether and how cost sharing will be used as a factor in assessing an application. GAO found that for several grant programs in selected Interior subagencies, public notices either did not state merit-based selection criteria, did not state the relative weights assigned to selection criteria, or did not clarify how cost sharing would be used to assess an application. Omitting this information from the public notices limits transparency for potential applicants. OMB's Uniform Guidance does not direct agencies to review applicants for duplicative funding, but federal standards for internal control state that management should use quality information to achieve objectives and that management should document its policies. GAO found that only 2 of the 6 selected subagencies (1 in HHS and 1 in USDA) had developed formal processes and guidance for identifying potentially duplicative funding. GAO's previous work has pointed to potential risks that can arise—such as awarding duplicative grants—when agencies do not have guidance in place to direct staff to check for duplication when making competitive award decisions. Officials from the other 4 subagencies (in HHS, USDA, and Interior) that relied primarily on informal processes for identifying potentially duplicative grant funding acknowledged the importance of identifying information about applicants' other funding before making final grant award decisions. The Council on Financial Assistance Reform (COFAR) updated its priorities for fiscal year 2016 but has made limited progress in planning, coordinating, and communicating its priorities.
COFAR is an interagency council established by OMB to provide policy-level leadership for the grants community and to support reforms to improve the effectiveness and efficiency of federal grants. In 2013, GAO identified challenges related to COFAR's priorities and its lack of a plan to achieve implementation of these priorities, and GAO recommended that OMB provide an implementation schedule for COFAR, clarify roles and responsibilities of COFAR members, and improve two-way communication with stakeholders. However, in this review GAO found that COFAR's challenges remain and that it has still not (1) released an implementation schedule that includes performance targets and evaluation mechanisms; (2) established roles and responsibilities for its members; or (3) made progress in developing effective two-way communication with the grant recipient community and other stakeholders. GAO is making four recommendations to address the concerns identified at the specific subagencies, such as including required and recommended information in public notices for grant opportunities and developing guidance on reviewing applicants for potentially duplicative funding. All agencies agreed with the recommendations.
Corporate tax expenditures are special tax provisions that are exceptions to the normal structure of the corporate income tax system. They represent revenue the federal government forgoes from these tax provisions. The Congressional Budget and Impoundment Control Act of 1974 identified six types of tax provisions that are considered tax expenditures when they are exceptions to the normal tax structure: exemptions, exclusions, deductions, credits, deferral, and preferential tax rates. Treasury and the Joint Committee on Taxation each compile an annual list of tax expenditures by budget function with estimates of the corporate and individual income tax revenue losses for each tax expenditure in their respective lists. Treasury and the Joint Committee on Taxation calculate separately the estimated revenue losses for each tax expenditure under the assumptions that all other tax expenditures remain in the tax code and taxpayer behavior remains constant. Thus, the estimated revenue losses do not represent the amount of revenue that would be gained if a particular tax expenditure was repealed, since repeal would probably change taxpayer behavior in some way that would affect revenue. Corporate income tax expenditures are estimated for firms organized under the tax code as C corporations, which include most large, publicly held corporations, and have their profits taxed at the entity level under the corporate income tax system (filed on an IRS form 1120). Other types of businesses, such as partnerships and S corporations, are referred to as “pass-through” entities because for tax purposes the income earned by these businesses is attributed to or passed through to the owners of the business and is in general taxed only once under the individual income tax system. Treasury and the Joint Committee on Taxation classify the estimated amounts of tax expenditures claimed (or used) by “pass-through” entities as individual income tax expenditures.
The classification of a tax expenditure as corporate, individual, or both is based on the entity that claims or uses the particular tax provision or would report a type of income if it were not excluded from tax. This classification reflects what is referred to as the statutory incidence of the tax expenditure and shows which entities—corporations, individuals, or both—can directly reduce their tax liability because of the tax expenditure. However, as both Treasury and the Joint Committee on Taxation note, the benefits and outcomes from the tax expenditure may be intended to ultimately benefit people other than the direct recipients of the tax expenditure. To the extent that a tax expenditure leads to changes in the prices of particular goods or services, wages, or returns on investment, consumers, employees, or shareholders may see benefits that they otherwise would not have received. A classification based on ultimate beneficiaries would reflect what is known about the economic incidence of tax expenditures rather than the statutory incidence. For example, the credit for low-income housing investments, while claimed largely by corporate taxpayers, is intended to ultimately benefit individuals by stimulating the production of affordable rental housing and thereby enabling low-income households to obtain potentially higher quality housing for lower rent than they would have otherwise. See Tax Policy Center, How Big Are Total Individual Income Tax Expenditures, and Who Benefits from Them? Discussion Paper No. 31 (December 2008) and GAO, Understanding the Tax Reform Debate: Background, Criteria, & Questions, GAO-05-1009SP (Washington, D.C.: Sept. 1, 2005). Measured relative to the economy as a share of gross domestic product (GDP), corporate income tax receipts increased in the years leading up to the economic recession that began in December 2007.
During this period they increased from 2.1 percent of GDP in 2000 to 2.7 percent of GDP in 2007 and have since decreased to 1.2 percent of GDP in 2011. Both members of Congress and the administration have outlined potential frameworks and proposals for modifying and simplifying the corporate income tax system in recent years. A number of these proposals have involved modifying or repealing some corporate tax expenditures to broaden the corporate income tax base and using the resulting revenue offset to reduce corporate income tax rates. One notable challenge for broader corporate tax reform is the degree to which changes to tax expenditures affect other types of taxpayers claiming them, as most corporate tax expenditures are also available to business entities other than corporations. In selecting activities to fund federal policy goals, Congress can choose to enact a tax expenditure rather than a spending program or vice versa for a variety of reasons. We have developed and highlighted criteria and questions for evaluating tax expenditures, including whether other policy tools, such as spending programs, are preferable to tax expenditures. In addition, we have long reported that, once enacted, tax expenditures and their relative contributions toward achieving federal missions and goals are often less visible than spending programs, which are subject to a more systematic review of performance, including results. More recently, the Government Performance and Results Act (GPRA) Modernization Act of 2010 (GPRAMA) established a framework for providing a more crosscutting and integrated approach to focusing on results and improving government performance, including for tax expenditures.
GPRAMA makes clear that the Office of Management and Budget, in coordination with federal agencies, is to identify, among other things, the various federal agencies, program activities, and tax expenditures that contribute to the federal government’s performance plan that defines the level of performance to be achieved toward crosscutting priority goals. These crosscutting priority goals cover a limited number of policy areas as well as goals to improve management across the federal government. Moving forward, GPRAMA implementation can help inform tough choices in setting priorities as policymakers address the rapidly building fiscal pressures facing our national government. Estimated revenue losses from corporate tax expenditures have increased over the last few decades in constant dollars, as shown in figure 1. After decreasing in the years following the Tax Reform Act of 1986, due to changes to both tax rates and tax expenditures, corporate revenue losses increased thereafter. Individual revenue losses also increased, although at a faster pace than corporate revenue losses. From 1986 through 2011, the number of tax expenditures with estimated corporate revenue losses increased from 61 in 1986 to 80 in 2011, as shown in figure 2. Estimated corporate revenue losses in 2011, which totaled $181.4 billion, were approximately the same size as the amount of corporate income tax revenue the federal government collected that year. Individual tax revenue losses totaled $891 billion in 2011, and included tax expenditures used by other types of business entities that pass through to individual taxpayers. According to Treasury’s 2011 estimates, 80 tax expenditures had corporate revenue losses. 
Of those, two expenditures accounted for 65 percent of all estimated corporate revenue losses in 2011, while another five tax expenditures—each with at least $5 billion in estimated revenue loss for 2011—accounted for an additional 21 percent of corporate revenue loss estimates. Table 1 shows the seven largest corporate tax expenditures in 2011. A majority of corporate tax expenditures are not isolated to the corporate income tax system but also apply to other types of businesses, including “pass-through” entities. Of the 80 tax expenditures used by corporations, 56 were also used by individuals, as shown in figure 3. Modifying any of these 56 tax expenditures as part of broader corporate tax reform would likely affect both corporate and individual taxpayers to some degree. However, the percentage of corporate taxpayers using these tax expenditures varies considerably across the 56 tax expenditures. For example, some tax expenditures are predominantly used by corporations, such as the alcohol fuels credit, with estimated corporate revenue losses comprising 96 percent of the tax expenditure’s estimated $500 million income tax revenue loss in 2011. Others are predominantly used by individuals, such as the exclusion of utility conservation subsidies, where corporations account for less than 5 percent of the $220 million estimated revenue loss in 2011. See appendix II for more information on these 56 tax expenditures’ estimated corporate revenue losses in 2011. Corporate tax expenditures span a majority of federal mission areas, but their relative size differs across budget functions. The 80 corporate tax expenditures had estimated revenue losses in 12 of the 18 budget functions in 2011, as shown in table 2. Of the $181 billion in estimated corporate tax revenue losses, 81 percent was concentrated in the international affairs and housing and commerce budget functions. In these two budget functions, estimated corporate revenue losses were more than federal outlays.
Corporate-only tax expenditures generally support or encourage a specific type of entity or activity. See appendix V for the reported purposes for the 24 corporate-only tax expenditures that were the basis for our identification of related federal activities. Some of these tax expenditures have multiple reported purposes that are broader and in pursuit of national priorities. For example, seven corporate-only tax expenditures are aimed at encouraging or supporting the production of specific energy sources or the development of technology or infrastructure for certain energy sectors. These tax expenditures also have broader purposes, such as promoting domestic energy production and energy security. Other corporate-only tax expenditures provide support for certain types of entities, such as insurance companies or credit unions. Beyond providing support for these entities, these corporate-only tax expenditures may have additional purposes, different or broader in scope, related to why these types of entities receive tax support. However, some of these tax expenditures were granted to entities prior to World War II, and the rationale for continuing support to these entities may have changed over time, making it difficult to determine the purpose of providing the support. Our previous work highlighted the changing historical basis for one of these tax expenditures—the tax exemption of credit union income—as shown in figure 4. Of the 24 corporate-only tax expenditures, one-third appear to share a similar purpose with federal spending programs. The largest number of these tax expenditures support energy-related purposes and natural resources and environment-related purposes. These tax expenditures can be broken into two groups: (1) those aimed at encouraging fossil fuel production and development, and (2) those intended to encourage technology development, renewable energy, or energy efficiency. 
We identified related spending programs that appear to share a similar reported purpose with tax expenditures for the second category, but none for the first. See appendix VI for a summary list of related federal spending programs and activities. While we identified related federal spending programs based on reported purposes specific to the entity or activity being supported, a number of these tax expenditures may also have one or more reported purposes that pursue broader or different aims. If a broader reported purpose was used, the extent of related spending programs identified may change considerably. For example, the bio-diesel and small agri-biodiesel producer tax credits, one of the corporate-only tax expenditures in 2011, have a specific reported purpose of encouraging production of biodiesel fuels, and we identified three federal spending programs that appear to share a similar purpose. Using a broader purpose for this tax expenditure would likely increase the number of federal spending programs identified that have a similar purpose. For example, our prior work on renewable energy identified nearly 700 federal initiatives related to renewable energy, including the bio-diesel and small agri-biodiesel producer tax credits. Applying an even broader national purpose, such as encouraging domestic energy production, could further increase the number of federal spending programs, as well as other federal activities, such as federal regulations and tax expenditures, which may share a similar purpose. Alternatively, applying a different purpose, such as supporting rural and farm areas, could lead to identifying different federal programs that may share a similar purpose. We provided a draft of this report to the Secretary of the Treasury and the Acting Commissioner of Internal Revenue for comment. Treasury provided technical comments which we incorporated; IRS had no comments on the report. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of the Treasury, the Acting Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions on this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. This report: (1) describes trends in the number of corporate tax expenditures and aggregate corporate revenue losses since 1986; (2) describes the use of corporate tax expenditures in 2011; and (3) compares the size of corporate tax expenditures to federal spending by budget function and, for tax expenditures used only by corporations, identifies spending programs with similar purposes. To identify how corporate tax expenditures have changed in terms of their numbers and aggregate estimated revenue losses, we analyzed tax expenditure estimates developed by the Department of the Treasury (Treasury) and reported by the Office of Management and Budget in the Federal Budget’s Analytical Perspectives for fiscal years 1986 through 2011. For this period, we determined which tax expenditures had estimates only for corporations, for both corporations and individuals, and only for individuals. We then summed the estimated revenue losses and the number of tax expenditures by taxpayer group to determine changes over time and to see how the amounts differed between these taxpayer groups. We converted all sums for each fiscal year into 2011 constant dollars to adjust for inflation using the chain price indexes reported in the fiscal year 2013 federal budget. 
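The constant-dollar conversion described above is a simple ratio of price indexes. A minimal sketch, using hypothetical index values rather than the actual chain price indexes from the fiscal year 2013 budget, might look like:

```python
def to_constant_dollars(nominal, index_year, index_2011):
    """Restate a nominal-dollar amount in 2011 constant dollars using
    the ratio of the 2011 price index to the source year's index."""
    return nominal * (index_2011 / index_year)

# Hypothetical example: $100 million nominal in a year whose chain
# price index is 60.0, restated in 2011 dollars (index 100.0).
print(round(to_constant_dollars(100.0, 60.0, 100.0), 1))  # 166.7
```

The same function applied with the base year's own index returns the amount unchanged, which is a useful sanity check on the index values.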
While sufficiently reliable as a gauge of general magnitude, summing revenue loss estimates does not take into account any interactions between tax expenditures. In addition, revenue loss estimates do not incorporate any behavioral responses and, thus, do not represent the revenue amount that would be gained if a specific tax expenditure was repealed. For 2011, we analyzed those tax expenditures with only estimated corporate revenue losses and those with both corporate and individual revenue losses. For this study, we defined corporate tax expenditures as those that Treasury reported had estimated revenue losses for corporations in 2011; we describe any tax expenditures that Treasury estimated lost only corporate tax revenue as corporate-only tax expenditures. We did not conduct a legal analysis to determine which tax expenditures are available only to corporations. To describe the number of taxpayers claiming corporate-only tax expenditures, we obtained estimates, where available, from the Internal Revenue Service (IRS) Statistics of Income (SOI) 2010 corporate sample on the number of corporate taxpayers claiming each of the corporate-only tax expenditures. These were the most recent estimates available at the time of our work. Data compiled by IRS SOI are based on a stratified, random sample of 63,630 corporate income tax returns for 2010 from corporations whose tax years ended from July 1, 2010, through June 30, 2011. These estimates are subject to sampling errors. 
Based on the number of sampled returns and the coefficients of variation provided by SOI, we estimate that the upper bound of a 95 percent confidence interval for the estimated total number is less than 830 corporations for the following tax expenditures: alternative fuel production credit, bio-diesel and small agri-biodiesel producer tax credits, credit for energy efficient appliances, employer-provided child care credit, tax credit for orphan drug research, small life insurance company deduction, and Special Blue Cross/Blue Shield deduction. For our report, IRS SOI provided data on C corporations, and we excluded S corporations, whose income passes through to individual taxpayers. C corporations include active corporations filing tax forms 1120, 1120-F, 1120-L, and 1120-PC but not tax forms 1120S, 1120-REIT, and 1120-RIC. Overall, IRS SOI was able to provide estimates for nine tax credits or deductions because these tax expenditures were reported on an isolated line of an IRS form. However, for two of those tax credits, IRS withheld estimates of the number of recipients to protect taxpayer information. We also obtained publicly available fiscal year 2010 data from IRS’s Tax Exempt and Government Entities division on four corporate-only tax expenditures. These corporate-only tax expenditures were tax exemptions and included the exemption of credit union income, tax exemption of certain insurance companies owned by tax-exempt organizations, exemption of certain mutuals’ and cooperatives’ income, and special alternative tax on small property and casualty insurance companies. For three of the four tax expenditures for which we obtained publicly available data on the number of organizations claiming the tax exemption, the data represented only a portion of the organizations to which the exemption applies because reporting requirements differ within each of these tax expenditures. 
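The 95 percent upper bound described above follows directly from a point estimate and its coefficient of variation under the usual normal approximation. A minimal sketch, with hypothetical figures rather than SOI's actual estimates or coefficients of variation:

```python
def ci_upper_bound(estimate, cv, z=1.96):
    """Upper bound of a 95 percent confidence interval, where cv is the
    coefficient of variation (standard error divided by the estimate)."""
    return estimate * (1 + z * cv)

# Hypothetical example: an estimated 500 corporations claiming a credit,
# with a 30 percent coefficient of variation.
print(round(ci_upper_bound(500, 0.30)))  # 794
```

Because the coefficient of variation scales with the standard error, small estimated taxpayer counts tend to carry wide bounds, which is why the report cautions against precise interpretation of the smallest estimates.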
From these two sources, IRS SOI estimates and publicly available data from IRS’s Tax Exempt and Government Entities division, we were able to obtain data on the number of taxpayers claiming 13 of the 24 corporate-only tax expenditures, including credits, deductions, or exemptions. IRS was not able to provide data on the remaining 11 tax expenditures, generally because they could not be isolated on an IRS form. To assess the reliability of the data and estimates, we reviewed agency documentation, interviewed agency officials, and reviewed our prior reports that have used the data and estimates. While we determined that the Treasury and IRS data and estimates were sufficiently reliable for our purposes, the IRS SOI corporate sample may not provide a precise estimate of the number of taxpayers claiming a tax expenditure when the number of taxpayers is very small. To analyze how large corporate tax expenditures are in comparison to federal spending by budget function, we summed the estimated revenue losses for those tax expenditures with corporate revenue losses in fiscal year 2011 by budget function from the fiscal year 2013 Analytical Perspectives. We compared them to federal outlays by budget function in fiscal year 2011, using data from the President’s fiscal year 2013 budget. Outlay-equivalent estimates—the amount of outlays required to deliver the same after-tax income as provided through the tax expenditure—may provide a more appropriate comparison to federal spending than estimated revenue losses if particular outlays represent taxable income to the recipient. However, Treasury no longer estimates outlay-equivalent values for tax expenditures and, therefore, we used estimated revenue losses instead. For this report we identified related programs based on a narrow reported purpose specific to the type of entity or activity that the corporate-only tax expenditures support or target. 
The corporate-only tax expenditures may have additional broader or different purposes. Using the narrow reported purposes allowed us to identify the tax and nontax programs that appear to most closely support the activity or entity the corporate-only tax expenditures support. We defined the narrow reported purpose as providing support for a certain activity or entity or in some cases, where the materials we reviewed discussed the tax expenditure as intended to encourage or increase a certain activity, we described the reported purpose as encouraging that type of activity. To identify the reported purpose of corporate-only tax expenditures, we reviewed their legislative histories and prior work by GAO, the Congressional Research Service (CRS), including the CRS’s 2010 tax expenditure compendium, and the Congressional Budget Office (CBO) that discussed the intended purpose or rationale of the tax expenditure. However, in identifying the narrow reported purposes we were not trying to discern legislative intent. For some tax expenditures, the reported purpose may be clear from reviewing these sources, while for others the reported purpose may be more difficult to determine. Some tax expenditures support a certain type of entity but the reported purpose or rationale for providing support to that entity is difficult to determine or may have changed over time. Therefore, we describe the reported purpose as support for a type of entity, and we did not reconstruct the historical basis for providing such support. To identify federal spending programs that appear to share a similar reported purpose, we searched the October 2012 version of the Catalog of Federal Domestic Assistance (CFDA) using terms from the description of the corporate-only tax expenditures. We reviewed programs listed under the same budget functions and reviewed the descriptions of the programs initially identified to determine if they appear to share a similar reported purpose. 
We identified related tax expenditures by reviewing the fiscal year 2013 Analytical Perspectives, supplemented with the description in CRS’s 2010 tax expenditure compendium. We obtained data from IRS for the tax expenditures identified as related, from the Department of Energy for the Integrated Biorefineries Program, and from the Department of Energy and the Environmental Protection Agency for the standards and regulations identified as related. These agencies did not review our analysis of those data or our identification of related programs. After reviewing CFDA documentation and our prior reports that have used the data, and spot checking these data elements against other publicly available agency documentation, we determined the data were reliable for our purposes. In reporting the administering or implementing agencies or entities, we generally reported the federal department level rather than the sub-department level agency, except for IRS and the Environmental Protection Agency. In addition, for reporting the type of assistance, we grouped the types of assistance into the following categories: grant, cooperative agreement, direct payment, direct loan, guaranteed loan, insured loan, regulation/standard, tax exclusions, exemptions, or deductions, tax credits, deferrals of tax, and preferential tax rates. In some cases, the CFDA identified a program as providing multiple types of assistance from the categories listed above. The list of nontax programs we identified that appear to share a similar specific reported purpose has not been reviewed by the agencies responsible for them. Treasury and IRS reviewed our identification of related tax expenditures. The list of tax and nontax programs, shown in appendix VI, does not represent a comprehensive list of related federal activities for several reasons: The CFDA may not capture certain types of federal activities, such as regulations or standards. 
Based on our prior work, we were able to identify some regulatory programs that we believe also appear to share a similar reported purpose to the tax expenditures in our scope. The federal spending programs and activities identified may also have multiple reported purposes, and only a component of the program or activity may appear to specifically share a similar reported purpose to the tax expenditure. Using a broader or different reported purpose to identify federal activities that appear to share a similar reported purpose would likely alter the makeup of activities that appear to be related. We conducted our work from October 2012 to March 2013 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product. [Table: Estimated corporate revenue losses (dollars in millions) for tax expenditures including the credit to holders of Gulf Tax Credit Bonds, exclusion of utility conservation subsidies, expensing of certain multiperiod production costs, accelerated depreciation on rental housing, and accelerated depreciation of buildings other than rental housing.] In addition, the alcohol fuel mixture credit results in a reduction in excise tax receipts of $6,520 million in 2011. In addition, the small business credit provision has outlay effects of $30 million in 2011. In addition, the provision for school construction bonds has outlay effects of $430 million in 2011. In addition, the provision has outlay effects of $20 million in 2011. In addition, the credit for holders of zone academy bonds had outlay effects totaling $10 million in 2011. 
In addition, the biodiesel producer tax credit results in a reduction in excise tax receipts of $760 million in 2011. These figures do not account for the tonnage tax, which shipping companies may opt into in lieu of the corporate income tax. The tonnage tax reduces the cost of this tax expenditure by $20 per year in each year of the budget. [Appendix IV table: Number of Corporate Taxpayer Recipients for Corporate-Only Tax Expenditures, 2010. Tax expenditures listed: tax exemption of certain insurance companies owned by tax-exempt organizations, exemption of certain mutuals’ and cooperatives’ income, exemption of credit union income, special alternative tax on small property and casualty insurance companies, bio-diesel and small agri-biodiesel producer tax credits, small life insurance company deduction, tax credit for orphan drug research, Special Blue Cross/Blue Shield deduction, and credit for energy efficient appliances. According to the Internal Revenue Service Statistics of Income division, one estimate should be used with caution due to the small number of sampled returns claiming that credit.] In addition to the contact named above, MaryLynn Sergent (Assistant Director), Jason Vassilicos (Analyst-in-Charge), Kevin Daly, Robert Gebhart, Lois Hanshaw, Natalie Maddox, Donna Miller, Karen O’Conor, Mark Ryan, and Elwood D. White made contributions to this report.
|
Tax expenditures--special exemptions and exclusions, credits, deductions, deferrals, and preferential tax rates claimed by corporations, individuals, or both--support federal policy goals but result in revenue forgone by the federal government. Congress and the administration are reexamining tax expenditures used by corporations as part of corporate tax reform. GAO was asked to examine issues related to corporate tax expenditures. This report: (1) describes trends in the number of corporate tax expenditures and estimated corporate revenue losses since 1986; (2) describes the use of corporate tax expenditures in 2011; and (3) compares the size of corporate tax expenditures to federal spending by budget function and, for tax expenditures used only by corporations, identifies spending programs with similar purposes. To address these objectives, GAO analyzed Department of the Treasury estimates of tax expenditure revenue losses from 1986 to 2011 and Internal Revenue Service 2010 data and interviewed agency officials. GAO also reviewed the legislative history and relevant studies to determine the reported purpose of corporate-only tax expenditures, and searched federal program lists to identify federal spending programs that appear to share a reported specific purpose similar to corporate-only tax expenditures. The programs identified as related were not reviewed by the agencies responsible for the programs. Estimated tax revenue that the federal government forgoes resulting from corporate tax expenditures increased over the past few decades as did the total number of corporate tax expenditures. In 2011, the Department of the Treasury estimated 80 tax expenditures resulted in the government forgoing corporate tax revenue totaling more than $181 billion. Many of these tax expenditures are broadly available to both corporate and individual taxpayers. 
More than two-thirds (56) of the 80 tax expenditures used by corporations in 2011 were also used by individual taxpayers, including other types of businesses not organized as corporations. Modifying any of these 56 tax expenditures as part of broader corporate tax reform would likely affect both corporate and individual taxpayers to some degree. Corporate tax expenditures span a majority of federal mission areas, but their relative size differs across budget functions. The 80 corporate tax expenditures had estimated revenue losses in 12 of the 18 budget functions in 2011. Of the $181 billion in estimated corporate tax revenue losses, 81 percent was concentrated in the international affairs and housing and commerce budget functions, exceeding federal outlays in those budget functions. The 24 tax expenditures used only by corporations in 2011 provide support intended to encourage certain activities, such as energy production, or provide support for certain entity types, such as credit unions. A corporate tax expenditure may have multiple purposes: one narrowly focused on a specific activity or entity as well as broader or additional purposes pursuing national priorities or other activities. For example, 7 of the 24 corporate-only tax expenditures are aimed at encouraging or supporting specific energy sources and technologies, and these tax expenditures may also have broader national purposes such as promoting domestic energy production and energy security. In examining their narrowly focused reported purposes, one-third of the 24 corporate-only tax expenditures appear to share a similar purpose with at least one federal spending program. GAO made no recommendations in this report. Treasury provided technical comments that were incorporated, as appropriate; IRS had no comments.
|
Woody biomass—small-diameter trees, branches, and the like—is generated as a result of timber-related activities in forests or on rangelands. Small-diameter trees may be removed to reduce the risk of wildland fire or to improve forest health, while treetops, branches, and limbs, collectively known as “slash,” are often the byproduct of traditional logging activities or thinning projects. Slash is generally removed from trees on site, before the logs are hauled for processing. It may be scattered on the ground and left to decay or to burn in a subsequent prescribed fire, or piled and either burned or hauled away for use or disposal. Woody biomass can be put to various uses. Among other uses, small- diameter logs can be sawed into structural lumber or can be chipped and processed to make pulp, the raw material from which paper, cardboard, and other products are made. Woody biomass also can be used for fuel. Various entities, including power plants, schools, pulp and paper mills, and others, burn woody biomass in boilers to turn water into steam, which can be used to make electricity, heat buildings, or provide heat for industrial processes. Federal, state, and local governments, as well as private organizations, are working to expand the use of woody biomass. Recent federal legislation contains provisions for woody biomass research and financial assistance. For example, the Consolidated Appropriations Act for Fiscal Year 2005 made up to $5 million in appropriations available for grants to create incentives for increased use of woody biomass from national forest lands. In response, the Forest Service awarded $4.4 million in such grants in fiscal year 2005. State and local governments also are encouraging the material’s use through grants, research, and technical assistance, while private corporations are researching new ways to use woody biomass, often in partnership with government and universities. 
The users in our review cited several factors contributing to their use of woody biomass. The primary factors they cited were financial incentives and benefits associated with its use, while other factors included having access to an affordable supply of woody biomass and environmental considerations. Financial incentives for, and benefits from, using woody biomass were the primary factors for its use among several users we reviewed. Three public entities—a state college in Nebraska, a state hospital in Georgia, and a rural school district in Montana—received financial grants covering the initial cost of the equipment that they needed to begin using woody biomass. The state college received a state grant of about $1 million in 1989, the Georgia hospital received about $2.5 million in state funds in the early 1980s, and the Montana school district received about $900,000 in federal funds in 2003 for the same purpose. A fourth user—a wood-fired power plant in California—received financial assistance in the form of tax-exempt state bonds to finance a portion of the plant’s construction. Three users in our review also received additional financial assistance, including subsidies and other payments that helped them continue their use of woody biomass. For example, the California power plant benefited from an artificially high price received for electricity during its first 10 years of operation, a result of California’s implementation of the federal Public Utility Regulatory Policies Act of 1978. Under the act, state regulators established rates for electricity from certain facilities producing it from renewable sources, including woody biomass. However, the initial prices set by California substantially exceeded market prices in some years, benefiting this user by increasing its profit margin. The Montana school district also received ongoing financial assistance from a nearby nonprofit organization. 
The nonprofit organization paid for the installation of a 1,000-ton wood fuel storage facility (capable of storing over a year’s supply of fuel) and financed the purchase of a year’s supply of fuel for the district, which the district repays as it uses the fuel. The third user, a Colorado power plant generating electricity by firing woody biomass with coal, realized ongoing financial benefits by selling renewable energy certificates associated with the electricity it generated from woody biomass. Energy cost savings also were a major incentive for using woody biomass among six users we reviewed. Two users—rural school districts in Pennsylvania and Montana—told us that they individually had saved about $50,000 and $60,000 in annual fuel costs by using wood instead of natural gas or fuel oil. Similarly, the state college in Nebraska typically saves about $120,000 to $150,000 annually, while the Georgia state hospital reported saving at least $150,000 in 1999, the last year for which information was available. And the two pulp and paper mills we reviewed each reported saving several million dollars annually by using wood rather than natural gas or fuel oil to generate steam heat for their processes. An affordable supply of woody biomass also facilitated its use, especially in areas where commercial activities such as logging or land clearing generated woody biomass as a byproduct. For example, the Nebraska state college was able to purchase woody biomass for an affordable price because logging companies harvested timber in the vicinity of the college, hauling the logs to sawmills and leaving their slash; the college paid only the cost to collect, chip, and transport the slash to the college for burning. Similarly, a Pennsylvania power plant obtains a portion of its wood fuel from land-clearing operations in which, according to a plant official, the developers clearing the land are required to dispose of the cleared material but are not allowed to burn or bury it. 
The plant official told us developers often are willing to partially subsidize removal and transportation costs in order to have an outlet for it. Thinning activities by area landowners also contributed to an affordable supply for a large pulp and paper mill in Mississippi. In this area, as in much of the southeastern United States, small-diameter trees are periodically thinned from forests to promote the growth of other trees, and traditionally have been sold for use in making pulp and paper. Further, according to mill officials, the level terrain and extensive road access typical of southeastern forests keep harvesting and hauling costs affordable—particularly in contrast to other parts of the country where steep terrain and limited road access may result in high harvesting and hauling costs. Three users cited potential environmental benefits, such as improved forest health and air quality, as prompting their use of woody biomass; other users told us about additional factors that increased their use of woody biomass. Two users—the Montana school district and the coal-fired power plant in Colorado—started using woody biomass in part because of concerns about forest health and the need to reduce hazardous fuels in forest land. They hoped that by providing a market for woody biomass, they could help stimulate thinning efforts. Another user, a Vermont power plant, began using woody biomass because of air-quality concerns. According to plant officials, the utilities that funded it were concerned about air quality and as a result chose to build a plant fired by wood instead of coal because wood emits lower amounts of pollutants. Other factors and business arrangements specific to individual users also made using woody biomass advantageous. 
For example, one user, which chips woody biomass for use as fuel in a nearby power plant, has an arrangement under which the plant purchases the user’s product at a price slightly higher than the cost the user incurred in obtaining and processing woody biomass, as long as the product is competitively priced and meets fuel-quality standards. Three users whose operations include chipping of woody biomass and other activities, such as commercial logging or sawmilling, also told us that having the operations within the same business is important because equipment and personnel costs can be shared between the chipping operation and the other activities. And some users helped offset the cost of obtaining and using woody biomass by selling byproducts resulting from its use. One pulp and paper mill in our review sold turpentine and other byproducts from the production of pulp and paper, while a wood-fired power plant sold steam extracted from its turbine to a nearby food-canning factory. Other byproducts sold by users in our review included ash used as a fertilizer and sawdust used by particle board plants. Users in our review experienced several factors that limited their use of woody biomass or made it more difficult or expensive to use. These factors included an insufficient supply of the material and increased costs related to equipment and maintenance. Seven users in our review told us they had difficulty obtaining a sufficient supply of woody biomass, echoing a concern raised by federal officials in our previous report. Two power plants reported to us that they were operating at about 60 percent of their capacity because they were unable to obtain sufficient woody biomass or other fuel for their plants. Officials at both plants told us that their shortages of wood were due at least in part to a shortage of nearby logging contractors, which prevented nearby landowners from carrying out all of the projects they wished to undertake. 
While officials at one plant attributed the plant’s shortage entirely to the lack of sufficient logging contractors, an official at the other plant stated that the lack of woody biomass from federal lands—particularly Forest Service lands—also was a significant problem. The lack of supply from federal lands was a commonly expressed concern among woody biomass users on the West Coast and in the Rocky Mountain region, with five of the seven users we reviewed in these regions telling us they had difficulty obtaining supply from federal lands. Users with problems obtaining supply from federal lands generally expressed concern about the Forest Service’s ability to conduct projects generating woody biomass; in fact, two users expressed skepticism that the large amounts of woody biomass expected to result from widespread thinning activities will ever materialize. One official stated, “We keep hearing about this coming ‘wall of wood,’ but we haven’t seen any of it yet.” In response to these concerns, officials from both the Department of the Interior and the Forest Service told us that their agencies are seeking to increase the availability of woody biomass from federal lands. Several users in our review told us they incurred costs to purchase and install the equipment necessary to use woody biomass beyond the costs that would have been required for using fuel oil or natural gas. The cost of this equipment varied considerably among users, from about $385,000 for a school district to $15 million for a pulp and paper mill. Wood utilization also increased operation and maintenance costs for some users, in some cases because of problems associated with handling wood. During our visit to one facility, wood chips jammed on a conveyor belt, dumping wood chips over the side of the conveyor and requiring a maintenance crew member to clear the blockage manually. 
At the power plant mixing woody biomass with coal, an official told us that a wood blockage in the feed mechanism led to a fire in a coal-storage unit, requiring the plant to temporarily reduce its output of electricity and pay $9,000 to rechip its remaining wood. Other issues specific to individual users also decreased woody biomass use or increased costs for using the material. For example, the Vermont wood-fired power plant is required by the state to obtain 75 percent of its raw material by rail, in order to minimize truck traffic in a populated area. According to plant officials, shipping the material by rail is more expensive than shipping by truck and creates fuel supply problems because the railroad serving the plant is unreliable and inefficient and experiences regular derailments. Another power plant was required to obtain a new emissions permit in order to begin burning wood in its coal-fired system. Our findings offer several insights for promoting greater use of woody biomass. First, rather than helping to defray the costs of forest thinning, attempts to encourage the use of woody biomass may instead stimulate the use of other wood materials such as mill residues or commercial logging slash. Second, government activities may be more effective in stimulating woody biomass use if they take into account the extent to which a logging and milling infrastructure to collect and process forest materials is in place. And finally, the type of efforts employed to encourage woody biomass use may need to be tailored to the scale and nature of individual recipients’ use. Unless efforts to stimulate woody biomass utilization are focused on small-diameter trees and other material contributing to the risk of wildland fire, such efforts may simply increase the use of alternative wood materials (such as mill residues) or slash from commercial logging operations. 
In fact, several users told us that they prefer such materials because they are cheaper or easier to use than woody biomass. Indeed, an indirect attempt to stimulate woody biomass use by one Montana user in our review led to the increased use of available mill residues instead. The Forest Service provided grant funds to finance the Montana school district’s 2003 conversion to a wood heating system in order to stimulate the use of woody biomass in the area. As a condition of the grant, the agency required that at least 50 percent of the district’s fuel consist of woody biomass during the initial 2 years of the system’s operation. Officials told us that the district complied with the requirement for those 2 years, but for the 2005-2006 school year, the district chose to use less expensive wood residues from a nearby log-home builder. It should be noted that the use of mill residues is not entirely to the detriment of woody biomass. Using mill residues can facilitate woody biomass utilization by providing a market for the byproducts (such as sawdust) of industries using woody biomass directly; this, in turn, can enhance these users’ profitability and thereby improve their ability to continue using the material cost-effectively. In addition, the availability of both mill residues and woody biomass provides diversity of supply, allowing users to continue operations even if one source of supply is interrupted or becomes prohibitively expensive. Nevertheless, these indirect effects, even where present, may be insufficient to substantially influence the use of woody biomass. Mill residues aside, even those users that consumed material we define as woody biomass often used the tops and limbs from trees harvested for merchantable timber or other uses rather than small-diameter trees contributing to the problem of overstocked forests. 
Logging slash can be cheaper to obtain than small-diameter trees when it has been removed from the forest by commercial logging projects—which often leave slash piles at roadside “landings,” where trees are delimbed before being loaded onto trucks. Unless woody biomass users specifically need small-diameter logs—for use in sawing lumber, for example—they may find it cheaper to collect slash piled in roadside areas than to enter the forest to cut and remove small-diameter trees. Government activities may be more effective in stimulating woody biomass use if they take into account the extent to which a logging and milling infrastructure is in place in potential users’ locations. The availability of an affordable supply of woody biomass depends to a significant degree on the presence of a local logging and milling infrastructure to collect and process forest materials. Without a milling infrastructure, there may be little demand for forest materials, and without a logging infrastructure, there may be no way to obtain them. For example, an official with the Nebraska college in our review told us that the lack of a local logging infrastructure could jeopardize the college’s future woody biomass use. The college relied on slash from commercial loggers working nearby, but this official told us that the loggers were based in another state and the timber they were harvesting was hauled to sawmills over 100 miles away. According to the official, if more timber-harvesting projects were offered closer to the sawmills, these loggers would move their operations in order to reduce transportation costs—eliminating the nearby source of woody biomass available to the college. In contrast, users located near a milling and logging infrastructure are likely to have more readily available sources of woody biomass. 
One Montana official told us that woody biomass in the form of logging slash is plentiful in the Missoula area, which is home to numerous milling and logging activities, and that about 90 percent of this slash is burned because it has no market. The presence of such an infrastructure, however, may increase the availability of mill residues or other materials, potentially complicating efforts to promote woody biomass use by offering more attractive alternatives. Government activities may be more effective in stimulating woody biomass use if their efforts are tailored to the scale and nature of the users being targeted. Most of the large wood users we reviewed were primarily concerned about supply, and thus might benefit most from federal efforts to provide a predictable and stable supply of woody biomass. Such stability might come, for example, from long-term contracts signed under stewardship contracting authority, which allows contracts of up to 10 years. In fact, one company currently plans to build a $23 million woody biomass power plant in eastern Arizona, largely in response to a nearby stewardship project that is expected to treat 50,000 to 250,000 acres over 10 years. Similarly, officials of a South Carolina utility told us that the utility was planning to invest several million dollars in equipment that would allow a coal-fired power plant to burn woody biomass from thinning efforts in a nearby national forest. In both cases, the assurance of a long-term supply of woody biomass was a key factor in the companies’ willingness to invest in these efforts. In contrast, small users we reviewed did not express concerns about the availability of supply, in part because their consumption was relatively small. However, three of these users relied on external financing for their up-front costs to convert to woody biomass use. 
Such users—particularly small, rural school districts or other public facilities that may face difficulties raising the capital to pay needed conversion costs—might benefit most from financial assistance such as grants or loan guarantees to fund their initial conversion efforts. And as we noted in our previous report on woody biomass, several federal agencies, particularly the Forest Service, provide grants for woody biomass use. However, federal agencies must take care that their efforts to assist users are appropriately aligned with the agencies’ own interests and do not create unintended consequences. For example, while individual grant recipients might benefit from using woody biomass—through fuel cost savings, for example—benefits to the government, such as reduced thinning costs, are uncertain. Without such benefits, agency grants may simply increase outlays but not produce comparable savings in thinning costs. The agencies also risk adverse ecological consequences if their efforts to develop markets for woody biomass result in these markets inappropriately influencing land management decisions. As noted in our prior report on woody biomass, agency and nonagency officials cautioned that efforts to supply woody biomass in response to market demand rather than ecological necessity might result in inappropriate or excessive thinning. Drawing long-term conclusions from the experiences of users in our review must be done with care because (1) our review represents only a snapshot in time and a small number of woody biomass users and (2) changes in market conditions could have substantial effects on the options available to users and the materials they choose to consume. Even so, the variety of factors influencing woody biomass use among users in our review—including regulatory, geographic, market-based, and other factors—suggests that the federal government may be able to take many different approaches as it seeks to stimulate additional use of the material. 
Because these approaches have different costs, and likely will provide different returns in terms of defraying thinning expenses, it will be important to identify what kinds of mechanisms are most cost-effective in different circumstances. In doing so, it also will be important for the agencies to take into account the variation in different users’ needs and available resources, differences in regional markets and forest types, and the multitude of available alternatives to woody biomass. If federal agencies are to maximize the long-term impact of the millions of dollars being spent to stimulate woody biomass use, they will need to design approaches that take these elements into account rather than using boilerplate solutions. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. David P. Bixler, Lee Carroll, Steve Gaty, and Richard Johnson made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government is placing greater emphasis on thinning vegetation on public lands to reduce the risk of wildland fire. To help defray the cost of thinning efforts, it also is seeking to stimulate a market for the resulting material, including the smaller trees, limbs, and brush--referred to as woody biomass--that traditionally have had little or no commercial value. As GAO has reported in the past, the increased use of woody biomass faces obstacles, including the high cost of harvesting and transporting it and an unpredictable supply in some locations. Nevertheless, some entities, such as schools and businesses, are utilizing the material, potentially offering insights for broadening its use. GAO agreed to (1) identify key factors facilitating the use of woody biomass among selected users, (2) identify challenges these users have faced in using woody biomass, and (3) discuss any insights that these findings may offer for promoting greater use of woody biomass. This testimony is based on GAO's report Natural Resources: Woody Biomass Users' Experiences Offer Insights for Government Efforts Aimed at Promoting Its Use (GAO-06-336). Financial incentives and benefits associated with using woody biomass were the primary factors facilitating its use among the 13 users GAO reviewed. Four users received financial assistance (such as state or federal grants) to begin their use of woody biomass, three received ongoing financial support related to its use, and several reported energy cost savings over fossil fuels. Using woody biomass also was attractive to some users because it was available, affordable, and environmentally beneficial. Several users GAO reviewed, however, cited challenges in using woody biomass, such as difficulty obtaining a sufficient supply of the material. For example, two power plants reported running at about 60 percent of capacity because they could not obtain enough material. 
Some users also reported that they had difficulty obtaining woody biomass from federal lands, instead relying on woody biomass from private lands or on alternatives such as sawmill residues. Some users also cited increased equipment and maintenance costs associated with using the material. The experiences of the 13 users offer several important insights for the federal government to consider as it attempts to promote greater use of woody biomass. First, if not appropriately designed, efforts to encourage its use may simply stimulate the use of sawmill residues or other alternative wood materials, which some users stated are cheaper or easier to use than woody biomass. Second, the lack of a local logging and milling infrastructure to collect and process forest materials may limit the availability of woody biomass; thus, government activities may be more effective in stimulating its use if they take into account the extent of infrastructure in place. Similarly, government activities such as awarding grants or supplying woody biomass may stimulate its use more effectively if they are tailored to the scale and nature of the targeted users. However, agencies must remain alert to potential unintended ecological consequences of their efforts, such as excessive thinning to meet demand for woody biomass.
The HCTC, which pays a portion of health plan premiums for certain eligible workers and retirees, is set to expire at the end of 2013 when certain PPACA provisions, including PPACA premium tax credits, cost-sharing subsidies, and expansion of Medicaid eligibility, are implemented. The HCTC program is administered by the IRS and currently pays for 72.5 percent of health plan premiums for HCTC participants. The amount of the credit is based solely on the participant’s health plan premium amount and is not based on other factors, such as the participant’s income. As an example of the credit, an HCTC participant with an annual premium of $10,000 would receive a credit of $7,250. Individuals potentially eligible for the HCTC include manufacturing and service workers who lost their jobs due to foreign import competition and were eligible for TAA benefits (representing about 51 percent of all potentially eligible individuals), and certain retirees between the ages of 55 and 64 whose pensions from a former employer were terminated and are now paid by the PBGC (representing about 47 percent of all potentially eligible individuals). We have previously reported that many potentially eligible individuals do not participate in the HCTC program. In 2010, less than 10 percent of those potentially eligible for the program participated in the HCTC (see table 1). Some of the potentially eligible individuals may in fact not be eligible for the HCTC, for example if they are eligible for Medicare or Medicaid, or if they are covered by their spouse’s employer-sponsored health plan under certain conditions. Others may choose not to participate, for example if even with the HCTC they still cannot afford the cost of their share of health plan premiums. HCTC participants obtain coverage from HCTC-qualified health plans, which include COBRA plans, HCTC state-qualified plans, VEBA plans, and individual market plans. 
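The HCTC subsidy described above is a flat percentage of the premium, independent of income. The following Python sketch reproduces the $10,000 example from the text; the function name and structure are illustrative assumptions of ours, not part of the program itself.

```python
# Illustrative sketch of the HCTC subsidy arithmetic described above.
# The 72.5 percent rate and the $10,000 example come from the text;
# the function name is a hypothetical placeholder.

HCTC_RATE = 0.725  # HCTC pays 72.5 percent of the health plan premium


def hctc_credit(annual_premium):
    """Return the HCTC credit for a given annual premium.

    Under the HCTC, the credit depends only on the premium amount,
    not on the participant's income.
    """
    return HCTC_RATE * annual_premium


# A $10,000 annual premium yields a $7,250 credit, as in the text.
print(round(hctc_credit(10_000), 2))
```

Because the credit scales with the premium, the participant always bears 27.5 percent of any premium increase, which contrasts with the income-capped PPACA credits discussed later.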
In 2011, the majority of HCTC participants received coverage from COBRA plans (46 percent) or HCTC state-qualified plans (37 percent). A smaller proportion of HCTC participants received coverage from VEBA plans (10 percent) or individual market plans (1 percent). Beginning on January 1, 2014, a premium tax credit will be available to help eligible tax filers and their dependents pay for qualified health plans purchased through the PPACA exchanges, to be administered by the IRS. PPACA premium tax credits will be calculated using income reported on tax returns. The credits will generally be available to eligible tax filers and their dependents who are (1) enrolled in one or more qualified health plans through a PPACA exchange, and (2) not eligible for minimum essential coverage other than coverage in the individual market. For example, individuals would not be eligible if they had coverage in a government program, such as Medicare or Medicaid, or certain employer-sponsored coverage. Tax filers eligible for PPACA premium tax credits will be those with household incomes from 100 percent to 400 percent of the federal poverty level (FPL) for the tax year in which they are receiving the PPACA premium tax credit. The amount of the PPACA premium tax credit will vary by household income level, family size, and other factors. It will subsidize a portion of the tax filer’s health insurance premiums. The tax filer’s contribution to premiums will be based on their household income relative to the FPL, and will range from 2 percent of their household income for those with household incomes from 100 percent to less than 133 percent of the FPL, to 9.5 percent of household income for those with household incomes from 300 percent up to 400 percent of the FPL (see table 2). 
Eligibility for PPACA premium tax credits by household income level based on the FPL may vary by state because states may choose not to expand eligibility for Medicaid to nonelderly individuals whose household income does not exceed 133 percent of the FPL. Under the PPACA rule, tax filers with household incomes from 100 percent of the FPL and up to 400 percent of the FPL will be eligible for PPACA premium tax credits. However, also under the PPACA rule, in states that expand Medicaid, individuals with household incomes from 100 percent and up to 138 percent of the FPL will be eligible for Medicaid and therefore ineligible for PPACA premium tax credits. Further, in states that do not expand Medicaid, individuals with household incomes from 100 percent and up to 400 percent of the FPL will be eligible for PPACA premium tax credits, and individuals with household incomes less than 100 percent of the FPL will not be eligible for PPACA premium tax credits and may not be eligible for Medicaid, depending on their states’ Medicaid eligibility criteria. The applicable household income level expressed as a percent of the FPL determines an individual’s share of his or her annual premium. The amount of the premium for the second-lowest-cost silver plan in the PPACA exchange available in the state where the eligible individual resides will be the reference for calculating the amount of the PPACA premium tax credit. For example, consider the PPACA premium tax credit for two different people, both in a family of four: one with a household income at 150 percent of the FPL, and the other with a household income at 300 percent of the FPL, using a hypothetical annual premium of $10,000 for the second-lowest-cost silver plan (reference plan) in the PPACA exchange available in the state where they reside. The expiration of the HCTC and implementation of the PPACA premium tax credits and Medicaid expansion will affect HCTC participants’ costs for health plans in multiple ways. 
Projections from our analysis of 2010 IRS data show that about 69 percent of HCTC participants will likely either be ineligible for a PPACA premium tax credit or Medicaid, or will be eligible for a PPACA premium tax credit that is less generous than the HCTC. These projections show that about 37 percent of HCTC participants will likely be ineligible for either a PPACA premium tax credit or Medicaid because their incomes are too high, and 32 percent will be eligible for a PPACA premium tax credit less generous than the HCTC. On the other hand, at least 27 percent of HCTC participants will be eligible for a PPACA premium tax credit more generous than the HCTC or be eligible for Medicaid. An additional 3 percent of all participants will likely be ineligible for a PPACA premium tax credit because their incomes are too low, and their eligibility for Medicaid will depend in part on their state’s decision on Medicaid expansion (see table 4). For the HCTC participants who will likely be eligible for a PPACA premium tax credit in 2014, projections from our analysis of 2010 IRS data show that there will be variation in the extent to which their credit differs from the HCTC. For example, of the total 39,464 HCTC participants in our analysis, 6,492 will likely receive a PPACA premium tax credit at least 25 percent less than the HCTC. However, up to 12,141 participants will likely receive a credit similar to or greater than the HCTC. For example, 2,922 participants will likely receive a PPACA premium tax credit of about the same value as the HCTC (within 5 percentage points above or below the HCTC). In addition, depending in part on whether or not their state expands Medicaid, between 1,823 and 3,217 participants will likely receive a credit more than 25 percent higher than the HCTC (see fig. 1). The PPACA premium tax credit was designed to provide a larger subsidy amount to lower-income tax filers than to higher-income tax filers. 
Thus, lower-income HCTC participants who will likely be eligible for a PPACA premium tax credit will pay a smaller share of their incomes for premiums under PPACA in 2014 than they did under the HCTC. For example, projections from our analysis of 2010 IRS data show that while all 2,488 HCTC participants with incomes from 100 percent to 150 percent of the FPL will likely pay between 2 percent and 4 percent of their incomes for health plan premiums under the PPACA rule, 1,456 HCTC participants—close to 60 percent of participants in the same income range—paid 9.5 percent or more of their incomes for health plan premiums under the HCTC. In contrast, while all 7,658 HCTC participants with incomes from 300 percent to 400 percent of the FPL will likely pay 9.5 percent of their household income for premiums under the PPACA rule, 2,391—over 30 percent of participants in the same income range—paid less than 4 percent of their household income for premiums under the HCTC (see fig. 2). Unlike the HCTC, which pays 72.5 percent of health plan premiums regardless of income, PPACA rules limit the amount individuals will pay for premiums to a set percentage of their incomes; individuals eligible for a PPACA premium tax credit will therefore continue to pay that set percentage even if premiums increase. The expiration of the HCTC and implementation of PPACA cost-sharing subsidies will also affect HCTC participants’ out-of-pocket costs for health plans. Projections from our analysis of 2010 IRS data show that up to 28 percent of all HCTC participants who are eligible for the PPACA premium tax credit will likely also be eligible for a PPACA cost-sharing subsidy in 2014 to help pay for deductibles and copays, depending in part on whether or not their state expands Medicaid. Similar cost-sharing subsidies are not available under the HCTC; therefore, this would be an additional financial benefit for those who qualify. 
The effect of the expiration of the HCTC and implementation of certain PPACA provisions will likely be different for nonparticipants—individuals who were potentially eligible for the HCTC in 2010 but did not participate in it—than it will be for participants. First, nonparticipants who may not be eligible for PPACA premium tax credits or who may be eligible for tax credits less generous than the HCTC will not be losing any benefits because they are not receiving the HCTC. Our projections show that 70 percent of nonparticipants fall into this category. Second, because some individuals do not participate in the HCTC since they cannot afford to do so, some nonparticipants who will be eligible for PPACA premium tax credits that are more generous than the HCTC or who are eligible for Medicaid coverage under PPACA may choose to use these options and receive benefits they do not receive under the HCTC. Our projections show that up to 30 percent of nonparticipants fall into this category, depending in part on whether or not their state expands Medicaid and whether they meet all other eligibility criteria for the PPACA premium tax credits. In addition to being eligible for the PPACA premium tax credit, based on projections from our analysis of 2010 IRS data, up to 30 percent of all HCTC nonparticipants may also be eligible for a PPACA cost-sharing subsidy in 2014 to help pay for deductibles and copays, depending in part on whether or not their state expands Medicaid and whether they meet all other eligibility criteria for the PPACA premium tax credits. See appendix I for details on characteristics of 2010 HCTC participants and nonparticipants. The health plan coverage available under PPACA will be comparable to coverage in current HCTC-qualified health plans. 
Specifically, the categories of services that plans purchased through the PPACA exchanges will be required to cover are comparable to those currently covered by most HCTC plans, and the actuarial values of HCTC plans are likely above the minimum level of coverage that will be required in PPACA exchange plans. However, under PPACA, HCTC participants may have an incentive to choose plans through the exchanges that have different levels of coverage than their HCTC plans. The EHB categories that will be required for plans purchased through the PPACA exchanges are comparable to the categories of services covered in almost all of the health plans used now by HCTC participants. Specifically, the categories covered by COBRA plans as well as the four HCTC state-qualified plans and the VEBA plan that we reviewed are comparable to the EHB categories. Collectively, at least 93 percent of HCTC participants in 2011 were enrolled in these three types of HCTC plans. However, for two of the EHB categories—“rehabilitative and habilitative services and devices” and “pediatric services, including oral and vision care”—more services may be covered by the plans purchased through the PPACA exchanges than are covered by the COBRA, HCTC state-qualified, and VEBA plans. This is because many health plans, whether HCTC or other, do not currently cover habilitative services or pediatric dental and vision services. While not all of the EHB categories are covered by individual market plans, only about 1 percent of HCTC participants are covered by individual market plans. COBRA plans (46 percent of HCTC participants). COBRA plans are an extension of employer-sponsored health plans, and our analysis of data reported in a 2011 Department of Labor report found that employer-sponsored health plans generally covered services in the EHB categories. 
For example, in the EHB category of ambulatory care, 100 percent of employer-sponsored health plans cover physician office visits; 98 percent of plans cover outpatient surgery; and 73 percent of plans cover home health care services. In addition, the report indicated that the majority of employer-sponsored health plans cover services in the EHB categories of hospitalization, emergency services, maternity care, mental health and substance abuse disorders, and prescription drugs. Although COBRA plans generally cover services in the EHB categories, it is possible that coverage of habilitative services and pediatric dental and vision services will be more generous in plans purchased through the PPACA exchanges than in COBRA plans. According to CCIIO, the EHB categories that are commonly not covered among typical employer plans are habilitative services, pediatric oral services, and pediatric vision services. HCTC state-qualified plans (37 percent of HCTC participants). Our analysis of four 2012 HCTC state-qualified plans found that they also generally covered services in all of the EHB categories. Specifically, all of the plans we reviewed—including both the four potential exchange benchmark plans and the four HCTC state-qualified plans—covered services in the same EHB categories, such as ambulatory care, preventive care, laboratory services, hospitalization, and emergency services. In addition, all of the plans covered prescription drugs to some extent, although one of the HCTC state-qualified plans that we reviewed covered generic but not brand-name prescriptions. Among some of the HCTC state-qualified plans and the potential benchmark plans there was an absence of coverage for subsets of services in certain EHB categories, such as habilitative services and pediatric dental and vision services, which are services that will be required to be covered in plans sold through the PPACA exchanges. 
For example, habilitative services were not covered by two of the potential exchange benchmark plans or by three of the HCTC state-qualified plans. VEBA plans (10 percent of HCTC participants). The potential exchange benchmark plans cover the same EHB categories that the 2012 VEBA plan that we reviewed does. Also, like some of the potential exchange benchmark plans, the VEBA plan does not cover certain services that are a subset of certain EHB categories, such as habilitative services. Individual market plans (1 percent of HCTC participants). Plans purchased through the PPACA exchanges may provide coverage of EHB categories in which coverage may be more limited in individual market plans. In 2011, HHS reported that coverage of certain EHB categories is limited in individual market plans, specifically for maternity services, substance abuse services, mental health services, and prescription drugs. The vast majority of HCTC participants in 2012 were likely enrolled in plans with actuarial values that were above the minimum level of 60 percent (bronze) required for plans purchased through the PPACA exchanges, including many who were likely enrolled in plans that had actuarial values of 80 percent (gold) or higher. COBRA plans (46 percent of HCTC participants). The majority of HCTC participants in COBRA plans are likely to be in plans with actuarial values of 80 percent or higher on the basis of data from two studies. One study estimated that 80 percent of all enrollees in employer-sponsored health plans in 2010 were in plans that met or exceeded 80 percent (gold). The other study estimated that about 65 percent of all employees enrolled in group health plans in 2010 were in plans with actuarial values that met or exceeded 80 percent (gold). HCTC state-qualified and VEBA plans (47 percent of HCTC participants combined). 
The actuarial values of the four HCTC state-qualified plans and the one VEBA plan that we reviewed in the selected states vary. However, all of the plans have an actuarial value of at least 60 percent (bronze) and three of the five plans have an actuarial value of 80 percent (gold) or higher. See table 5. Individual market plans (1 percent of HCTC participants). The small number of HCTC participants that have individual market plans are likely to have a plan with a lower level of actuarial value. A recent study found that about half of the plans (51 percent) in the individual market have an actuarial value of less than 60 percent and another third (33 percent) have an actuarial value at the 60 percent (bronze) level. The varied actuarial values of the HCTC plans suggest that the level of coverage for many HCTC participants may change after the expiration of the HCTC depending on the options available to participants and the choices they make in 2014 under PPACA. Also, the way that the PPACA tax credits will be calculated may incentivize HCTC participants to change their level of coverage. The PPACA premium tax credits will be calculated from a reference plan at the 70 percent (silver) level of coverage, so individuals who choose other plans—with either higher or lower levels of actuarial value—could face higher or lower out-of-pocket costs for premiums. However, plans with higher levels of actuarial value may result in lower out-of-pocket costs for copays and deductibles, and plans with lower levels of actuarial value may result in higher out-of-pocket costs for copays and deductibles. Ultimately, for any HCTC participant, the overall financial effect of a change from an HCTC plan to a PPACA exchange plan will be the net effect of the choice between higher or lower premium costs and higher or lower cost-sharing. 
Further, out-of-pocket costs for premiums and cost-sharing for HCTC participants will be affected by whether they are eligible for the PPACA premium tax credits and cost-sharing subsidies. Considering these factors, current HCTC participants may choose to change their level of coverage when the HCTC expires. For example:

Some HCTC participants eligible for PPACA premium tax credits could have an incentive to change to a higher level of coverage. For example, if the HCTC participants who have coverage at the 60 percent (bronze) level of coverage are eligible for PPACA premium tax credits, they may choose a PPACA exchange plan that has a higher actuarial value than their current HCTC plan. This is because PPACA premium tax credit amounts will be calculated on the basis of the reference plan premium (the second-lowest-cost 70 percent plan) for their exchange. Given this, it could be possible for these HCTC participants to purchase a 70 percent (silver) plan that would have on average lower out-of-pocket cost-sharing expenses than their current HCTC plan.

Alternatively, some HCTC participants who will be eligible for PPACA premium tax credits could have an incentive to change to a lower level of coverage. For example, if the HCTC participants who have coverage at the 80 percent (gold) or 90 percent (platinum) levels of coverage are eligible for a PPACA premium tax credit and want to purchase a plan through a PPACA exchange with a comparable actuarial value, they will have to pay the difference between the premium for a plan with an actuarial value of 80 percent (gold) or 90 percent (platinum) and their PPACA premium tax credit. Again, this is because the PPACA premium tax credit amount will be based on the reference plan premium (the second-lowest-cost 70 percent plan) for their exchange.
For example, if a participant in a family of four with a household income at 300 percent of the FPL purchases a plan in a PPACA exchange with an annual reference (silver) plan premium for a family of four of $10,000, he or she would receive a PPACA premium tax credit of $3,716 and would have to pay $6,284 for the premium if he or she purchased the reference plan. However, if the participant instead decided to purchase a plan with an actuarial value of 80 percent (gold) having an annual premium of $11,000, the PPACA premium tax credit would remain the same ($3,716), but the premium amount the participant would have to pay would increase by $1,000 to $7,284. Because participants would have to pay this difference in premiums, they may opt to purchase a plan with a lower level of actuarial value than their current plan, such as a plan at the 70 percent level of coverage (silver), even though it may have higher out-of-pocket cost-sharing expenses on average than their current plan.

In contrast, HCTC participants may also have an incentive to choose a plan below the 70 percent (silver) level if obtaining the lowest possible premium is their main factor in choosing a health plan. In this case, participants could choose a plan at the 60 percent (bronze) level of coverage because the premium cost would likely be lower than that of a 70 percent (silver) level plan. However, a plan at this level would mean that on average participants could have higher out-of-pocket cost-sharing expenses than they would with a 70 percent (silver) plan.

The health plan coverage options for HCTC participants not eligible for a PPACA premium tax credit will vary depending on their household income level. The HCTC participants not eligible for PPACA premium tax credits because their incomes are above 400 percent of the FPL could decide to purchase a health plan at any level of coverage.
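The arithmetic in the family-of-four example above can be reproduced as follows. The federal poverty guideline for a family of four ($22,050, the 2010 figure) and the 9.5 percent expected-contribution rate at 300 percent of the FPL are the parameter values consistent with the report's numbers; the full statutory sliding scale of applicable percentages is omitted here for simplicity.

```python
# Reproduces the report's family-of-four example at 300 percent of the FPL.
# The FPL figure and the 9.5 percent applicable percentage are the values
# consistent with the report's numbers; the full PPACA sliding scale of
# applicable percentages is not modeled.

FPL_FAMILY_OF_FOUR = 22_050    # poverty guideline, family of four (2010)
APPLICABLE_PERCENTAGE = 0.095  # expected contribution at 300-400% of FPL

income = 3.00 * FPL_FAMILY_OF_FOUR             # $66,150
expected_contribution = APPLICABLE_PERCENTAGE * income

reference_premium = 10_000  # second-lowest-cost silver plan premium
credit = round(reference_premium - expected_contribution)
print(credit)                       # 3716
print(reference_premium - credit)   # 6284, enrollee's silver premium share

# The credit is pegged to the reference plan, so an $11,000 gold plan
# costs the enrollee the full extra $1,000 in premiums.
gold_premium = 11_000
print(gold_premium - credit)        # 7284
```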
However, the loss of the HCTC combined with their ineligibility for PPACA premium tax credits because of their higher incomes could affect the level of coverage that they choose or even whether they purchase a health plan through a PPACA exchange or elsewhere. The HCTC participants who have household incomes below 138 percent of the FPL and live in states that expand Medicaid will not be eligible for PPACA premium tax credits; instead they will be eligible for Medicaid. However, in states that do not expand Medicaid, it is uncertain what health plan, if any, HCTC participants who have household incomes below 100 percent of the FPL may purchase in 2014. These individuals would not be eligible for a PPACA premium tax credit in any instance or for Medicaid in most instances, and their ability to pay for premiums will be limited. However, because of their low incomes, these HCTC participants will likely be exempt from certain PPACA provisions, such as the tax penalty that individuals will have to pay beginning in 2014 if they do not have a health plan.

We provided draft copies of this report to HHS and IRS for review, and both provided technical comments, which we incorporated as appropriate.

As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Treasury and the Commissioner of the IRS, the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
We identified HCTC participants and nonparticipants by age groups, household income based on a percentage of the federal poverty level (FPL), and HCTC eligibility type using 2010 Internal Revenue Service (IRS) data. We found that most HCTC participants and nonparticipants were ages 55 to 64 (see table 6), over a third of participants and nonparticipants had household income greater than 400 percent of the FPL (see table 7), and more than half were potentially eligible for the HCTC because of participation in the Trade Adjustment Assistance (TAA) or Reemployment Trade Adjustment Assistance (RTAA) programs rather than being eligible by having their pension payments assumed by the Pension Benefit Guaranty Corporation (PBGC) (see table 8).

In addition to the contact named above, Gerardine Brennan, Assistant Director; George Bogart; Andrew Ching; Sandra George; Alison Goetsch; Lisa A. Lusk; John Mingus; and Laurie Pachter made key contributions to this report.

Trade Adjustment Assistance: Changes to the Workers Program Benefited Participants, but Little Is Known about Outcomes. GAO-12-953. Washington, D.C.: September 28, 2012.

Medicaid Expansion: States' Implementation of the Patient Protection and Affordable Care Act. GAO-12-821. Washington, D.C.: August 1, 2012.

Health Coverage Tax Credit: Participation and Administrative Costs. GAO-10-521R. Washington, D.C.: April 30, 2010.

Health Coverage Tax Credit: Simplified and More Timely Enrollment Process Could Increase Participation. GAO-04-1029. Washington, D.C.: September 30, 2004.
The HCTC pays 72.5 percent of health plan premiums for certain workers who lost their jobs due to foreign import competition and for certain retirees whose pensions from their former employers were terminated and are now paid by the Pension Benefit Guaranty Corporation. A small share of individuals who are potentially eligible for the HCTC participate. In 2010, there were 43,864 participants and 469,168 nonparticipants. The HCTC program will expire at the end of 2013, when premium tax credits and cost-sharing subsidies become available to eligible individuals who purchase health plans through health insurance exchanges under PPACA. PPACA also expands Medicaid eligibility to nonelderly individuals who meet specific income requirements, to the extent that states choose to implement this provision. Therefore, the costs for health plans and coverage available to individuals potentially eligible for the HCTC will change when the HCTC expires.

This report examines (1) how the HCTC's expiration and the implementation of the PPACA premium tax credit, cost-sharing subsidies, and Medicaid expansion will affect HCTC participants and nonparticipants, and (2) how the coverage that will be available through the PPACA exchanges compares to HCTC participants' health plan coverage. GAO analyzed 2010 HCTC program data and individual tax filer data. GAO also compared the services and actuarial values of the plans that will be available through the exchanges to HCTC plans.

Expiration of the Health Coverage Tax Credit (HCTC) and implementation of Patient Protection and Affordable Care Act (PPACA) premium tax credits, cost-sharing subsidies, and Medicaid expansion will affect HCTC participants' costs for health plans in multiple ways. Projections from GAO's analysis of 2010 Internal Revenue Service (IRS) data show that most HCTC participants in 2014 will likely be eligible for less generous tax credits under PPACA than the HCTC.
Specifically, about 69 percent of HCTC participants will likely be ineligible for either a PPACA premium tax credit or Medicaid, or they will likely receive a PPACA premium tax credit less generous than the HCTC. On the other hand, GAO's analysis also found that at least 23 percent will likely be eligible for PPACA premium tax credits more generous than the HCTC. In addition to the PPACA premium tax credit, up to 28 percent of all HCTC participants will likely be eligible for PPACA cost-sharing subsidies—subsidies that will help them pay for deductibles and copays—depending in part on whether or not their state expands Medicaid under PPACA. For HCTC nonparticipants, the projections from GAO's analysis of 2010 IRS data show that as many as 30 percent may be eligible for either Medicaid or a PPACA premium tax credit more generous than the HCTC in 2014, depending in part on whether or not their state expands Medicaid and whether they meet all other eligibility criteria for the PPACA premium tax credits.

In general, the health plan coverage that will be available through the PPACA exchanges will be comparable to coverage in HCTC participants' current plans; however, HCTC participants may have an incentive to choose plans through the exchanges that have different levels of coverage than their HCTC plans. Plans purchased through the PPACA exchanges will be required to provide essential health benefits—including coverage for specific service categories, such as ambulatory care, prescription drugs, and hospitalization—and most HCTC plans cover these categories of services. In addition, the vast majority of HCTC plans in 2012 likely had actuarial values—the expected percentage of costs that a plan will incur for services provided to a standard population—above the minimum actuarial value of 60 percent that health plans sold through the PPACA exchanges will be required to meet.
However, because the PPACA premium tax credit amount will be based on a plan with an actuarial value of 70 percent, HCTC participants who currently have plans with either higher or lower actuarial values and are eligible for PPACA premium tax credits may have an incentive to choose plans that will have different levels of coverage than their HCTC plans. For example, those who have HCTC plans with actuarial values that are higher than 70 percent may have an incentive to shift to health plans with an actuarial value of 70 percent to avoid paying any difference in premiums that could result from choosing plans with higher actuarial values. Similarly, those who now have plans with actuarial values below 70 percent could have the opposite incentive and may purchase plans that offer a higher level of coverage than their current HCTC plans. We provided draft copies of this report to the Department of Health and Human Services and IRS for review, and both provided technical comments, which we incorporated as appropriate.
The nation’s 3,600 colleges, universities, and teaching hospitals continually need to construct new facilities or renovate obsolete or aging facilities. While it has long been recognized that replacing or repairing facilities is a critical problem for these schools, there are no reliable data on the extent of needed construction and renovation. Connie Lee estimates that more than $100 billion will be required during this decade to meet the need. In the 1980s, federal support of academic facilities in the form of grants and loans to schools of higher education was cut drastically. Alternative financing means such as internal funding, fund-raising campaigns, and bank financing options generally were inadequate for significant construction and renovation projects. Issuing municipal bonds to finance such projects was limited mostly to schools with the highest credit standing. To help fundamentally sound but less creditworthy schools issue bonds to finance needed facilities projects, in 1986 the Congress established a college construction loan insurance corporation—Connie Lee.

Municipal bonds are debt securities (long-term loans) that issuers (borrowers) sell to investors (lenders) so they can finance public projects—such as roads, airports, and public college facilities—and certain kinds of private projects—such as private college and hospital facilities—deemed to be serving public purposes. Municipal bonds may also be used to refinance existing debt. For the use of investors’ money, issuers promise to pay investors interest on specific dates and to repay the amount borrowed (principal) on a specific date or dates. Municipal bonds are issued through state and local government agencies. Repayment periods typically are 20 to 30 years. Municipal bonds are typically divided into two categories: general obligation bonds and revenue bonds.
General obligation bonds usually finance public projects and are backed by the taxing authority of the state or local government that issues the bonds; the principal and interest are paid from tax receipts. The principal and interest on revenue bonds, by contrast, are generally paid from income (revenues) produced by the project that the bonds finance. For example, the principal and interest on a revenue bond issued to finance a college dormitory would be paid from fees collected from residents of the dormitory. Some municipal bonds issued to finance public projects may not be funded from state appropriations. In some states, for example, bonds issued by public colleges and universities for dormitory projects do not qualify for state backing. In these states, public schools may issue revenue bonds to finance the construction or renovation of dormitories. The interest income from municipal bonds is generally exempt from both federal income tax and state and local taxes of the state in which the bonds are issued. Because investors receive tax-free interest, they may be willing to purchase bonds that have a lower interest rate than they would otherwise require. The ability to issue tax-exempt bonds at lower interest rates may substantially reduce the issuer’s costs. To make its bonds appealing to investors, an issuer may obtain a credit rating for its bonds from a nationally recognized credit rating firm. These firms independently assess issuers’ ability to make scheduled interest and principal payments when they are due. The firms assign credit ratings to bonds as a gauge of the risk of default—nonpayment of interest or principal. Standard and Poor’s Corporation, for example, assigns ratings ranging from AAA, for the highest quality bonds, to D, for the lowest quality. Standard and Poor’s refers to bonds rated in its top four categories—AAA, AA, A, and BBB—as “investment grade” bonds. 
They are judged to have a high probability of on-time interest and principal payments and little risk of default. Bonds rated in categories BB to C, commonly called “junk bonds,” are referred to by Standard and Poor’s as “noninvestment” or “speculative” grade. They are considered to have a higher risk of default. Bonds rated D are in default with respect to principal or interest payments. Other credit rating firms use similar rating categories. Credit rating firms generally consider revenue bonds to have a higher risk of nonpayment than general obligation bonds because revenue bonds are not backed by the taxing authority of the state or local government and usually depend on the project funded to produce sufficient revenues to make interest and principal payments when due. In addition, Standard and Poor’s generally considers hospitals and private colleges and universities to be among the relatively riskier categories of institutions that issue municipal bonds. Private schools lack the backing of any taxing authority, and some of these schools have limited resources. The health care industry, in general, is viewed as having an uncertain future. To enhance a bond’s credit quality, an issuer may purchase insurance for its bond. Through such insurance, an insurer guarantees investors that it will pay the bond interest and principal when due if the issuer defaults. Rating firms that rate bonds also rate bond insurance companies on their ability to pay claims for defaulted interest or principal payments. Consequently, insured bonds are issued with the rating of the insurer rather than the rating of the issuer. Enhancing the quality of a bond in this manner may persuade investors to accept lower interest rates than they might otherwise. The lower interest cost may more than offset the cost to the issuer of obtaining insurance. Rating firms use a variety of criteria to evaluate the creditworthiness of bond insurers. 
They are judged on capital adequacy, which is the amount of capital they have relative to the amount of debt they have insured; their management experience; their underwriting policies; their financial performance; and other organizational factors. The key area in the assessment, however, is capital adequacy. Since its credit rating is the commodity that the bond insurer sells, the performance guidelines established by firms that rate it exert a considerable influence on its business practices. According to Standard and Poor’s, there are 10 major municipal bond insurance companies in the United States, including Connie Lee, which is one of the newest and smallest of the 10 companies. Nine of the companies, including Connie Lee, are rated AAA by one or more of the major rating firms. The one exception has a AA rating. The municipal bond insurance market is dominated by three insurers that have captured more than 90 percent of the bond insurance market. In 1994, insured municipal bonds totaled about $61 billion, about 37 percent of all municipal bonds that were issued. Connie Lee has approximately a 1-percent share of the overall municipal bond insurance market. It is the only major municipal bond insurance company exclusively insuring schools of higher education. Over the past 3 years, it has insured more BBB schools than any other major municipal bond insurance company. The Congress amended the Higher Education Act in 1986 to establish Connie Lee. The Congress was concerned that there were many fundamentally sound but less creditworthy schools of higher education unable to obtain bond insurance at a reasonable cost, thereby preventing them from issuing municipal bonds to finance needed facilities construction and renovation projects. 
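The insurance trade-off noted earlier, in which the lower interest rate on an insured bond may more than offset the cost to the issuer of obtaining insurance, can be sketched with assumed figures. All rates and dollar amounts below are hypothetical; they are not drawn from the report.

```python
# Hypothetical illustration of the bond insurance trade-off: investors may
# accept a lower rate on an insured (AAA-rated) bond than on the issuer's
# own BBB-rated bond, which can more than offset the insurance premium.
# Every figure below is assumed for illustration.

def lifetime_interest(principal, annual_rate, years):
    """Total interest on a bond paying level annual interest on the full
    principal until maturity (principal repaid at maturity)."""
    return principal * annual_rate * years

principal = 10_000_000   # assumed $10 million revenue bond
years = 25               # within the typical 20- to 30-year repayment period

uninsured = lifetime_interest(principal, 0.070, years)  # assumed BBB rate
insured = lifetime_interest(principal, 0.062, years)    # assumed insured rate
insurance_premium = 150_000                             # assumed up-front premium

net_savings = uninsured - insured - insurance_premium
print(f"net savings from insuring: ${net_savings:,.0f}")  # $1,850,000
```

Under these assumptions the rate reduction saves $2 million in interest over the bond's life, so the issuer comes out ahead even after paying the premium; with a smaller rate spread or a higher premium, insurance would not pay for itself.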
Connie Lee is authorized to insure municipal bonds rated by a national rating firm at or below the lowest investment grade category—the equivalent of Standard and Poor’s BBB and below ratings—issued by schools of higher education; the proceeds of these bonds are to be used to finance the construction and renovation of academic facilities. In 1992, the Congress further amended the act to allow Connie Lee to insure a limited volume of higher rated bonds through calendar year 1997 on the condition that other municipal bond insurance companies declined to insure the bonds. Under the 1986 amendments, Connie Lee was incorporated in February 1987 as a bond insurance holding company. During 1987 and 1988, Connie Lee sold stock to the Department of Education and the Student Loan Marketing Association (Sallie Mae) and, in 1991, to a group of private investors. Currently, Education owns about 14 percent of Connie Lee’s stock; Sallie Mae owns about 36 percent; and the other stockholders own about 50 percent. Under the act, Connie Lee is managed by an 11-member board of directors: 3 appointed by Sallie Mae, 2 appointed by the Secretary of Education, 2 appointed by the Secretary of the Treasury, and 4 elected by the private stockholders. In December 1987, Connie Lee purchased an existing insurance company, which it renamed the Connie Lee Insurance Company, to carry out its insurance operations. It began insurance operations as a bond reinsurer in December 1988, when it received from Standard and Poor’s a AAA rating as a bond reinsurer. It began operating as a primary insurer in October 1991 when it received from Standard and Poor’s a AAA rating as a primary insurer. The Connie Lee Insurance Company is authorized to operate in 49 states, the District of Columbia, and Puerto Rico. 
As of October 1995, proposed legislation under consideration by the Congress would sever Connie Lee’s relationship with the federal government, a process commonly referred to as “privatization.” If enacted, such legislation could change the way in which Connie Lee operates and could affect the types of projects it insures. Between October 29, 1991, the date Connie Lee sold its first primary insurance, and September 30, 1995, Connie Lee insured 95 bonds, totaling about $2.6 billion, for colleges, universities, and teaching hospitals. Of these, 90 were rated BBB, the lowest investment grade rating, and 5 were rated A or better. None was noninvestment grade at the time it was insured; that is, none was rated below BBB, although several have subsequently received ratings in the noninvestment grade category. Almost all were revenue bonds. The five bonds rated A or better were insured after the Higher Education Amendments of 1992. According to Connie Lee officials, each bond was refused insurance by other insurance companies before Connie Lee agreed to insure it, in accordance with the 1992 amendments. The bonds did not exceed the limits on the amount of A-rated business imposed on Connie Lee by the act, as amended. In addition to the 95 bonds that Connie Lee insured, at the end of September 1995, 17 schools were considering whether to accept Connie Lee’s offer of insurance. As of September 30, 1995, Connie Lee had declined to insure 406 bonds because of concerns it had about the schools’ ability to repay bond principal and make interest payments as scheduled. It had also rejected an undetermined number of bonds for insurance because the bonds’ credit ratings were A or better. Data are not available on those applications for insurance because Connie Lee does not maintain such data. Since October 1991, at least 23 HBCUs have approached Connie Lee about obtaining bond insurance for 25 bonds. 
As of September 30, 1995, Connie Lee had insured only one bond for an HBCU—a BBB-rated $2.2 million bond insured in July 1994 for a 4-year public university. Standard and Poor’s has since downgraded that bond into the noninvestment grade category. As of September 30, 1995, this school was continuing discussions with Connie Lee about insuring a second bond, but Connie Lee had not decided whether to offer insurance for the bond. In addition to the bond it insured, Connie Lee offered to insure seven other bonds for HBCUs. As of September 30, 1995, two HBCUs were considering whether to accept Connie Lee’s offers. The remaining five HBCUs did not accept Connie Lee’s offers. Three of the five schools purchased insurance from other companies, whose premiums were lower than Connie Lee’s, school officials said. Another school had no record of Connie Lee’s having quoted a rate to it; it purchased insurance from another company, the school said. The fifth school had issued its bond without insurance and Sallie Mae had bought the bond issue, school officials said. Connie Lee did not insure bonds for at least 16 HBCUs that applied. It declined to insure three bonds because of concerns about the schools’ credit status. Although data are not available, Connie Lee estimated that it turned down at least 13 HBCUs because the schools’ A or better rating made them ineligible for insurance. At the time, federal law limited Connie Lee to insuring bonds rated BBB or below. At two of the three HBCUs that Connie Lee declined to insure, school officials said the schools were denied insurance because Connie Lee believed their student loan default rates were too high. One of the two schools reported on its application to Connie Lee that its default rates in fiscal years 1989, 1990, and 1991 were 27 percent, 32 percent, and 25 percent, respectively, officials of this school said.
The second school’s rates, as reported to Connie Lee, were 25 percent, 32 percent, and 18 percent for the same 3 years, this school said. The third school was unable to provide us with information about why it was denied insurance. Those who had filed the application were no longer at the school and no record of it could be located, according to school officials. Connie Lee offered to insure a second bond for this school; it is one of the two HBCUs that, as of September 30, 1995, was considering whether to accept Connie Lee’s offer of insurance. According to Connie Lee, a school’s Federal Family Education Loan Program (FFELP) (formerly the Guaranteed Student Loan Program) default rate is a critical element in Connie Lee’s decision whether to insure a bond for the school. Connie Lee believes that private schools that rely on funds provided by student loans for a significant portion of their revenues and also have high student loan default rates are at greater risk of defaulting on bond interest and principal payments. The Department of Education uses a school’s student loan default rate to determine the school’s eligibility to participate in FFELP. In 1990, the Congress established a process that Education can use to bar schools with high student loan default rates from continuing to participate in FFELP. Each year, Education assesses a school’s eligibility, which is based on that school’s three most recent available annual loan default rates. To remain eligible, a school’s default rate must be below the statutory threshold in at least 1 of the last 3 consecutive fiscal years. The threshold for determining a school’s eligibility was 35 percent in fiscal years 1991 and 1992 and 30 percent in 1993. Beginning in fiscal year 1994, the threshold has been 25 percent. HBCUs are exempt from the default rate eligibility requirements until July 1, 1998. 
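The eligibility test described above, under which a school remains eligible if its default rate fell below the statutory threshold in at least one of the three most recent fiscal years, can be sketched as follows. The thresholds are those stated in the report; the sample rates are the figures the two schools reported to Connie Lee, and the function is an illustrative simplification of Education's actual process.

```python
# Sketch of the FFELP default-rate eligibility test described above: a
# school remains eligible if its default rate was below the statutory
# threshold in at least one of its three most recent fiscal years.
# (HBCUs were exempt from this test until July 1, 1998.)

# Statutory thresholds by assessment year, in percent, per the report.
THRESHOLDS = {1991: 35, 1992: 35, 1993: 30, 1994: 25}

def ffelp_eligible(three_most_recent_rates, assessment_year):
    threshold = THRESHOLDS.get(assessment_year, 25)  # 25% from FY1994 on
    return any(rate < threshold for rate in three_most_recent_rates)

# The first school's reported rates for FY1989-1991 clear the 35 percent
# threshold in force in FY1992, since 27 and 25 are both below it.
print(ffelp_eligible([27, 32, 25], 1992))  # True

# Under the tighter 25 percent threshold in force from FY1994 on, the
# same three rates would fail the test: none is below 25.
print(ffelp_eligible([27, 32, 25], 1994))  # False
```

As the second case suggests, a school can pass Education's eligibility test and still be viewed by Connie Lee as too risky to insure; the report notes Connie Lee applied its own judgment about default rates.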
There are several reasons why some colleges, universities, and teaching hospitals have not obtained bond insurance from Connie Lee. Federal and state law and industry practices impose limits on Connie Lee. In addition, for many schools, bonds or bond insurance is unnecessary or unsuitable. Federal law limits Connie Lee to a defined sector of the bond market: bonds generally rated BBB and below, issued by colleges, universities, and teaching hospitals, to finance academic facilities. Many schools, however, are financially strong. If they issue bonds, their bonds most likely would be rated above BBB. In addition, public schools’ bonds that are fully or partially backed by the state in which they are located are usually rated A or better because of the state’s A or better rating. Connie Lee generally is unable to insure bonds rated A or better. States require municipal bond insurance companies that operate in them, including Connie Lee, to have a specified percentage of their business in investment grade categories, that is, bonds rated BBB or above. In effect, the highest percentage required by any state in which a company operates sets the minimum standard for the company for all states in which it operates. Because two states in which Connie Lee operates require bond insurance companies to have 95 percent of their business in the investment grade categories, Connie Lee must meet this 95-percent standard in all jurisdictions in which it operates. As previously discussed, credit rating firms, because they determine the rating that a bond insurance company can confer on bonds, have influence over the business practices of bond insurers. Credit rating firms’ guidelines effectively restrict the amount of noninvestment grade business an insurer can have and still maintain its rating. 
For example, under Standard and Poor’s guidelines, Connie Lee, as well as the other bond insurance companies it rates, should have at least 50 percent more capital for any noninvestment grade business than for investment grade business.

Many colleges, universities, and teaching hospitals do not need to issue bonds or obtain bond insurance. In some states, public schools receive funds from the state for the construction and renovation of facilities. In addition, some public and private schools receive funds for capital projects from other sources, such as endowments. Consequently, these schools may not have to incur debt to finance the cost of capital projects. Furthermore, some schools are fiscally conservative, preferring to save for projects rather than incur debt. However, schools that consider issuing a bond must take into account such critical factors as the size of the debt to be incurred and the costs to issue a bond. Generally, it is not cost-effective to issue bonds of less than $4 million, according to Connie Lee. Costs—such as fees for financial advisers, underwriters, attorneys, credit rating firms, and brokers, as well as insurance premiums—can add substantially to the total cost of a bond. Many schools, especially small schools, either (1) do not need facilities costing millions of dollars or (2) are unwilling or unable to take on debt of that magnitude or to commit to repayment periods of 20 to 30 years. These schools may be able to obtain financing from sources such as bank loans. In addition, the credit rating of a bond may influence a school’s decision about whether to issue a bond and, if so, whether to insure it. Schools whose bonds would be rated BBB or lower may decide not to issue the bonds because the interest rates and insurance costs may be too high.
On the other hand, schools, regardless of their credit ratings, may decide to issue bonds without insurance because they believe that the bonds would sell without the enhancement of insurance. Issuing a bond insured by Connie Lee is just one way to finance the construction and renovation costs of HBCUs’ academic facilities. Like non-HBCU schools, not all HBCUs need to, can, or want to issue bonds, and not all HBCUs that issue bonds need to, or can, obtain bond insurance from Connie Lee. HBCU officials described other ways in which the schools can finance their projects. Some HBCUs may issue bonds without insuring them, and some may be able to obtain bond insurance from companies other than Connie Lee. Public schools located in states with an A or better credit rating may issue bonds that have the same rating as their states. Because the 49 public HBCUs, except for one located in the District of Columbia, are located in states with an A or better rating, they may be able to issue bonds that are rated A or better. Financially strong private HBCUs also may be able to issue bonds rated A or better. However, if these schools issue bonds that are rated A or better, they may decide to issue them without obtaining insurance. As discussed earlier, one HBCU that Connie Lee offered to insure issued its bond without insurance, and four obtained insurance from other companies. As discussed earlier with respect to all schools, some HBCUs, especially small schools, either do not need multimillion-dollar facilities or are unwilling or unable to take on the large debt to finance them. Of the 102 HBCUs, 53 had an enrollment in 1993 of less than 2,000 students. Some HBCUs traditionally do not issue debt securities to finance capital projects because their managers are fiscally conservative. Typically, these HBCUs attempt to raise needed funds from alumni and friends rather than incur debt. 
Other HBCUs use these and other means to finance their projects: Federal and state governments, banks, and private foundations and companies make funds available to HBCUs for various purposes, including funding construction and renovation projects. For example, 27 federal departments and agencies support HBCUs to some extent. Federal programs provide funds to HBCUs for administration; research and development; faculty development; student tuition assistance; and the acquisition, construction, maintenance, and renovation of facilities and equipment. Specifically, the Department of Education provides money for HBCUs through several grant programs. For example, the Strengthening HBCUs Program provides for a $500,000 minimum allotment for each of its grant recipients. Through another program, the Department of the Interior’s National Park Service maintains part of the campus of Tuskegee University, a private school, because it has been designated a national historic site. Tuskegee recently received $14 million from the Park Service to renovate four buildings. Tuskegee also receives financial support from the state of Alabama; it received about $3 million in 1994. The United Negro College Fund, a consortium of 41 private HBCUs, is a private foundation that provides funds to member HBCUs. Since its inception in 1944, it has raised nearly $1 billion for its members. The schools may use the funds for scholarships, program and faculty development, administration, endowments, and facilities construction and renovation. Also, Sallie Mae financed in fiscal year 1994 construction projects totaling about $343 million at colleges, universities, and teaching hospitals, including HBCUs. 
A number of the officials we interviewed at HBCUs and at Education suggested that Education’s HBCU Capital Financing program—authorized in 1992 to finance the construction and renovation of educational facilities at HBCUs—is a more appropriate vehicle than Connie Lee for serving schools that, like many HBCUs, are small and have limited resources. The program is just getting under way. The Secretary of Education has selected a private, for-profit corporation to issue bonds, the proceeds of which will be loaned to eligible HBCUs for construction projects. Education estimates that the corporation will issue approximately $357 million in bonds in 1995 and expects the first loans to be made before December 31, 1995. Eligibility for the loan program is based on criteria that Education and the corporation developed. Connie Lee officials suggested several federal legislative actions that they said would help Connie Lee serve a broader range of schools, including HBCUs, targeted by its mission. For example, officials suggested removing the limits in federal law that restrict Connie Lee to (1) serving only schools of higher education and (2) insuring primarily bonds rated BBB and below. This would permit Connie Lee to operate the way other bond insurance companies operate; that is, to diversify its business by insuring bonds of other types of issuers and to balance the lower-rated bonds it insures by also insuring higher-rated bonds. Another suggestion Connie Lee officials offered was to give Connie Lee authority to borrow money from the federal government, as needed, to pay claims on defaulted bonds that it insured. This could be accomplished by giving Connie Lee a direct line of credit with the U.S. Treasury or allowing it to borrow from Treasury’s Federal Financing Bank. In effect, this would be a federal guaranty—a federal reinsurance of Connie Lee—that could be made applicable only to bonds rated below BBB. 
The guaranty could be administered by Connie Lee through a separate subsidiary company, unrelated to the Connie Lee Insurance Company, that would not be subject to state insurance requirements. Structuring support in this way would preclude Connie Lee from violating state licensing requirements for the proportion of debt it insures that must be investment grade, Connie Lee officials suggested. A one-time federal subsidy for Connie Lee was yet another suggestion. Although Connie Lee currently has sufficient capital to continue insuring only BBB-rated bonds, it would need considerably more capital to insure bonds rated below BBB, its officials said. This is because of Standard and Poor’s high capital requirements for insuring noninvestment grade issues. A federal subsidy might take the form of a low-interest loan or a grant to Connie Lee, Connie Lee officials said. Allowing Connie Lee to make loans to schools that usually cannot issue bonds or that are unable to obtain bond insurance because their bonds would be relatively low rated was also suggested. Connie Lee would borrow money from the Treasury or the Federal Financing Bank to make loans to schools whose bonds would be rated BBB, but especially to those that would be noninvestment grade, they said. This would be a new line of business for Connie Lee and could be administered apart from its bond insurance business through a separate subsidiary company. Connie Lee officials realized that their proposed actions would require federal legislation. While we recognize that each of these actions may have its disadvantages and advantages, we did not assess their feasibility or the need for them because this was not within the scope of our review. We recognize, however, that they could increase federal costs and contingent financial liability. Connie Lee is limited to insuring bonds issued by a narrow range of schools. 
More specifically, federal law generally limits it to insuring bonds that are relatively greater credit risks, that is, bonds rated BBB or below. State law, however, constrains bond insurance companies, including Connie Lee, to insuring 95 percent of their business in bonds rated BBB and above. Industry practice further constrains these companies. Rating firms’ guidelines require larger amounts of capital for insuring bonds rated below BBB than for insuring bonds rated BBB and above. Among the schools that Connie Lee is permitted to serve, some—including some HBCUs—do not need or want to issue bonds or to insure the bonds that they issue. For example, some bonds that public schools issue do not need insurance because the bonds carry the states’ credit ratings; these high ratings reduce or eliminate the benefits of insurance. Some schools, both public and private, are financially strong. They do not have to incur debt to finance their construction or renovation projects, or they can issue bonds without insurance. Yet other schools find that the cost to issue bonds or the size of the debt incurred makes using bonds to finance a project impractical. Finally, some schools find alternative sources of financing available. For HBCUs, for example, the Department of Education’s recently implemented HBCU Capital Financing program is such an alternative. Connie Lee and the Department of Education commented on a draft of this report. Connie Lee provided us with information to update and clarify the report, and we incorporated its comments as appropriate. Education did not disagree with our facts. Instead, it chose to state its support for privatizing Connie Lee. It also highlighted programs it administers that provide HBCUs funds for constructing and renovating academic facilities. These programs are noted in our report. (See app. IV.) 
We will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs, the Senate Committee on Labor and Human Resources, the House Committee on Economic and Educational Opportunities, and the House Committee on Government Reform and Oversight; the Secretary of Education; the President of Connie Lee; and other interested parties. We will make copies available to others on request. If you or your staff have any questions about this report, please call me or Joseph J. Eglin, Jr., Assistant Director, at (202) 512-7014. Major contributors to this report include John T. Carney, Sheila R. Nicholson, and Laurel H. Rabin. Connie Lee’s financial record is sound, according to Standard and Poor’s, which has rated Connie Lee’s claims-paying ability “AAA” each year since 1991. In reaffirming Connie Lee’s rating in May 1995, Standard and Poor’s said the AAA rating is based on Connie Lee’s very strong ratio of capital resources to insurance in force, stable financial performance, pragmatic strategic and business planning, and proven underwriting practices. In its May 1995 ranking of the 10 major municipal bond insurers, however, Standard and Poor’s most often ranked Connie Lee seventh or below in the 16 performance categories assessed. But Standard and Poor’s noted that Connie Lee’s rankings, compared with the other companies, reflect that it is (1) a new insurer and, therefore, suffers from the disproportionately high expense base of a start-up company; (2) a small company; and (3) an insurer restricted to the higher-education sector of the bond market. During 1992, the first full year that Connie Lee insured bonds, and continuing into 1993, falling interest rates sparked an increase in bond refinancing that caused the volume of municipal bond insurance and profits to surge throughout the bond insurance industry. 
However, as interest rates increased in 1994, the overall municipal bond and associated insurance volume dropped, as bond refinancing significantly decreased, causing profits to decline industrywide. Connie Lee’s net premiums written declined 36 percent in 1994, after having increased 148 percent in 1992 and decreased 22 percent in 1993. But net premiums written fell 29 percent for municipal bond insurers as a whole in 1994, after having increased 25 percent in 1993. Despite this, Connie Lee’s key financial indicators have shown positive trends. As reflected in Connie Lee’s audited financial statements, as of December 31, 1994, Connie Lee’s total insurance in force was $6.6 billion, an increase of 25 percent since December 31, 1992. For the year ended December 31, 1994, Connie Lee’s total revenue was $19.7 million, an increase of 8 percent since the end of 1993. Total revenue had shown an increase of 18 percent during 1993. Net income for 1994 was $8 million, an increase of only 9 percent over 1993, although the increase from 1992 to 1993 was 27 percent. Connie Lee’s total assets rose 3 percent between 1993 and 1994, totaling $225 million on December 31, 1994. Total assets had shown an increase of 13 percent during 1993. In 1994, Connie Lee reported its return on equity as 5.6 percent and its return on assets as 3.6 percent. Both indicators rose slightly in 1994, but both are below estimates of the industry averages, as reported by industry analysts, which were 13.7 percent for return on equity and 5.3 percent for return on assets. At the end of 1994, Connie Lee held $1 in capital for every $57 of exposure to loss, that is, every $57 it was obligated to pay in case of defaults, whereas the average for the industry as a whole was $1 of capital for every $134 of exposure to loss. Since its inception, Connie Lee has not incurred any losses from bond defaults. 
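The capital-to-exposure figures above can be checked with simple arithmetic. In the sketch below, the insurance in force ($6.6 billion) and the leverage ratios ($1 of capital per $57 of exposure for Connie Lee; $1 per $134 for the industry average) come from this report; the implied capital base is derived here for illustration and is not a reported figure.

```python
# Back-of-the-envelope check of the capital-to-exposure figures cited above.
# The implied capital base is derived, not a figure reported by Connie Lee.

insurance_in_force = 6.6e9          # Connie Lee, December 31, 1994
exposure_per_capital_dollar = 57    # Connie Lee: $57 of exposure per $1 of capital
industry_exposure_per_dollar = 134  # industry average

implied_capital = insurance_in_force / exposure_per_capital_dollar
leverage_gap = industry_exposure_per_dollar / exposure_per_capital_dollar

print(f"Implied capital base: ${implied_capital / 1e6:.0f} million")
print(f"Industry carries about {leverage_gap:.1f}x the exposure per dollar of capital")
```

Under these reported ratios, Connie Lee holds more than twice as much capital per dollar of exposure as the industry average, consistent with Standard and Poor's description of its very strong capital position.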
However, in 1994, Connie Lee set up a loss reserve of $1.5 million to cover potential losses due to the default of a hospital it reinsured. According to Standard and Poor’s and other industry analysts, the amount of capital held by bond insurers is so far in excess of minimum levels required to maintain a AAA rating that it represents a problem for the industry—too much idle capital. Because of the slowdown in the bond market and associated bond insurance business, Standard and Poor’s is urging municipal bond insurers to put unutilized capital to use by opening new lines of business instead of lowering premiums or insuring riskier projects. Among the new lines of business that Standard and Poor’s suggests are investment management services and international insurance. But Connie Lee’s authorizing legislation restricts it from expanding into new lines of business. To develop information for this report, we reviewed federal and state laws applicable to the authorization and establishment of Connie Lee, and Connie Lee’s legislative history. We also reviewed literature on Connie Lee and the bond insurance industry. We interviewed officials at Connie Lee to determine its policies and practices for obtaining and approving applications for bond insurance and its suggestions that would enable it to better serve more schools. We also collected data on the colleges, universities, and teaching hospitals that were either approved or rejected—from October 29, 1991, through June 30, 1995—by Connie Lee for bond insurance. We analyzed the data on the schools Connie Lee approved for insurance. In addition, we obtained information and collected data on Connie Lee’s financial record and profitability through December 1994, as reported in Connie Lee’s audited financial statements. We did not assess Connie Lee’s financial condition, but instead relied on Standard and Poor’s credit analysis. Nor did we independently verify the information provided by Connie Lee or others. 
We interviewed officials in a judgmentally selected sample at HBCUs and other schools that had applied to Connie Lee for bond insurance. We included schools that had obtained insurance from Connie Lee and those that had not. We also interviewed representatives of the bond insurance industry, Standard and Poor’s, and the Department of Education. We did not assess the extent of needed construction and renovation among colleges, universities, and teaching hospitals. We also did not determine the number of schools for which Connie Lee might be an appropriate vehicle for helping to finance facilities’ construction and renovation.
Pursuant to a congressional request, GAO provided information on how the College Construction Loan Insurance Association (Connie Lee) has served the needs of 102 Historically Black Colleges and Universities (HBCU). GAO found that: (1) Connie Lee insured 95 bonds totaling $2.6 billion from October 1991 through September 1995, 90 of which received the lowest investment grade rating; (2) Connie Lee offered to insure 8 HBCU bonds, declined to insure 3 HBCU bonds it considered risky, determined that 13 HBCUs were rated above the category of risk for which they applied, and was undecided on whether to insure 1 HBCU bond; (3) Connie Lee is limited to insuring low-grade bonds by federal and state laws, as well as by industry practices; (4) some HBCUs may finance the construction and renovation of their facilities by issuing bonds without insurance, obtaining bond insurance from companies other than Connie Lee, or using loans or grants from federal and state governments, alumni, and private foundations; and (5) Connie Lee officials suggested removing the federal limits on the types and credit ratings of bonds Connie Lee may insure, giving Connie Lee authority to borrow federal funds to pay claims on defaulted bonds, and providing Connie Lee with additional loans or grants for capital.
Biofuels are an alternative to petroleum-based transportation fuels and are derived from renewable resources. Currently, most biofuels are derived from corn and soybeans. Ethanol is the most commonly produced biofuel in the United States, and about 98 percent of it is made from corn that is grown primarily in the Midwest. Corn is converted to ethanol at biorefineries through a fermentation process that requires water inputs and outputs at various stages of the production process—from growth of the feedstock to conversion into ethanol. While ethanol is primarily produced from corn grains, next generation biofuels, such as cellulosic ethanol and algae-based fuels, are being promoted for various reasons, including their potential to boost the nation’s energy independence and lessen environmental impacts, including on water. Cellulosic feedstocks include annual or perennial energy crops such as switchgrass, forage sorghum, and miscanthus; agricultural residues such as corn stover (the cobs, stalks, leaves, and husks of corn plants); and forest residues such as forest thinnings or chips from lumber mills. Some small biorefineries have begun to process cellulosic feedstocks on a pilot-scale basis; however, no commercial-scale facilities are currently operating in the United States. In light of the federal renewable fuel standard’s requirements for cellulosic ethanol starting in 2010, DOE is providing $272 million to support the cost of constructing four small biorefineries that will process cellulosic feedstocks. In addition, in recent years, researchers have begun to explore the use of algae as a biofuel feedstock. Algae produce oil that can be extracted and refined into biodiesel, and algae’s potential yield per acre is estimated to be 10 to 20 times higher than that of the next closest quality feedstock. Algae can be cultivated in open ponds or in closed systems using large raceways of plastic bags containing water and algae. 
Thermoelectric power plants use a fuel source—for example, coal, natural gas, nuclear material such as uranium, or the sun—to boil water to produce steam. The steam turns a turbine connected to a generator that produces electricity. Traditionally, water has been withdrawn from a river or other water source to cool the steam back into liquid so it may be reused to produce additional electricity. Most of the water used by a traditional thermoelectric power plant is for this cooling process, but water may also be needed for other purposes in the plant, such as for pollution control equipment. In 2000, thermoelectric power plants accounted for 39 percent of total U.S. freshwater withdrawals. EIA annually reports data on the water withdrawals, consumption, and discharges of power plants of a certain size, as well as some information on water source and cooling technology type. These data are used by federal agencies and other researchers in estimating overall power plant water use and determining how this use has changed and will continue to change. Our work to date indicates that while the water supply and water quality effects of producing corn-based ethanol are fairly well understood, less is known about the effects of the next generation of feedstocks and fuels. The cultivation of corn for ethanol production can require substantial quantities of water—from 7 to 321 gallons per gallon of ethanol produced—depending on where the corn is grown and how much irrigation water is used. Furthermore, corn is a relatively resource-intensive crop, requiring higher rates of fertilizer and pesticide applications than many other crops; some experts believe that additional corn production for biofuels conversion will lead to an increase in fertilizer and sediment runoff and in the number of impaired streams and other water bodies. 
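The wide cultivation range cited above—7 to 321 gallons of water per gallon of ethanol—translates into very different demands at plant scale. The sketch below is illustrative only: the per-gallon range comes from this statement, but the 100-million-gallon-per-year ethanol volume is a hypothetical assumption used to show the scale involved.

```python
# Illustrative scale-up of the cultivation water range cited above
# (7 to 321 gallons of water per gallon of ethanol, depending on region
# and irrigation). The 100-million-gallon-per-year ethanol volume is a
# hypothetical assumption, not a figure from this statement.

WATER_PER_GALLON = (7, 321)  # gallons of water per gallon of ethanol (low, high)
annual_ethanol = 100e6       # hypothetical annual ethanol output, gallons

def annual_water_demand(gal_per_gal, ethanol_gallons):
    """Scale a (low, high) per-gallon water range to annual demand."""
    return tuple(r * ethanol_gallons for r in gal_per_gal)

low, high = annual_water_demand(WATER_PER_GALLON, annual_ethanol)
print(f"Cultivation water demand: {low/1e9:.1f} to {high/1e9:.1f} billion gallons per year")
```

Under these assumptions, supplying a single plant's feedstock could require anywhere from under a billion to tens of billions of gallons of water annually, which is why the geography of feedstock cultivation matters so much for water supplies.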
Some researchers and conservation officials have told us that the impact of corn-based ethanol on water supply and water quality could be mitigated through research into developing additional drought-tolerant and more nutrient-efficient crop varieties thereby decreasing the amount of water needed for irrigation and the amount of fertilizer that needs to be applied. Furthermore, experts also mentioned the need for additional data on current aquifer water supplies and research on the potential of biofuel cultivation to strain these water sources. In contrast to corn-based ethanol, our work to date indicates that much less is known about the effects that large-scale cultivation of cellulosic feedstocks will have on water supplies and water quality. Since potential cellulosic feedstocks have not been grown commercially to date, there is little information on the cumulative water, nutrient, and pesticide needs of these crops, and it is not yet known what agricultural practices will actually be used to cultivate these feedstocks on a commercial scale. For example, while some experts assume that perennial feedstocks will be rainfed, other experts have pointed out that to achieve maximum yields for cellulosic crops, farmers may need to irrigate these crops. Furthermore, because water supplies vary regionally, additional research is needed to better understand geographical influences on feedstock production. For example, the additional withdrawals in states relying heavily on irrigation for agriculture, such as Nebraska, may place new demands on the Ogallala Aquifer, an already strained resource from which eight states draw water. In addition, if agricultural residues—such as corn stover—are to be used, this could negatively affect soil quality, increase the need for fertilizer, and lead to increased sediment runoff to waterways. 
Considerable uncertainty exists regarding the maximum amount of residue that can be removed for biofuels production while maintaining soil and water quality. USDA, DOE, and some academic researchers are attempting to develop new projections on how much residue can be removed without compromising soil quality, but sufficient data are not yet available to inform their efforts, and it may take several years to accumulate such data and disseminate it to farmers for implementation. Experts we spoke with generally agree that more research on how to produce cellulosic feedstocks in a sustainable way is needed. Our work also indicates that even less is known about newer biofuels feedstocks such as algae. Algae have the added advantage of being able to use lower-quality water for cultivation, according to experts. However, the impact on water supply and water quality will ultimately depend on which cultivation methods are determined to be the most viable. Therefore, research is needed on how best to cultivate this feedstock in order to maximize its potential as a biofuel feedstock and limit its potential impacts on water resources. Other areas we have identified that relate to water and algae cultivation in need of additional research include: Oil extraction. Additional research is needed on how to extract the oil from the algal cell in such a way as to preserve the water contained in the cell along with the oil, thereby allowing some of that water to be recycled back into the cultivation process. Contaminants. Information is needed on how to manage the contaminants that are found in the algal cultivation water and how any resulting wastewater should be handled. Uncertainty also exists regarding the water supply impacts of converting feedstocks into biofuels. Biorefineries require water for processing the fuel and need to draw from existing water resources. 
Water consumed in the corn-ethanol conversion process has declined over time with improved equipment and energy efficient design, according to a 2009 Argonne National Laboratory study, and is currently estimated at 3 gallons of water required for each gallon of ethanol produced. However, the primary source of freshwater for most existing corn ethanol plants is from local groundwater aquifers and some of these aquifers are not readily replenished. For the conversion of cellulosic feedstocks, the amount of water consumed is less defined and will depend on the process and on technological advancements that improve the efficiency with which water is used. Current estimates range from 1.9 to 5.9 gallons of water, depending on the technology used. Some experts we spoke with said that greater research is needed on how to manage the full water needs of biorefineries and reduce these needs further. Similar to current and next generation feedstock cultivation, additional research is also needed to better understand the impact of biorefinery withdrawals on aquifers and to consider potential resource strains when siting these facilities. Our work to date also indicates that additional research is needed on the storage and distribution of biofuels. Ethanol is highly corrosive and poses a risk of damage to pipelines, and underground and above-ground storage tanks, which could in turn lead to releases to the environment that may contaminate groundwater, among other issues. These leaks can be the result of biofuel blends being stored in incompatible tank systems—those that have not been certified to handle fuel blends containing more than 10 percent ethanol. While EPA currently has some research under way, additional study is needed into the compatibility of higher fuel blends, such as those containing 15 percent ethanol, with the existing fueling infrastructure. 
To overcome potential compatibility issues, future research is needed on other conversion technologies that can be used to produce renewable and advanced fuels that are capable of being used in the existing infrastructure. In our work to date, we have found that (1) the use of advanced cooling technologies can reduce freshwater use at thermoelectric power plants, but federal data may not fully capture this industry change; (2) the use of alternative water sources can also reduce freshwater use, but federal data may not systematically capture this change; and (3) federal research under way is focused on examining efforts to reduce the use of freshwater in thermoelectric power plants. Advanced cooling technologies offer the promise of reducing freshwater use by thermoelectric power plants. Unlike traditional cooling technologies that use water to cool the steam in power plants, advanced cooling technologies carry out all or part of the cooling process using air. Power plant developers told us they consider using these water-conserving technologies in new plants, particularly in areas with limited available water supplies. While these technologies can significantly reduce the amount of water used in a plant—and in some cases eliminate the use of water for cooling—their use entails a number of challenges. For example, plants using advanced cooling technologies may cost more to build and operate; require more land; and, because these technologies can consume a significant amount of energy themselves, produce lower net electricity output—especially in hot, dry conditions. However, eliminating or minimizing freshwater use by incorporating an advanced cooling technology provides a number of potential benefits to plant developers, including minimizing the costs associated with acquiring, transporting, and treating water, as well as eliminating impacts on the environment associated with water withdrawals, consumption, and discharge. 
In addition, the use of these advanced cooling technologies may provide the flexibility to build power plants in locations not near a source of water. For these reasons, a number of power plant developers in the United States and across the world have adopted advanced cooling technologies, but according to EIA officials, the agency’s forms have not been designed to collect information on the use of advanced cooling technologies. Moreover, the instruments the agency uses to collect these data were developed many years ago and have not been recently updated. EIA officials have told us that while some plants may choose to report this information, they may not do so consistently or in such a way that allows comprehensive identification of the universe of plants using advanced cooling technologies. Water experts and federal agencies we spoke to during the course of our work identified value in the annual EIA data on cooling technologies, but some explained that not having data on advanced cooling technologies limits public understanding of their prevalence and analysis of the extent to which their adoption results in a significant reduction in freshwater use. According to EIA officials, the agency is currently redesigning the instrument it uses to collect these data and expects to begin using the revised instrument in 2011. In addition, during the course of our work we noted that in 2002, EIA discontinued reporting water-related data for nuclear power plants, including water use and cooling technology. As we develop our final report, we will be looking at various suggestions that we can make to DOE to improve its data collection efforts. Our work to date also indicates that the use of alternative water sources can substantially reduce or eliminate the need to use freshwater for power plant cooling at an individual plant. 
Alternative water sources that may be usable for power plant cooling include treated effluent from sewage treatment plants; groundwater that is unsuitable for drinking or irrigation because it is high in salts or other impurities; industrial water, such as water generated when extracting minerals like oil, gas, and coal; and others. Use of these alternative water sources can ease the development process where freshwater sources are in short supply and lower the costs associated with obtaining and using freshwater when freshwater is expensive. Because of these advantages, alternative water sources play an increasingly important role in reducing power plant reliance on freshwater, but can pose challenges, including requiring special treatment to avoid adverse effects on cooling equipment, requiring additional efforts to comply with relevant regulations, and limiting the potential locations of power plants to those nearby an alternative water source. These challenges are similar to those faced by power plants that use freshwater, but they may be exacerbated by the lower quality of alternative water sources. Power plant developers we spoke with told us they routinely consider use of alternative water sources when developing their power plant proposals. Moreover, a 2007 report by Argonne National Laboratory indicates that the use of treated municipal wastewater at power plants has become more common, with 38 percent of power plants after 2000 using reclaimed water. EIA collects annual data from power plants on their water use and water source. However, according to EIA officials, while some plants report using an alternative water source, many may not be reporting such information since EIA’s data collection form was not designed to collect data on these freshwater alternatives. 
One expert we spoke with told us that not having data on the use of alternative water sources at power plants limits public understanding of these trends and the extent to which these approaches are effective in reducing freshwater use. As we develop our final report, we plan to also develop suggestions for DOE that can improve this data gathering process. Power plant developers may choose to reduce their use of freshwater for a number of reasons, such as when freshwater is unavailable or costly to obtain, to comply with regulatory requirements, or to address public concern. However, a developer’s decision to deploy an advanced cooling technology or an alternative water source depends on an evaluation of the tradeoffs between the water savings and other benefits these alternatives offer and the cost involved. For example, where water is unavailable or prohibitively expensive, power plant developers may determine that despite the challenges, advanced cooling technologies or alternative water sources offer the best option for getting a potentially profitable plant built in a specific area. While private developers make key decisions on what types of power plants to build and where to build them, and how to cool them based on their views of the costs and benefits of various alternatives, government research and development can be a tool to further the use of alternative cooling technologies and alternative water supplies. In this regard, the Department of Energy’s National Energy Technology Laboratory (NETL) plays a central role in DOE’s research and development effort. In recent years, NETL has funded research and development projects through its Innovations for Existing Plants program aimed at minimizing the challenges of deploying advanced cooling technologies and using alternative water sources at existing plants, among other things. 
In 2008, DOE awarded about $9 million to support research and development of projects that, among other things, could improve the performance of advanced cooling technologies, recover water used to reduce emissions of air pollutants at coal plants for reuse, and facilitate the use of alternative water sources such as polluted water for cooling. Such research endeavors, if successful, could alter the trade-off analysis power plant developers conduct in favor of nontraditional cooling alternatives. Ensuring sufficient supplies of energy and water will be essential to meeting the demands of the 21st century. This task will be particularly difficult, given the interdependency of energy production with water supply and water quality and the strains that both these resources currently face. DOE, together with other federal agencies, has a key role to play in providing key information, helping to identify ways to improve the productivity of both energy and water, partnering with industry to develop technologies that can lower costs, and analyzing what progress has been made along the way. While we recognize that DOE currently has a number of ongoing research efforts to develop information and technologies that will address various aspects of the energy-water nexus, our work indicates that there are a number of areas on which to focus future research and development efforts. Investments in these areas will provide information to help ensure that we are balancing energy independence and security with effective management of our freshwater resources. Mr. Chairman, that concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee might have. For further information on this testimony, please contact me at 202-512-3841 or [email protected]. 
Key staff contributors to this testimony were Jon Ludwigson, Assistant Director; Elizabeth Erdmann, Assistant Director; Scott Clayton; Paige Gilbreath; Miriam Hill; Randy Jones; Micah McMillan; Nicole Rishel; Swati Thomas; Lisa Vojta; and Rebecca Wilson. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
Water and energy are inextricably linked--energy is needed to pump, treat, and transport water, and large quantities of water are needed to support the development of energy. However, both water and energy may face serious constraints as demand for these vital resources continues to rise. Two examples that demonstrate the link between water and energy are the cultivation and conversion of feedstocks, such as corn, switchgrass, and algae, into biofuels; and the production of electricity by thermoelectric power plants, which rely on large quantities of water for cooling during electricity generation. At the request of this committee, GAO has undertaken three ongoing studies focusing on the water-energy nexus related to (1) biofuels and water, (2) thermoelectric power plants and water, and (3) oil shale and water. For this testimony, GAO is providing key themes that have emerged from its work to date on the research and development and data needs with regard to the production of biofuels and electricity and their linkage with water. GAO's work on oil shale is in its preliminary stages and further information will be available on this aspect of the energy-water nexus later this year. While the effects of producing corn-based ethanol on water supply and water quality are fairly well understood, less is known about the effects of the next generation of biofuel feedstocks. Corn cultivation for ethanol production can require from 7 to 321 gallons of water per gallon of ethanol produced, depending on where it is grown and how much irrigation is needed. Corn is also a relatively resource-intensive crop, requiring higher rates of fertilizer and pesticides than many other crops. In contrast, little is known about the effects of large-scale cultivation of next generation feedstocks, such as cellulosic crops. 
Since these feedstocks have not been grown commercially to date, there are limited data on the cumulative water, nutrient, and pesticide needs of these crops and on the amount of these crops that could be harvested as a biofuel feedstock without compromising soil and water quality. Uncertainty also exists regarding the water supply impacts of converting cellulosic feedstocks into biofuels. While water usage in the corn-based ethanol conversion process has been declining and is currently estimated at 3 gallons of water per gallon of ethanol, the amount of water consumed in the conversion of cellulosic feedstocks is less well defined and will depend on the process and on technological advancements that improve the efficiency with which water is used. Finally, additional research is needed on the storage and distribution of biofuels. For example, to overcome incompatibility issues between ethanol and the current fueling and distribution infrastructure, research is needed on conversion technologies that can be used to produce renewable fuels capable of being used in the existing infrastructure. With regard to power plants, GAO has found that key efforts to reduce use of freshwater at power plants are under way but may not be fully captured in existing federal data. In particular, advanced cooling technologies that use air, not water, for cooling the plant can sharply reduce or even eliminate the use of freshwater, thereby reducing the costs associated with procuring water. However, plants using these technologies may cost more to build and produce lower net electricity output--especially in hot, dry conditions. Nevertheless, a number of power plant developers in the United States have adopted advanced cooling technologies, but current federal data collection efforts may not fully document this emerging trend. Similarly, plants can use alternative water supplies such as treated waste water from municipal sewage plants to sharply reduce their use of freshwater. 
Use of these alternative water sources can also lower the costs associated with obtaining and using freshwater when freshwater is expensive, but can pose other challenges, including requiring special treatment to avoid adverse effects on cooling equipment. Alternative water sources play an increasingly important role in reducing power plant reliance on freshwater, but federal data collection efforts do not systematically collect data on the use of these water sources by power plants. To help improve the use of alternatives to freshwater, in 2008, the Department of Energy awarded about $9 million to examine, among other things, ways to improve the performance of advanced cooling technologies. Such research is needed to help identify cost-effective alternatives to traditional cooling technologies.
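For a sense of scale, the per-gallon figures cited above for corn ethanol can be combined into a back-of-the-envelope footprint calculation. The sketch below is purely illustrative: the plant size is a hypothetical assumption, and actual footprints vary widely by region, irrigation practice, and conversion technology.

```python
# Back-of-the-envelope water footprint for corn ethanol, using the figures
# cited above: 7 to 321 gallons of water per gallon of ethanol for
# cultivation (depending on irrigation needs) plus about 3 gallons for
# conversion. Purely illustrative; not a GAO estimate.

CULTIVATION_LOW, CULTIVATION_HIGH = 7, 321  # gallons water / gallon ethanol
CONVERSION = 3                              # gallons water / gallon ethanol

low = CULTIVATION_LOW + CONVERSION
high = CULTIVATION_HIGH + CONVERSION
print(f"total water per gallon of ethanol: {low} to {high} gallons")

# Hypothetical 100-million-gallon-per-year ethanol plant (assumed size):
annual_output = 100_000_000
print(f"annual water demand: {low * annual_output:,} "
      f"to {high * annual_output:,} gallons")
```

Even under these rough assumptions, the two-orders-of-magnitude spread between rain-fed and heavily irrigated corn dominates the total, which is why siting matters far more than conversion efficiency in this calculation.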
|
The Bureau puts forth tremendous effort to conduct a complete and accurate count of the nation’s population; nonetheless, some degree of coverage error is inevitable because of the inherent complexity of counting the nation’s large and diverse population and limitations in census-taking methods. These census coverage errors can take a variety of forms, including a person missed (an undercount), a person counted more than once (an overcount), or a person who should not have been counted, such as a child born after Census Day (another type of overcount). To further understand the quality of the census and to inform users about it, the Bureau has been evaluating census coverage for more than 50 years. While initial evaluations relied solely on demographic analysis—population estimates based on birth and death rates as well as immigration estimates—modern coverage measurement began with the 1980 Census when the Bureau began also comparing census counts to survey results from an independent coverage measurement sample of the population. Using statistical methods, the Bureau generated detailed measures of the differences among undercounts of particular ethnic, racial, and other groups, which have been referred to as “differential undercounts.” These measures were also generated for the 1990 and 2000 censuses. Although the Bureau considered doing so in earlier decades, it has never used its estimates of coverage error to adjust census data. In 1980, the Director of the Census Bureau decided that potential adjustments would be flawed due to missing and inaccurate data. In 1990, the Bureau recommended statistically adjusting census data; however, the Secretary of Commerce determined that the evidence to support an adjustment was inconclusive and decided not to adjust. For the 2000 Census, a 1999 Supreme Court ruling held that the Census Act prohibited the use of statistical sampling to generate population data for apportioning the House of Representatives. 
The Bureau had planned to produce apportionment numbers using traditional census-taking methods, and provide statistically adjusted numbers for non-apportionment uses of the data such as congressional redistricting and allocating federal funds. The Bureau later determined that its statistical estimates did not provide a reliable measure of census accuracy and could not be used to adjust the non-apportionment census data. The Bureau is not planning to use CCM to adjust the 2010 Census. Instead, CCM will be used to evaluate coverage error to improve the 2020 and future censuses, and will focus on estimating various components of census coverage in addition to net coverage errors—the net effect on coverage after undercounts and overcounts are considered. These components of coverage include correct enumerations, erroneous enumerations (people or housing units that were counted but should not have been), and omissions (people or housing units that were not counted but should have been). The Bureau also plans to include imputations (counts of people and their characteristics that are provided for nonresponding households, usually based on responses from others under similar circumstances, such as from surrounding households). Statistical measurements of census coverage are obtained by comparing and matching the housing units and people counted by the independent coverage measurement sample to those counted by the census in and around the sample areas. The Bureau has developed separate address lists—one for the entire nation of over 134 million housing units that it will use to conduct the census and one for coverage measurement sample areas—and will collect each set of data through independent operations. 
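The matching of census counts to the independent sample described above is, at its core, a dual-system (capture-recapture) calculation. The sketch below uses entirely hypothetical counts and the simplest form of the estimator; the Bureau's actual CCM estimation involves far more elaborate models, post-stratification, and missing-data adjustments.

```python
# Minimal dual-system (capture-recapture) estimate with hypothetical counts.
# Illustrative only; not the Bureau's actual estimation methodology.

def dual_system_estimate(census_count, sample_count, matched):
    """Lincoln-Petersen estimator: N ~ census_count * sample_count / matched.
    Valid only under the independence assumption: being counted in the
    census must not change a person's chance of being counted in the
    independent sample, or vice versa."""
    if matched == 0:
        raise ValueError("no matched cases; estimator is undefined")
    return census_count * sample_count / matched

# Hypothetical block: the census counted 950 people, the independent
# coverage sample counted 900, and 870 people were matched in both.
estimate = dual_system_estimate(950, 900, 870)
print(f"estimated true population: {estimate:.0f}")        # about 983
print(f"estimated net undercount:  {estimate - 950:.0f}")  # about 33
```

If census operations contaminated the sample (violating independence), the matched count would be biased and the estimate with it, which is why the separation safeguards described above matter.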
For the 2010 Census, census operations began collecting population data from households in January 2010 and will continue through the end of July, while CCM operations will collect data by visiting each of the housing units in the coverage measurement sample during an operation called Person Interviewing from August through October. The statistical methodology the Bureau uses to estimate net coverage errors relies on an assumption that the chance that a person is counted by the census is not affected by whether he or she is counted in the independent coverage measurement sample, or vice versa. Because violating this “independence” assumption can bias coverage estimates, the Bureau takes special measures to maintain CCM’s separation from the census, such as developing a separate address list for the coverage measurement sample discussed above. Since our April 2008 report, the Bureau has finalized its plans in key areas of the CCM program including CCM’s goals, the timing of operations, and the timing and types of results to be produced. Planning continues in other areas, such as developing estimation methods, evaluating the CCM program, and implementing its Master Trace Project. Continued progress and adherence to schedule will be important to ensure that the Bureau carries out CCM to meet its goal of improving the 2020 Census. For example, in our 2008 report, we recommended that the Bureau provide decision points and plans for evaluating CCM. In September 2009, the Bureau finalized its list of 22 planned evaluations for the 2010 Census, which included five that address specific methodological or procedural topics within the CCM program. However, the study plans are not due to be completed until April 2010. In addition, while the deadlines for finalizing CCM estimation methods have not yet passed, the Bureau has many of its default plans already in place. Default plans allow the Bureau to move forward on schedule even if new plans have not been developed. 
Table 1 shows the status of the Bureau’s plans for the design of CCM in each of these areas. In September 2009, shortly after taking office, the Director of the Census Bureau asked the staff responsible for CCM to review its CCM design and propose specific changes that would reduce the introduction of nonsampling error—such as human errors made when recording data during interviews—into CCM and its resulting estimates. The staff proposed numerous changes intended to reduce error in collected data. They also proposed an additional research study. The Director approved all of these proposals in mid-December 2009. Key changes included: increasing the reinterview rates for CCM field work to improve quality assurance; increasing training time for short-term workers hired to conduct door-to-door visits during the Person Interviewing operation to improve interview techniques for local or other special situations due to current economic conditions (such as people who became homeless or have had to move frequently during the housing crisis); increasing supervisor-to-employee field staffing ratios to improve quality and monitoring of field work at each level; and adding a telephone-based study to collect information about how well respondents recall information about their residence and possible movement since Census Day. In addition, the decision authorized a nearly 45 percent reduction in the CCM sample size that the Bureau believes would generate the cost savings to pay for the other changes. Our understanding of the issues suggests that these are reasonable efforts to improve survey quality. The Bureau’s reduction in sample size will reduce precision of the estimates, yet the proposed changes should reduce nonsampling errors and thus provide users with more reliable estimates. 
For example, the Bureau expects short-term CCM workers to make fewer mistakes in identifying temporary or unconventional housing units when they have received additional training specific to their local circumstances, such as in areas with large numbers of seasonal or displaced workers. The Bureau’s actions to finalize some areas of CCM program planning are important steps in the right direction. Still, in some cases, it will be important for the Bureau to take additional actions to help ensure the results of CCM are as useful as they could be to inform Bureau decisions on improving future censuses and coverage measurement efforts. For example, the Bureau could better document how CCM results will be used as part of the planning process for the 2020 Census. Indeed, the Bureau has already begun laying the foundation for its 2020 planning efforts. These early planning efforts are described in a series of decision memorandums issued in the summer of 2009, and include milestones leading up to a census test in April 2014, descriptions of planning phases, and a list of the various organizational components that conduct the census. Although these planning documents explicitly state the importance of relying on the 2010 Census Evaluation and Testing program—an ongoing assessment effort separate from CCM that, like CCM, is designed to improve future operations—the Bureau has not yet taken similar steps to integrate the CCM program with 2020 planning. In addition, the Bureau does not have specific plans in its CCM program goals to do anything beyond producing CCM results. Bureau officials have maintained that until it produces CCM results, it is difficult to determine how to use CCM data to improve the design of future decennials. 
While we agree with the Bureau that the results will determine the specifics of any potential design improvements, it is not premature to consider how the earliest results from CCM—scheduled for early 2012— could help inform early planning and decisions. Importantly, by creating a “roadmap” that describes, for example, what the Bureau might learn from CCM or how the results might feed into early 2020 Census planning, the Bureau will better ensure that there are no gaps or overlaps in the use of CCM in early 2020 planning. The Bureau’s Master Trace Project is another area where additional efforts are needed to ensure useful CCM results. The Bureau initiated the Master Trace Project in September 2009, to facilitate the use of census and CCM data for future research. Currently, Bureau data are collected and archived in different types of datasets and systems. The Master Trace Project is intended to ensure that these datasets and systems can be used together, or linked, to support detailed research into the causes of census coverage problems and facilitate research on the possible interactions of future operations. For example, a researcher might want to see if there is a relationship between the Bureau’s employment practices and the magnitude of an undercount in a particular area. In so doing, the researcher may want to compare census payroll, overtime, and other human capital data to the data from that region collected and processed by census and CCM. Such datasets would not ordinarily be linked during the census. The Bureau has not yet taken the steps needed to ensure that such research across different data systems would be possible. The Bureau held a meeting in December 2009 with staff responsible for many major decennial systems and obtained agreement about the importance of data retention for this project; however, the Bureau has not yet resolved how it would make the project happen. 
In particular, the Bureau has not yet completed an inventory of the census databases that might be of potential interest for future research, identified which archived versions might be most useful, or mapped out how they might be archived and linked. Until this is done, it is unclear that Bureau or other researchers will have access to census operational data that they need to fully analyze the census coverage errors that CCM may uncover. Moving forward, it will be important for the Bureau to perform the initial assessment of its data systems, identify gaps in data collection, and identify any other related steps to ensure that key data can be linked. Doing this quickly will also be important as Census 2010 is underway and it could become increasingly difficult to make changes to database structures or archival and data storage plans if the Bureau’s assessments determine that changes are necessary. A third area where the Bureau needs to do additional work is in assessing how the timing of CCM data collection might adversely affect CCM findings. When planning CCM, the Bureau faced the challenge of determining the optimal time to launch the CCM data collection operation, known as Person Interviewing (PI). If the Bureau starts PI too early, it increases the chance that it overlaps with census data collection, possibly compromising the independence of the two different operations and introducing a “contamination bias” error into CCM data. If the Bureau starts PI too late, it increases the chance that respondents will not accurately remember household information from Census Day, April 1, introducing error (known as “recall bias”) in the CCM count. Both types of errors—contamination bias and recall bias—could affect the Bureau’s conclusions about the accuracy of the census. An understanding of the trade-offs between these two types of biases would be important in future decisions regarding the optimal timing of PI. 
In early 2009, based on concerns by the National Academy of Sciences (NAS) and other stakeholders about the relative lateness in the start date of PI and its possible impact on the quality of CCM findings, the Bureau considered whether to start PI 6 weeks earlier than planned. In June 2009, the Bureau decided to keep the originally scheduled start on August 14, 2010. Bureau memorandums and officials justified the decision largely because of concern that it was too late in the planning process to make a change in the complex CCM schedule. The memorandums cited gaps in knowledge about the impact of timing on recall bias, presented research with differing conclusions about the extent of contamination in prior census tests, and justified the recommendation not to change the start date by citing the operational challenges of making the change. Bureau officials have also explained that in 2000, the goal of using coverage measurement to possibly adjust the census created time pressures that forced an early PI; because such time pressures do not exist for PI in 2010, it is scheduled to begin more than 4 months after Census Day. By comparison, during the 2000 Census, the Bureau launched PI in April 2000 and had completed about 99 percent of its data collection by the end of the first week of August 2000, a week earlier than the scheduled 2010 PI start date. An extensive 2000 Census evaluation found no evidence of contamination bias caused by the earlier start of PI in 2000. Related Bureau research since then has also found no significant evidence of contamination bias during census tests, although one test found that census results could be affected. Yet Bureau officials remained concerned about the possibility, since the CCM questions are similar to follow-up questions used in one of the 2010 census follow-up operations. Furthermore, parts of this census operation are new in 2010, and end later than similar operations did in 2000. 
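The timing trade-off at issue here can be framed as choosing a start date that minimizes combined error. In the sketch below, both error curves are invented solely for illustration; they are not Bureau estimates, and the real shapes of these curves are exactly what further research would have to establish.

```python
# Toy model of the PI timing trade-off: contamination bias shrinks as the
# start date moves later (less overlap with census follow-up operations),
# while recall bias grows with the delay since Census Day. Both curves are
# made-up assumptions for illustration, not measured quantities.

def contamination_bias(delay_weeks):
    # assume overlap with census operations ends after week 10
    return max(0.0, 5.0 - 0.5 * delay_weeks)

def recall_bias(delay_weeks):
    # assume memory of Census Day living arrangements decays linearly
    return 0.3 * delay_weeks

best = min(range(25), key=lambda w: contamination_bias(w) + recall_bias(w))
print(f"delay minimizing combined error under these assumptions: week {best}")
```

Under these made-up curves the optimum falls just as overlap with census operations ends; with different curves it could fall anywhere, which is precisely why the trade-off needs to be studied empirically.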
Moving forward, additional research on the trade-offs between recall bias and contamination errors could help the Bureau more fully understand the implications of choosing various start times for PI on the resulting estimates of coverage error and better determine the optimal timing of PI in future censuses. Currently, the Bureau has a telephone-based study planned to measure recall errors, which could provide additional information about when recall errors are more likely to occur. However, this study is limited to certain types of recall error, and the Bureau does not have an evaluation planned to measure possible contamination between the new, much later, parts of census follow-up and CCM data collection or to assess the trade-offs between the biases from starting earlier compared to starting later. Such additional study after the 2010 Census could provide the Bureau better information about the trade-offs in data quality from potential contamination and recall biases and provide a better basis for determining the optimal scheduling of coverage measurement operations. Assessing the accuracy of the census is an essential step in improving current and future censuses. The Bureau has made progress on designing and planning for its CCM program and is continuing work to complete the plan. Additional actions in three CCM planning areas may further improve CCM or its usefulness to the 2020 Census. Specifically, the Bureau has stated the importance of using 2010 evaluation data such as CCM’s for 2020 Census design, but has not yet taken steps to link CCM data to an improved 2020 design. If the Bureau is to best achieve its goal of using CCM to help improve the 2020 Census, it will need to integrate planning for any follow-up work on CCM results or data with the other early planning already underway for Census 2020. 
Second, the Bureau has many different processes that come together in the conduct of a decennial census, and archived data on those processes could provide useful information to researchers trying to figure out what worked well and what did not. The Master Trace Project can help researchers link CCM results and data to potential design changes for Census 2020. Determining which data need to be linked or archived to enable future linkage within the project can help prevent gaps in 2010 data that might hinder the project’s ability to help identify improvements for the 2020 Census. Third, the timing of CCM’s primary data collection operation—Person Interviewing—involves trade-offs between reducing contamination bias and reducing recall error that the Bureau did not have conclusive information on. Since 2010 Person Interviewing is starting 1 week after a similar operation ended in 2000, the chance of introducing recall bias errors into CCM data is higher in 2010 than it was in 2000. Although the Bureau has a study planned to measure some recall errors, there is no study planned to measure contamination between the new parts of census follow-up—which use questions similar to those asked by CCM and finish much later than follow-up did in 2000—and CCM or to assess the trade-offs between the two types of biases in timing decisions. Targeted research after the 2010 Census on the relationship between the timing of data collection and the trade-offs between these types of errors before the 2020 Census and its coverage measurement efforts could help the Bureau better determine the optimal timing of future data collection operations. We recommend that the Secretary of Commerce require the Director of the U.S. 
Census Bureau to take the following three actions to improve the usefulness of CCM for 2020: To help the Bureau achieve its goal of using CCM to improve the 2020 Census, better document links between the 2010 CCM program and 2020 Census planning, integrating the goal of using the CCM program to improve Census 2020, such as with CCM results and data, into those broader plans for 2020. To ensure that Bureau datasets from the 2010 Census can be used with other Bureau datasets to support research that could improve the census and CCM, complete the Master Trace Project’s assessment of how key census and CCM data systems are, or can be, linked to each other; identify any potential data gaps; and identify other related steps for future action. To help the Bureau better determine the optimal timing of future coverage measurement data collection, fully assess the trade-offs between starting the data collection earlier, with the possibility of introducing contamination errors, and starting later, with the possibility of introducing recall errors. The Secretary of Commerce provided written comments on a draft of this report on April 5, 2010. The comments are reprinted in appendix I. Commerce generally agreed with the overall findings and recommendations and appreciated our efforts in helping the Census Bureau develop a successful evaluation plan for the 2020 Census. Commerce also provided additional information and comments on certain statements and conclusions in the report. With respect to our second recommendation to complete the Master Trace Project’s assessment of linking key census and CCM data systems, to identify any potential data gaps, and to identify other related steps for future action, Commerce maintained that it would be taking action to preserve adequate documentation and maximize the amount of data retained from each major decennial system. 
We commend the Bureau for committing to these steps and encourage its follow-through on them and its identification of remaining data gaps and additional steps needed. With respect to our third recommendation to fully assess the trade-offs between two types of error related to starting CCM data collection either earlier or later relative to Census Day, Commerce responded that (1) it is too late to create a new study for the 2010 Census; (2) it considers a Bureau contamination study from 2000 to be definitive; and (3) it has recently developed a study on recall bias to try to measure some of the effects of scheduling CCM data collection at various periods of time following the census enumeration. We agree that it is too late to attempt any additional unplanned data collection during the 2010 Census, and we revised our discussion to clarify our intent that the recommended research be conducted after the 2010 Census. We also recognize the thoroughness of the 2000 contamination study the Bureau cites, commend the Bureau on undertaking additional study of recall bias, and look forward to reviewing its study plans when they are available. However, we recommended research comparing trade-offs between the two types of errors at a variety of start dates for CCM data collection—something the 2000 study did not discuss and something that it is unclear a study of recall bias alone will achieve. Furthermore, as we discussed in our draft report, the Bureau expressed concerns over possible contamination between CCM and new parts of census follow-up in 2010—parts that were introduced after the 2000 study and that were not included in the scope of the 2000 study. We clarified our discussion of this in the report to better focus on the need for research that relates the trade-offs between the two types of error at different timings of data collection. 
Commerce provided additional information that in response to advice from various advisory panels and after additional research, it would soon make public its proposed geographic levels for CCM estimates. We reflected this decision in table 1 of our report. Finally, Commerce provided additional information about its plans to produce highly technical documentation of the results of CCM estimation including modeling, missing data, and errors in the estimates in a series of memorandums as it did for Census 2000. We reflected this decision in table 1 of this report. We are sending copies of this report to the Secretary of Commerce, the Director of the U.S. Census Bureau, and interested congressional committees. The report also is available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Jeff Dawson, Dewi Djunaidy, Ron Fecso (Chief Statistician), Andrea Levine, Ty Mitchell, Melanie Papasian, and Tamara F. Stenzel.
|
Assessing the accuracy of the census is essential given that census data are used to apportion seats in Congress, to redraw congressional districts, and for many other public and private purposes. The U.S. Census Bureau's (Bureau) Census Coverage Measurement program (CCM) is designed to assess the accuracy of the 2010 Census and improve the design of operations for the 2020 Census. In April 2008, GAO recommended that the Bureau identify how it would relate CCM results--where the 2010 Census was accurate and inaccurate--to census operations to improve future censuses. Knowing where the 2010 Census was inaccurate can help inform research to improve the 2020 Census. GAO was asked to examine (1) the status of CCM planning and (2) the effects of design decisions since GAO issued its April 2008 report. GAO reviewed Bureau documents related to CCM design and National Academy of Sciences reports, and interviewed responsible Bureau officials. Since GAO's April 2008 report, the Bureau has finalized plans for 2010 CCM goals, the timing of operations, and the types of results to be produced. Planning continues in other areas, such as developing estimation methods, evaluating the CCM program, and implementing its Master Trace Project, which would enable the Bureau to link its datasets and systems to support a broad range of research. The deadlines for some of these plans have not yet passed, but the Bureau already has default plans in place in case further changes do not occur. In mid-December 2009, the Director decided to make some additional changes to the CCM program to improve the quality of CCM results. GAO found that additional actions on Bureau decisions may make CCM more useful in informing Bureau decisions on future census and coverage measurement efforts: (1) The Bureau's 2020 planning efforts are described in a series of decision memoranda issued in the summer of 2009. 
However, the Bureau has not yet taken steps to integrate CCM results with early 2020 planning to prepare for a census test in 2014. By describing, for example, what the Bureau might learn from CCM or how the results might feed into 2020 Census planning, the Bureau will better ensure that there are no gaps or overlaps in the use of CCM for early 2020 planning. (2) In September 2009, the Bureau began its Master Trace Project, which is intended to ensure that its datasets and systems can be used together to support detailed research into the causes of census coverage problems and facilitate research on the possible interactions of future operations. At the time of this review, the Bureau had not yet completed an inventory of the census databases that might be of potential interest for future research, identified which archived versions might be most useful, or mapped out how they might be archived and linked. Doing this quickly will be important as the census is already underway and it will be difficult to make changes to database structures or archival and data storage plans if the Bureau's assessments determine that changes are necessary. (3) The Bureau reviewed its previous decision to start CCM's Person Interviewing operation later than it did in 2000, and decided in June 2009 not to change it. However, the Bureau does not have a plan to assess the trade-offs in error between earlier and later start dates. Additional research on the trade-offs of different start dates could help the Bureau more fully understand the implications of CCM timing decisions on the resulting estimates of coverage error and better determine the optimal timing of Person Interviewing in future censuses.
|
According to its 2010 Nuclear Energy Research and Development Roadmap: A Report to Congress, NE’s primary mission is to advance nuclear power as a resource capable of meeting the nation’s energy supply, environmental, and energy security needs by resolving technical, cost, safety, proliferation resistance, and security barriers through research, development, and demonstration, as appropriate. NE conducts research aimed at (1) improving the reliability, sustaining the safety, and extending the operational lifetime of existing light water nuclear reactors; (2) supporting the development of the next generation of nuclear reactors, including light-water-reactor-based small modular reactors and advanced reactors, with a focus on affordability; (3) developing sustainable nuclear fuel cycles; and (4) reducing the risk of nuclear proliferation and terrorism. Light-water-reactor-based small modular reactors are smaller in size and energy output than conventional light water reactors—but use the same basic technology for the nuclear reactor core—and offer several potential advantages over existing light water reactors, including lower capital and construction costs through factory fabrication; enhanced safety and security; improved operation times and longer life cycles; and flexibility to be sited at locations that cannot support large nuclear plants, such as isolated areas or sites with limited water supplies. Advanced reactors, including advanced small modular reactors, use innovative nuclear fuels, coolants, and energy systems, and offer the potential for significant advantages over existing light water reactors, including greater energy conversion efficiency, reduced plant size, lower construction and operation costs, and improved safety. NE has mainly conducted advanced reactor R&D on high-temperature reactors and fast reactors. 
High-temperature reactors produce electricity, as well as process heat that can be used for industrial purposes, such as refining petroleum or producing hydrogen, and replace current sources of process heat from burning natural gas or other fossil fuels, which emit greenhouse gases. Fast reactors can use spent nuclear fuel as their fuel source, which reduces the need for long-term storage of spent nuclear fuel, and would more efficiently use uranium, helping reduce nuclear waste. NE conducts nuclear reactor R&D through its Reactor Concepts Research, Development, and Demonstration program, which aims to (1) help advance nuclear power as a resource capable of meeting the nation’s energy, environmental, and national security needs and (2) develop new and advanced reactor designs and technologies that advance the state of reactor technology and improve the economic competitiveness of nuclear power. The program encompasses the following subprograms: The Light Water Reactor Sustainability subprogram is developing the scientific basis to extend the operating life of existing nuclear power plants beyond the current licensing period and ensure their long-term reliability, productivity, safety, and security. This subprogram conducts research into materials aging and degradation, updating instrumentation and controls, and assessing reactor safety margins, among other things. The Advanced Reactor Concepts subprogram supports the development of innovative reactor technologies that may offer improved safety, functionality, and affordability; more efficient energy conversion; increased proliferation resistance and security; and that build upon existing nuclear technology and operating experience. It supports research to reduce technical barriers for advanced nuclear energy systems, and its efforts support various reactor technologies at different maturity levels.
The Advanced Small Modular Reactor subprogram supports the development of innovative small modular reactor designs that potentially offer improved safety, functionality, and affordability; more efficient energy conversion; increased proliferation resistance and security; and simplified operation and maintenance. For example, the program supports research into novel sensors and control systems for multiple reactor units, as well as advanced materials development. Beginning in fiscal year 2015, the Advanced Small Modular Reactor subprogram will be consolidated into the Advanced Reactor Technologies subprogram. Another subprogram coordinates R&D on common issues and challenges that confront other NE R&D efforts to avoid duplication of effort. NE conducts most of this research at 10 DOE national laboratories across the country and, over the past 3 years, has spent an average of about $840 million per year on its mission to advance nuclear power (see table 1). Of this amount, NE provides about $50 million annually to engage U.S. universities through its Nuclear Energy University Program to fund R&D and to build infrastructure and capabilities to enhance universities’ ability to perform research and educate students. NE’s recent advanced reactor R&D efforts began in 2000, when NE convened a group of senior governmental officials from nine countries to discuss the development of such reactors in the United States and internationally. This group, called the Generation-IV International Forum, and the Nuclear Energy Advisory Committee produced a Technology Roadmap for Generation-IV Nuclear Energy Systems in 2002. The intent of the forum was to develop competitively priced and reliable nuclear reactors, while satisfactorily addressing nuclear safety, waste, and proliferation concerns. In response to these efforts, the United States determined that it would fund the development of a high-temperature gas-cooled reactor as its top priority.
In addition, NE funded research into sodium-cooled fast reactors—fast reactors that use sodium to cool the reactor core—as well as a variety of other advanced reactors. The Energy Policy Act of 2005 (EPAct 2005) established in law the Next Generation Nuclear Plant (NGNP) Project to develop a prototype reactor using advanced technology to generate electricity, hydrogen, or both. The law states that the NGNP Project shall consist of research, development, design, construction, and operation of a nuclear reactor prototype, and specifies completion dates for the project’s two phases, as well as certain other requirements, including licensing. Specifically, EPAct 2005 states that Phase 1 of the project requires DOE to, among other things, conduct R&D activities enabling it to select and validate an appropriate technology, culminating in the selection of a technology and initial design parameters, by a target date of September 30, 2011. The law also authorized DOE to submit a report to Congress identifying an alternative date upon which the agency would select the technology and initial design parameters. Phase 2 of the project, in which DOE would develop a final design for a nuclear reactor prototype, apply for licenses to construct and operate the reactor technology, construct the prototype, and begin operations, is to be completed by a target date of September 30, 2021, although DOE is again authorized to submit a report establishing an alternate date for completion. EPAct 2005 also mandated the organization of a consortium of appropriate industrial partners that will carry out cost-shared R&D, as well as design, construction, and operation on behalf of the NGNP Project, and that the NGNP prototype reactor be located at the Idaho National Laboratory in Idaho Falls, Idaho.
In February 2006, the Nuclear Energy Advisory Committee recommended accelerating the NGNP Project schedule to, among other things, make the project more attractive to industry. However, also in February 2006, the administration announced the newly formed Global Nuclear Energy Partnership program, which sought to encourage the expansion of nuclear energy while addressing the burden of spent fuel disposal and the risk of nuclear weapons proliferation and, according to a 2008 National Academy of Sciences report, led to reduced funding for the NGNP Project. Under the Global Nuclear Energy Partnership program, DOE focused advanced reactor R&D activities on developing sodium-cooled fast reactors, and it changed its approach from designing and building a small engineering-scale demonstration of the reactor and reprocessing facility, led by DOE’s national laboratories, to accelerating its work with industry to demonstrate commercially viable sodium-cooled fast reactor technology in full-scale facilities. In 2008, we reviewed NE’s Global Nuclear Energy Partnership program and found that DOE’s original approach to the domestic component of the program—building engineering-scale facilities—would meet the program’s objectives if the advanced spent nuclear fuel recycling technologies on which it focused could be successfully developed and commercialized. However, we also reported that the approach lacked industry participation—potentially reducing the prospects for eventual commercialization of the technologies. NE favored an accelerated approach of building full-scale facilities that would likely require using unproven evolutions of existing technologies that would reduce the long-term benefits of the sodium-cooled fast reactor. We recommended that NE reassess its preference for an accelerated approach. In response, DOE decided, in 2009, to no longer pursue the Global Nuclear Energy Partnership program.
However, NE continued research related to sodium-cooled fast reactors, with a new focus on long-term R&D coordinated with NE’s fuel cycle research. Also in 2008, the National Academy of Sciences’ National Research Council issued a report reviewing NE’s R&D efforts and concluded that the success of any particular advanced reactor technology in the United States would depend on policy decisions and other factors beyond NE’s control. In addition, the report concluded that NE’s resources were barely adequate for basic studies related to the NGNP Project and entirely inadequate for (1) exploring the sodium-cooled fast reactor at a research level and (2) investigating other reactor technologies. The report also stated that selecting a specific technology to develop from among the options known at that time would have been premature. In April 2010, NE issued its Nuclear Energy Research and Development Roadmap: A Report to Congress, which provides a basis to guide NE’s internal programmatic and strategic planning for research going forward. Also in April 2010, NE issued Next Generation Nuclear Plant: A Report to Congress, which presents the historical background of the project; details the project’s spending; and discusses the principal investments in design, licensing, and research. As discussed in this report, NE selected the high-temperature gas-cooled reactor as the advanced reactor technology to develop under the NGNP Project.
In its 2011 Phase 1 review of the project, the Nuclear Energy Advisory Committee reported that the NGNP Project was not ready to proceed to the complete set of Phase 2 activities, citing the absence of industrial partners willing to commit to share in the cost of constructing the prototype reactor; the need for more detailed design and R&D; the need to resolve key licensing issues; constraints imposed by EPAct 2005, such as the Idaho site requirement; the absence of a public-private partnership, as required; and an unrealistic project plan given the limited amount of conceptual design work completed. The committee recommended that NE continue conducting its Phase 1 R&D, focusing on one technology and completing design work; initiate the partnership with the nuclear industry; and continue to engage the Nuclear Regulatory Commission (NRC)—the federal agency that licenses and regulates the nation’s civilian use of nuclear reactors—to ensure that the regulatory framework for this new reactor technology would be ready to support commercialization. The committee also recommended eliminating the requirement that the NGNP prototype be sited at Idaho National Laboratory. In October 2011, DOE submitted a letter to Congress in response to EPAct 2005 requirements to transmit the committee’s report and to report on certain requirements to complete Phase 1 of the NGNP Project. As noted above, EPAct 2005 required, by September 30, 2011, that DOE select a technology and initial design parameters; alternately, DOE was to submit a report to Congress identifying a new date upon which the agency would do so. In the 2011 letter, DOE stated that it was notifying Congress that the department had not selected the initial design parameters for the NGNP by September 30, 2011, and that it would not proceed with Phase 2 design activities at that time.
The letter also stated that DOE would focus on remaining applied R&D, work with NRC on a licensing framework, and establish the public-private partnership—in essence, to follow most of the advisory committee’s recommendations—until conditions favorable to completing the NGNP Project warranted a change in direction. Further, DOE asserted that the partnership—rather than DOE—would select initial design parameters and would provide an update to the project’s schedule and milestones. In its fiscal year 2014 budget request to Congress, DOE indicated that NE would continue to fund some NGNP research activities under the Advanced Reactor Concepts subprogram. NE’s approach to its advanced reactor R&D is to support research on technologies associated with three main types of advanced reactors: high-temperature gas-cooled reactors; liquid-metal-cooled fast reactors, including the sodium-cooled fast reactor; and fluoride-salt-cooled high-temperature reactors. NE also conducts research supporting other less-developed advanced reactor technologies and supports the development of advanced small modular reactors. NE’s approach to advanced reactor R&D addresses broad programmatic goals—including improvements in the economics, safety, and proliferation resistance of nuclear power plants—and aims to develop technologies to reduce nuclear waste and greenhouse gas emissions. This approach provides several advantages, including flexibility in responding to changes in future U.S. energy policy or other circumstances. NE’s approach to its advanced reactor R&D efforts is primarily to support research on three main advanced reactor technologies: high-temperature gas-cooled reactors; liquid-metal-cooled fast reactors, including the sodium-cooled fast reactor; and fluoride-salt-cooled high-temperature reactors. On a smaller scale, NE also conducts or funds research supporting other less-developed advanced reactor technologies.
In addition, NE conducts research supporting a variety of technologies related to the development of advanced small modular reactors. In discussions with representatives from the nuclear industry, members of the National Academy of Sciences’ National Research Council, and others, we found that views frequently varied on which specific technologies NE should be supporting through its R&D efforts. High-temperature gas-cooled reactors produce energy in the form of high-temperature heat—which can produce electricity or be used as process heat—and differ from existing light water reactors in three key features: using (1) helium gas instead of water as a coolant; (2) graphite instead of water to slow neutrons and sustain the nuclear reaction; and (3) advanced nuclear fuel, which offers safety benefits at high temperatures. These three features make high-temperature gas-cooled reactors capable of operating at higher temperatures than existing light water reactors, thus offering a broader range of applications to industrial processes, as well as higher heat-to-electricity energy conversion efficiencies than are achievable with the lower operating temperatures of light water reactors. The high-temperature gas-cooled reactor is an advanced reactor technology that is expected to be helpful in limiting greenhouse gas emissions, according to NE officials, DOE laboratory staff, and NE documents. In addition, the technology offers inherent and passive safety features—including advanced fuel, helium coolant, and passive heat removal—that are especially important in a post-Fukushima world, according to NE officials. NE’s current involvement with high-temperature gas-cooled reactor R&D began in 2002, when, under the Generation-IV International Forum, high-temperature gas-cooled reactor systems were selected as one of six advanced reactor technologies to be developed by the international consortium.
The United States was one of six countries, plus the European Union, that took the lead in developing this reactor technology, according to NE officials. NE chose to pursue high-temperature gas-cooled reactors because, according to NE officials, laboratory staff, and NE documents, they met the criteria of its advanced reactor programs, including potential improvements over existing light water reactors in safety, economic viability, and their promise for reducing greenhouse gas emissions. NE officials told us that the high-temperature gas-cooled reactor was also chosen because it had operating history in the United States and throughout the world, which meant that many important technical details had already been resolved. At the time, NE concluded that its research would build on existing operational experience, thus increasing the likelihood that high-temperature gas-cooled reactors could be developed and commercialized. NE envisioned that its research would further demonstrate the technical and economic viability of the high-temperature gas-cooled reactor technology. Pursuant to EPAct 2005, NE is to undertake research, development, design, construction, and operation of an advanced nuclear reactor prototype with a targeted completion date of September 30, 2021. From 2005 to 2011, under Phase 1 of the NGNP Project, NE spent more than $500 million on R&D in support of high-temperature gas-cooled reactor technologies. This research was conducted primarily at DOE’s national laboratories, and it focused on preliminary reactor design work, developing new reactor fuel, and testing high-temperature materials. The NGNP Project also collaborates with universities and industry on some R&D activities, and NE has coordinated with NRC to conduct R&D necessary to design and license high-temperature gas-cooled reactors in the United States.
The last high-temperature gas-cooled reactor was licensed in the United States in 1973, and many questions remain about the process by which NRC will consider license approval for these reactors. According to NE and NRC officials, NE began consulting with NRC in the mid-2000s as NRC worked to (1) update its policy on the regulation of advanced reactors and (2) produce a report to Congress on its advanced reactor licensing strategy for NGNP, which was jointly published in 2008. During this time, NE shared detailed technical information and supporting materials with NRC, information meant to mitigate risks associated with the licensing process, according to NE and NRC officials. Although DOE determined that it would not proceed to Phase 2 design activities for the NGNP Project in 2011, NE continues to fund research on some aspects of a high-temperature gas-cooled reactor, including the testing of advanced reactor fuel and high-temperature materials such as graphite. We found varying views on the potential economic feasibility of high-temperature gas-cooled reactors. Some industry representatives and a member of the National Academy of Sciences’ National Research Council that we interviewed questioned the appropriateness of NE continuing to fund high-temperature gas-cooled reactor research, citing concerns over the reactor’s economic viability in relation to the current cost of natural gas. One industry representative cited studies showing that the technology is not economically viable, as well as the fact that NE has, to date, been unable to get industrial partners interested in sharing project development costs.
However, an economic analysis by the NGNP Industry Alliance—an international consortium of potential end users, owner-operators, and technology companies brought together to partner with NE and commercialize the high-temperature gas-cooled reactor—concluded that the high-temperature gas-cooled reactor would be economically viable under certain market conditions. In addition, NE has conducted a series of feasibility studies, including detailed economic analyses and studies of potential industrial applications for the process heat, to demonstrate conditions under which the high-temperature gas-cooled reactor technology becomes economically competitive. Representatives from the NGNP Industry Alliance said they believe that, with favorable trends in natural gas prices, the high-temperature gas-cooled reactor will be economically viable by the time the prototype reactor is built in the early 2030s, as is projected under the NGNP Industry Alliance’s current project time frames. Liquid-metal-cooled fast reactors, including sodium-cooled fast reactors, use liquid metal to cool the reactor core. In addition to producing electricity, a primary benefit of these reactors is in nuclear waste management. Fast reactors can use reprocessed spent nuclear fuel as their energy source, which would help the United States reduce the amount of spent fuel from light water reactors that would need to be stored or eventually placed in a geologic repository. Fast reactors also minimize nuclear waste generation by significantly improving fuel use efficiency as compared to traditional light water reactors.
NE identified the sodium-cooled fast reactor as a reactor technology of interest in 2002 under the Generation-IV International Forum because, according to NE officials, it met the criteria of its advanced reactor programs, including significant advances in proliferation resistance, its potential for improving the sustainability of the nuclear fuel cycle, and its management of highly radioactive waste elements. NE continues to conduct research in support of sodium-cooled fast reactors because of their advantages for addressing nuclear waste and because reactors with the same basic technology have been built and operated in the United States and around the world since the 1960s. In fact, several other countries, including Japan and France, had or currently have operating sodium-cooled fast reactors. We also found varying views on sodium-cooled fast reactors among industry representatives, members of the National Academy of Sciences’ National Research Council, and others that we interviewed. Some members of the National Academy of Sciences’ National Research Council and an industry representative cited concerns over the safety of sodium-cooled fast reactors—including the highly reactive nature of the sodium in the presence of water and the threat of sodium fires—and believe that these safety issues may never be fully overcome. Moreover, some industry representatives and members of the National Academy of Sciences’ National Research Council told us that a fast reactor technology market does not exist in the United States, citing other more cost-effective options for storing spent nuclear fuel, including storage in aboveground casks or water pools, as is the current practice at U.S. nuclear power reactors. These individuals believe that NE’s sodium-cooled fast reactor research is ill-advised. In contrast, NE officials and some industry representatives that we interviewed believe that remaining technical challenges with the sodium-cooled fast reactor can be overcome.
In addition, NE officials said it is conceivable that changes in government policy for handling spent nuclear fuel in the United States will create a market for fast reactors, as has happened in the United Kingdom. Moreover, NE believes that the sodium-cooled fast reactor, or other similar fast reactor technology, may be instrumental in efforts to develop a sustainable nuclear fuel cycle. In 2011, NE undertook a comprehensive study of various options for improving the sustainability of the nuclear fuel cycle, which would potentially entail using fast reactor technology and waste reprocessing to create nuclear power systems that better manage and reduce the generation of nuclear waste when compared to a once-through light water reactor fuel cycle. Although this study did not consider specific advanced reactor technologies, a closed fuel cycle may require using an advanced fast reactor technology, such as the sodium-cooled fast reactor. The fluoride-salt-cooled high-temperature reactor design takes advantage of the physical characteristics of liquid-salt coolant to enable the development of a high-temperature system that is scalable to larger power and able to operate at lower pressure and higher power density than the helium-cooled high-temperature gas-cooled reactor. With these characteristics, according to NE officials and DOE documents, these reactors could provide potential safety benefits and increased efficiency over existing light water reactors while maintaining the benefit of providing both electricity and process heat for industrial applications. NE officials told us that this mix of characteristics is the reason why NE provides limited funding for research into fluoride-salt-cooled high-temperature reactors. However, the fluoride-salt-cooled high-temperature reactor technology is not very mature or well tested, and because of this is considered more of a long-range advanced reactor technology, according to NE officials.
NE funds R&D into fluoride-salt-cooled high-temperature reactors mainly through NE’s Nuclear Energy University Program, with the research primarily conducted at universities across the country. Two industry representatives we interviewed took issue with NE for funding a reactor technology that is unproven and that in their view has little chance of ever being built. In response, NE officials told us that the potential benefits of the fluoride-salt-cooled high-temperature reactor over other advanced reactor technologies warrant providing limited funds and utilizing university research capabilities. NE conducts or funds R&D on other advanced reactor technologies on a small scale, mainly to assess their potential and better characterize their performance capabilities. NE officials told us that most of these technologies have some unproven aspects, operate in novel ways, or have other characteristics that increase the risk associated with their development. NE supports research into some of these potentially transformative, long-term technology options through its Nuclear Energy University Program. For example, in 2013, the program funded research to assess the feasibility of an advanced reactor fueled with depleted uranium, a design offering a 30-fold increase in uranium ore utilization versus contemporary light water reactor designs. NE also funds research into promising advanced reactor technologies through the Advanced Reactor Concepts Technical Review Panel process. Through this process, NE identifies R&D needs for potentially viable advanced reactor technologies to inform NE advanced reactor R&D funding decisions. A goal of the process is to facilitate greater engagement between DOE and the nuclear industry.
NE first solicited information on advanced reactor proposals from industry in February 2012, after which a review panel made up of experts from national laboratories, universities, and industry reviewed the proposals against established evaluation criteria, including safety, market attractiveness, economics, proliferation risk, waste generation, security, and potential regulatory challenges. The panel’s assessment of market attractiveness focused on the proposed technologies’ ability to be competitive in the marketplace, and it included variables such as efficiency, initial capital costs, and economic factors such as construction, manufacturing, and operating costs and uncertainties, as well as the resulting cost of electricity, according to the Technical Review Panel report. The objective of the Technical Review Panel process was to evaluate the viability of the technologies, understand the R&D needs of each, and prioritize research to support their development and commercialization. After R&D needs and priorities were identified, NE issued a funding opportunity announcement, competitively selected four projects, and provided a total of $3.5 million in funding for those projects, according to NE officials. Many nuclear industry representatives we interviewed applauded NE’s effort and told us that this process was an effective way for NE to collaborate with industry and that it begins to address a long-standing industry concern that NE’s R&D efforts did not coordinate with industry or meet industry needs. However, these industry representatives also stated that the $3.5 million in R&D funding was insufficient to meaningfully address the need for collaboration between NE and industry, as it was only enough to fund a very small number of R&D activities. Notably, Congress provided NE with an additional $12 million to support a continuation of this effort.
According to NE officials, NE has issued another industry solicitation and will use the information gathered to make additional industry cost-shared R&D awards early in fiscal year 2015. NE also conducts research on advanced small modular reactors with the goal of supporting the development of innovative small modular reactor designs that offer improved safety, functionality, and affordability. These R&D efforts support advanced small modular reactors that offer simplified operation and maintenance, more efficient energy conversion, and increased proliferation resistance and security. More specifically, NE funds research on advanced sensors, instrumentation and controls, control systems for multiple units, advanced materials, and other major system components. In addition, NE funds efforts to create standards and codes for small modular reactor materials to support the eventual licensing of these advanced reactor technologies. In 2012 and 2013, through its Small Modular Reactor Licensing Technical Support program, NE issued funding opportunity announcements for cost-sharing with industry for the development of small modular reactor designs—including small modular reactors based on light water reactor technology, as well as advanced small modular reactors—to support the program’s vision to provide additional nuclear power options that offer more flexibility in financing, siting, and end-use applications than large light water reactor designs. Under the cost-sharing arrangement for each funding opportunity, DOE is supporting design development, first-of-a-kind engineering, experiments, and analysis in support of gaining design certification approval from NRC for the small modular reactors so that commercial deployment of the first small modular reactor can begin.
Industry proposals under these announcements were judged by independent selection panels based on a series of criteria, including the extent to which the design incorporates safety, operability, efficiency, economics, and security characteristics that exceed the capabilities of current reactor designs; the likelihood of expeditiously achieving design certification and deployment; the overall quality of the project plan and business approach; and other factors. Based on our review of the two funding opportunity announcements, however, we found that they differed in the economic information that NE required for proposals. In the 2012 announcement, NE more directly addressed economics and marketability by requiring applicants to propose business plans "to meet expanding domestic electricity requirements at a competitive price" and to "provide their plan to achieve successful commercial deployment" of the technology. By contrast, the 2013 announcement indicated that the economic criteria used to evaluate proposals would be based on the designs' construction, fabrication, deployment, and operational costs. NE officials told us that these criteria indirectly assess the economics and marketability of these technologies and that the type of economic information received from applicants in 2012 was very preliminary and did not provide a good discriminator with which to evaluate proposals. In addition, these officials stated that it was incumbent on the applicants to ultimately assure marketability, as they were providing most of the funding and have a profit motive. For both funding opportunity announcements, the panel's evaluation resulted in choosing a small modular reactor design based on conventional light water reactor technology. 
Some industry representatives, members of the National Academy of Sciences’ National Research Council and the Nuclear Energy Advisory Committee, and NE officials told us that this selection was an appropriate choice because it has a significantly better chance of being licensed and constructed in the required time frame, as compared to advanced small modular reactor designs that are not based on conventional light water reactor technology. While the broad goals of NE’s advanced reactor R&D efforts are to improve the economics, safety, and proliferation resistance of nuclear power plants, the R&D efforts also aim to develop advanced reactor technologies that can prepare the United States to address policy objectives such as reducing nuclear waste and greenhouse gas emissions. NE’s approach to advanced reactor R&D is to conduct research in support of multiple advanced reactor technologies. According to NE officials and documents, because NE’s approach to advanced reactor R&D has multiple goals and seeks to address several different policy objectives, NE works on multiple technologies simultaneously. A key objective of NE’s advanced reactor R&D efforts is to conduct research to remove technology barriers or reduce technology risks, while collaborating with industry and academia, with the ultimate goal for industry to take the results of NE’s research to the next step of development and commercialization. NE focuses on R&D that industry does not have the means to carry out, according to NE officials, with the expectation that the research will reduce financial risks to industry and thereby increase the affordability of industry investment in new nuclear technologies. In addition, NE engages and collaborates with NRC on issues related to the eventual licensing of advanced reactors, including understanding the likely scope and extent of R&D necessary to support the licensing process. 
While advanced reactors are attractive for many reasons, NE carries out research on a variety of reactors because, in part, different reactor types can address particular objectives, according to NE officials. For instance, fast reactors can be better at addressing the nuclear waste issue than some other advanced reactors, while high-temperature gas-cooled reactors provide process heat and may be a better solution for addressing greenhouse gas emissions. According to NE officials, the development of fast reactors, such as the sodium-cooled fast reactor, is likely to play a critical role in managing spent nuclear fuel if and when the United States decides to reprocess and use its spent nuclear fuel rather than store it at reactor sites, as is the current practice at U.S. nuclear power reactors, or isolate it in a geologic repository underground, as has been proposed. To remain aware of industry’s R&D needs and international nuclear energy developments, NE regularly collaborates with industry and international organizations, according to NE officials and NE documents. NE officials told us that NE regularly collaborates with industry on specific R&D projects by sharing technical data and information. For example, NE is currently collaborating with industry on advanced fuels and materials, among other things. NE officials told us they work with industry and have conversations regarding specific R&D activities. According to NE officials and some industry representatives that we interviewed, this type of collaboration has been increasing in recent years, and one industry representative stated that such collaboration is critically important to ensuring that NE’s activities are relevant for industry. One way that NE has recently collaborated with industry, according to NE officials, was through the Advanced Reactor Concepts Technical Review Panel process. 
Some industry representatives we talked to stated that this review panel process was beneficial to both industry and NE because it helped inform NE of industry R&D needs and because it has provided some funds to industry to carry out research on promising new technologies. Industry officials also told us that the process has opened up some new channels of communication between NE and industry. However, industry representatives also stated that, although this collaboration with industry is beneficial, NE could be doing more to ensure that its R&D is more fully aligned with industry needs. For example, according to one industry official, NE conducts some research that, while interesting and potentially beneficial, has little utility for industry's current needs. NE carries out international collaboration through ongoing meetings of the Generation-IV International Forum, through the International Atomic Energy Agency, through the Organisation for Economic Co-operation and Development (OECD), and through bilateral agreements with many countries around the world, including Canada, the Russian Federation, the People's Republic of China, Japan, the Republic of Korea, and countries in the European Union. NE officials cited several examples of such collaboration, such as with the People's Republic of China on high-temperature gas-cooled reactors; with France on its sodium-cooled fast reactor development project; with Japan on advanced materials for sodium-cooled fast reactors; and with the Republic of Korea on sodium-cooled fast reactors. NE's approach to advanced reactor R&D provides several advantages, primarily flexibility in responding to changes in future U.S. energy policy or other circumstances, according to NE officials. 
The officials said they believe that conducting research in support of multiple advanced reactor technologies gives the agency the flexibility to respond to external factors affecting the direction of their advanced reactor R&D efforts, including changes in U.S. energy policy, energy markets, or other areas. Specifically, NE officials told us that the current approach positions NE to respond to changes in U.S. energy policies, such as policies for managing the nation’s nuclear waste or controlling greenhouse gas emissions. Changes in either of these policies would affect the direction of NE’s advanced reactor efforts, according to NE officials. For instance, a policy calling for the United States to manage nuclear waste by reprocessing spent nuclear fuel and reusing it as reactor fuel would result in NE focusing efforts and concentrating resources on developing and deploying fast reactor technologies. NE officials told us that the current practice of storing nuclear waste in aboveground facilities at nuclear power plants across the country will eventually be changed, and waste will either be moved to long-term underground repositories, or reprocessed and burned in fast nuclear reactors. NE officials told us that ongoing research into fast reactor technologies is important so that NE is positioned to react to changes in U.S. policy toward the handling of nuclear waste, including waste that has already been generated and waste that continues to be generated. Similarly, NE officials said that policies that affect the prices of various energy sources would have an effect on the commercial attractiveness of high-temperature reactors, including high-temperature gas-cooled reactors. 
For instance, officials cited the possibility of the imposition of a carbon tax to control greenhouse gas emissions, or the possibility of natural gas prices rising, either of which would make nuclear energy more economically competitive and increase the attractiveness of the high-temperature gas-cooled reactors that produce both electricity and process heat for industrial applications. These industrial applications currently rely heavily on natural gas or coal plants as their source for high-temperature process heat. NE officials stated that natural gas prices in other countries are already at levels where the high-temperature gas-cooled reactors are projected to be economically competitive and that this has resulted in some interest from outside the United States in the development of this technology. According to NE officials, another advantage of NE's approach to advanced reactor R&D is that NE is able to maintain an employee base with knowledge and expertise on a wide variety of reactor technologies. NE officials told us that maintaining staff expertise is important so NE can continue to conduct research on the various technologies, train the next generation of scientists and engineers, and be ready to support the production of prototype reactors when the time comes. In addition, maintaining a level of expertise in a variety of advanced reactor technologies means that NE can engage with, monitor, and support other countries as they develop advanced reactor technologies. These officials said this is important because other countries are actively developing advanced reactor technologies, and the United States needs scientists that can understand how those reactors operate, in part, to judge their safety and nuclear proliferation risks. 
NE officials also said that conducting R&D on several types of advanced reactors simultaneously, rather than focusing on a single reactor type, gives NE the ability to fund R&D supporting promising but unproven reactor technologies. For instance, NE is funding limited research on lead-cooled fast reactors, which offer the potential for improved safety and proliferation resistance over other advanced reactor technologies but have some unproven technologies and components, according to NE officials. Similarly, NE is funding research in support of an advanced small modular reactor based on fast reactor technology that would potentially address the nuclear waste issue and also provide process heat for industrial applications. NE officials said that it is important to have funds available to support these and other potentially game-changing technological breakthroughs. However, in its June 2013 report, the Nuclear Energy Advisory Committee was critical of NE's approach, saying that NE needs to better prioritize its R&D efforts on a smaller number of advanced reactor technologies to focus research funding on the ultimate goal of deploying an advanced reactor prototype. Although NE selected the technology to develop under the NGNP Project, many members of the National Academy of Sciences' National Research Council, members of the Nuclear Energy Advisory Committee, and industry representatives we interviewed agree with NE's approach to advanced reactor R&D because the time is not right for NE to move to the deployment phase. For instance, representatives from industry and the Nuclear Energy Advisory Committee told us that uncertainties around current policies for handling nuclear waste and controlling greenhouse gases do not make a compelling case for choosing an advanced reactor technology to deploy as a prototype. 
The 2008 National Academy of Sciences' National Research Council review of NE's advanced reactor R&D efforts agreed with NE's approach to advanced reactor R&D, saying that there are several policy matters and other questions—undetermined nuclear waste management options, unformulated environmental policy, ongoing work of other countries on advanced technologies, and unclear nonproliferation regimes—that will affect NE's decisions and priorities. This review team stated that, given these unknowns, it would be premature to select a winning technology from among current options. In addition, in January 2012, the President's Blue Ribbon Commission on America's Nuclear Future recommended having the United States continue multiple near-term (i.e., light water reactor) and long-term (e.g., small modular reactor, sodium-cooled fast reactor, high-temperature gas-cooled reactor) R&D efforts until NE could defensibly select technologies that would meet certain regulatory and policy requirements (e.g., safety, environmental protection, security, and nonproliferation). Moreover, members of the Nuclear Energy Advisory Committee and the National Academy of Sciences' National Research Council, and representatives from industry, told us that current NE funding levels would prohibit NE from deploying a prototype reactor even if NE chose an advanced technology to deploy. Some of them said that NE is correctly positioning itself to be prepared to deploy a prototype reactor in the long term as policies or energy markets change. One Nuclear Energy Advisory Committee representative said that the United States could focus its advanced reactor R&D efforts quickly in response to a policy change or other congressional direction, provided that NE also saw increased funding. NE uses internal and external reviews to set program and funding priorities for advanced reactor R&D and to evaluate progress toward program goals. 
However, NE does not have a strategy for overcoming barriers to deploying an advanced nuclear reactor prototype, increasing the likelihood that such a reactor will not be built by the 2021 target date specified in EPAct 2005. Not deploying a prototype carries certain risks, including waning U.S. influence in the safe operation of nuclear plants internationally and potential loss of certain knowledge and expertise. NE takes a number of steps to plan and prioritize its advanced reactor R&D efforts and evaluate progress toward program goals. Before its annual program planning meetings, NE and national laboratory staff develop a list of R&D efforts considered to be priorities. NE management reviews this information in light of program goals, including long-term goals described in NE’s 2010 R&D Roadmap, program funding, and schedules, according to NE officials. Once the research priorities are established and approved by management, the individual laboratories develop detailed work plans, which describe the objectives and scope of the work to be performed. These work plans are reviewed to ensure that the proposed work is aligned with NE’s mission and that the work can be accomplished within the allotted budget and time frames, according to NE officials. All of the approved work plans are then entered into NE’s performance management system—the Program Information Collection System—which allows NE to track progress toward budget and schedule milestones on an ongoing basis. According to a laboratory staff member, this system tracks progress toward short-term goals—on a monthly basis—and long-term goals—on yearly, 3-year, and 5-year time frames. NE monitors and evaluates its advanced reactor R&D activities on an ongoing basis through the Program Information Collection System and conducts program reviews—monthly, quarterly, and annually—to assess progress toward program goals, according to NE officials. 
For example, officials from the Advanced Reactor Concepts and Advanced Small Modular Reactor subprograms hold monthly progress review meetings to discuss, among other things, program updates, technical highlights, and budget and milestone status updates. The officials use monthly status tracking reports generated by the performance management system as part of these reviews, in which officials review cost and schedule performance data. In addition to the monthly meetings, officials from the Advanced Reactor Concepts and Advanced Small Modular Reactor subprograms typically conduct four in-depth reviews each year, according to NE officials. These reviews focus on one or more specific areas of research, and officials discuss progress toward goals, important issues or problems, and plans going forward. For example, the meeting minutes from the quarterly review of the Advanced Reactor Concepts and Advanced Small Modular Reactor subprograms in July 2013 show that officials discussed accomplishments and priorities for the upcoming fiscal year and conducted in-depth discussions of certain program areas and overviews of others. In addition, NE conducts annual reviews of activities across multiple subprograms and topics to ensure that NE's efforts are complementary and nonduplicative, and also to gain insight into areas of potential collaboration. For example, during its annual review of the nuclear reactor R&D efforts in March 2014, NE officials discussed progress on fuels for the high-temperature gas-cooled reactor, advanced reactor licensing, and advanced reactor materials for small modular reactors, among other things. On a less formal basis, management officials at the national laboratories are in frequent communication with NE management through weekly teleconferences to provide regular progress updates and provide information on unforeseen circumstances or challenges, according to NE officials. 
NE also takes steps to coordinate efforts across its R&D programs and subprograms to leverage experience and funding, as well as to reduce redundant R&D activities. Officials from the Advanced Reactor Concepts and Fuel Cycle subprograms stated that they frequently coordinate with each other because their R&D efforts are interdependent. For example, the Fuel Cycle subprogram is conducting R&D on accident tolerant fuels that will be used for advanced reactors, so coordination between the Fuel Cycle subprogram and the Advanced Reactor Concepts subprogram is crucial, according to NE officials. To further help ensure that R&D efforts are coordinated and to minimize redundancies, NE established the Nuclear Energy Enabling Technologies program in fiscal year 2011. The program is designed to conduct R&D on crosscutting technologies that complement NE’s activities to support and enable the development of new advanced reactor designs and fuel cycle technologies. NE created the program to better coordinate and integrate R&D activities after officials identified some similar efforts being performed on crosscutting areas, such as materials, across more than one program, according to NE officials. Through this program, NE has awarded over $9 million to support R&D projects focused on reactor materials, advanced sensors and instrumentation, and advanced methods for manufacturing, among other things. NE determines which R&D efforts are conducted by the Nuclear Energy Enabling Technologies program by reviewing R&D proposals submitted by different groups, including the national labs, universities, research institutions, and industry, according to NE documents. Specific R&D projects are selected based on common needs of programs and subprograms, with each selected project required to support at least three programs or subprograms. 
Others also periodically conduct external reviews of NE's advanced reactor R&D to inform its planning and prioritization efforts and to assess the progress of its R&D activities. Most prominently, according to NE officials, the Nuclear Energy Advisory Committee provides NE with independent advice and recommendations on complex science and technical issues that arise in planning, managing, and implementing NE's R&D activities. The Nuclear Energy Advisory Committee typically meets twice annually with NE management to discuss its reports and recommendations. The Nuclear Energy Advisory Committee's subcommittee on Nuclear Reactor Technology is intended to provide expert guidance to NE on both the short-term and long-term direction of its R&D efforts on reactor technologies. NE officials and Nuclear Energy Advisory Committee representatives told us that the committee has been beneficial in providing expertise to NE and that NE has been responsive to the committee's recommendations. In 2011, a Nuclear Energy Advisory Committee review of NE's R&D efforts on the NGNP Project determined that NE should not move forward with the complete set of Phase 2 activities of the project, citing constraints imposed by EPAct 2005 and difficulties finding industry partners. Subsequently, in 2011, NE informed Congress that it would not proceed with Phase 2 design activities of the NGNP Project until circumstances warranted a change in direction. NE's efforts have also been reviewed by other outside entities, including the Secretary of Energy's Advisory Board, which provides advice and recommendations to the Secretary of Energy on various topics. 
For example, in 2012, the Secretary of Energy requested the board identify areas in which standards for safety, security, and nonproliferation should be developed for small modular reactors; identify challenges, uncertainties, and risks to commercialization; and provide advice on approaches to manage these risks and accelerate deployment of these reactors. The board determined that the commercialization of small modular reactors was likely to produce multiple benefits for the country, including helping provide for a more reliable power grid with more widely distributed power generation once current light water reactors are retired; supporting clean generation and reduced carbon emissions; and helping preserve influence of the United States on nuclear nonproliferation issues. The board stated that to deploy small modular reactors widely, the nation must develop a robust small modular reactor industry that can manufacture cost-competitive small modular reactors that meet U.S. regulatory standards, and that the primary risk for commercialization of these reactors, beyond design certification and licensing, is the ability to drive the plant costs down sufficiently to become competitive with other energy sources, such as natural gas, without compromising safety and security. To develop this industry, according to the board, the U.S. government will likely have to play a significant financial role beyond the Small Modular Reactor Licensing Technical Support program. Although NE’s primary mission is to advance nuclear power through research, development, and demonstration, its deployment of an advanced reactor prototype under the NGNP Project is unlikely in the foreseeable future. Among the different advanced reactor technologies currently supported by NE R&D, the high-temperature gas-cooled reactor technology is the most likely to be deployed and commercialized in the near term, according to an NE planning document. 
NE officials said that the likelihood is based on the wide range of potential market applications to industry of electricity and process heat and is supported by substantial government investments in the technology's development, including testing of materials, fuels, and other components. NE has consulted with the NGNP Industry Alliance on the project, including discussing the alliance's plan for proceeding with development of a prototype reactor. In addition, NE has done market research on potential industrial applications for the process heat. Further, NE established a contract in 2013 with the NGNP Industry Alliance to develop economic analyses detailing how industry may best engage in developing and commercializing high-temperature gas-cooled reactor technologies. According to laboratory staff, development and testing of the advanced fuel for the high-temperature gas-cooled reactor has progressed positively, and other research on high-temperature materials and other components has produced positive results. In 2011, DOE informed Congress that it would not proceed with Phase 2 design activities for the NGNP Project until circumstances warranted a change in direction. According to NE officials, laboratory staff, and representatives of the NGNP Industry Alliance, the NGNP Project remains hindered by several barriers. Specifically, the barriers are as follows: Cost-share requirements. DOE's attempts to implement the cost-share provisions in EPAct 2005 for the NGNP Project have met with resistance from industry, according to DOE officials and industry representatives, because of differences in how EPAct 2005 is interpreted by NE and by the NGNP Industry Alliance. EPAct 2005 provides that activities by industry must be cost-shared in accordance with the research, development, demonstration, and commercial application cost-sharing provisions established under section 988 of the act. 
Specifically, the Secretary must require cost-sharing in accordance with this cost-sharing provision when carrying out a research, development, demonstration, or commercial application program or activity that is initiated after August 8, 2005. For applied research and development activities, industry generally is to provide not less than 20 percent of the cost, but the cost-share may be reduced or eliminated if the Secretary determines doing so is necessary and appropriate. For demonstration and commercial application activities, industry generally is to provide not less than 50 percent of the cost, but the cost-share may be reduced if the Secretary determines that doing so is necessary and appropriate considering any technological risk relating to the activity. However, according to NE officials and representatives from the NGNP Industry Alliance, they have been unable to come to an agreement on implementing the cost-share requirement for funding the remainder of the NGNP Project because of disagreement on the applicable cost-share levels and how and when the cost-share would be applied to specific activities or project phases. The NGNP Industry Alliance favors meeting the total cost-share requirement by measuring costs over the course of the remainder of the NGNP Project rather than on an annual basis. According to the NGNP Industry Alliance cost-share proposal from November 2009, the alliance suggested assessing a lower industry cost-share in the first years of the project and increasing the industry share over time, with industry paying the vast share of the annual project costs by the final years of the project. Under this proposed scenario, the alliance states that the cumulative industry contribution would meet the overall cost-share requirement, and NE's portion of the development costs would largely be paid up front. 
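The difference between annual and cumulative cost-share accounting can be shown with a small numeric sketch. All dollar figures and yearly share percentages below are hypothetical illustrations, not figures from the alliance's actual proposal:

```python
# Hypothetical illustration of the multiyear cost-share idea: industry pays
# a smaller share in early project years and a larger share later, so the
# cumulative industry contribution can still meet the 50 percent threshold
# for demonstration activities even though no single early year does.
# All numbers below are invented for illustration only.

annual_costs = [200, 250, 300, 300, 250]  # total project cost per year ($M, hypothetical)
industry_share_by_year = [0.10, 0.25, 0.50, 0.75, 0.90]  # industry fraction per year (hypothetical)

# Industry dollars paid each year, and the cumulative fraction over the project.
industry_paid = [cost * share for cost, share in zip(annual_costs, industry_share_by_year)]
cumulative_industry_fraction = sum(industry_paid) / sum(annual_costs)

# Under annual accounting, the first two years (10% and 25%) would fall short
# of a 50 percent test; under cumulative accounting, the overall share passes.
print(f"cumulative industry share: {cumulative_industry_fraction:.1%}")  # prints "cumulative industry share: 52.5%"
```

This also makes the alliance's stated concern visible: the government's dollars are concentrated up front, while industry's contribution is back-loaded, which is why the two sides' interpretations of "when" the cost-share applies matter.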
DOE did not fully consider the alliance's proposal for a multiyear approach to the cost-share requirement, according to NE officials, because the project was not proceeding at the time due to funding constraints, competing program priorities, and other factors. Representatives from the NGNP Industry Alliance told us that cost-sharing the development activities on an annual basis is not feasible because it would mean a substantial outlay of funds with a very long payoff time and would expose the industry partners to significant financial risks. NGNP Industry Alliance representatives said these risks include unknowns associated with obtaining regulatory approval from NRC for the prototype reactor, and the risk that NE will not be provided sufficient funds through congressional appropriations to meet its obligations. NE officials told us that they understand the NGNP Industry Alliance's perspective and had been attempting to work out an agreement when DOE decided not to proceed to Phase 2 of the project. Representatives from the NGNP Industry Alliance told us that the impasse over cost-sharing needs to be resolved in order to proceed with the NGNP advanced reactor prototype. Site requirement. According to NE officials, laboratory staff, and NGNP Industry Alliance representatives, the EPAct 2005 requirement that the NGNP reactor prototype be located at the Idaho National Laboratory is another barrier to proceeding with the project. Representatives of the NGNP Industry Alliance said that part of the economic benefit of the reactor prototype would be the use of the high-temperature process heat that results from operating the high-temperature gas-cooled reactor. Alliance representatives said that building the reactor at Idaho National Laboratory foregoes the economic benefit because industries that could potentially use process heat are not located near the laboratory, making the overall prototype reactor less economically attractive. 
Instead, they told us that the NGNP prototype reactor should be located where the petrochemical or other industries that use process heat could benefit from it. This is consistent with a finding in the Nuclear Energy Advisory Committee's 2011 Phase 1 review of the NGNP Project. In its review, the committee stated that the business case to optimize NGNP use for process heat applications and electricity indicates that a site in proximity to a wide range of industrial uses is more appropriate and that siting at the Idaho National Laboratory will not support a partnership agreement with industry. If industry cannot realize an economic benefit from the prototype reactor, it is unlikely that industry would support the reactor being built at the Idaho National Laboratory. Fiscal constraints and competing priorities. NE officials, laboratory staff, industry representatives, and Nuclear Energy Advisory Committee members that we interviewed told us that NE's recent funding levels are inadequate to move forward with the NGNP prototype reactor. NE officials and the NGNP Industry Alliance both estimate that NE's share of the NGNP Project could amount to as much as $2 billion over the remainder of the project, in which costs would be shared with industry. Under the NGNP Industry Alliance's proposal, DOE would provide between approximately $170 million and $330 million annually over the first 6 years of the proposed plan. This compares to the total funding for the Advanced Reactor Concepts subprogram of about $60 million in 2014. NE officials, Nuclear Energy Advisory Committee members, and laboratory staff told us that NE's funding levels are inadequate to move forward toward a prototype reactor, even if it were to focus its resources on one effort. 
Furthermore, an NE official and a member of the Nuclear Energy Advisory Committee told us that current priorities to fund R&D aimed at sustaining the existing light water reactors and focusing on the design and licensing of small modular reactors would have to shift in order to make more funds available for advanced reactor R&D. Competition from natural gas. NE officials, some industry representatives, and Nuclear Energy Advisory Committee members that we interviewed told us that low natural gas prices have made nuclear energy less attractive economically over the past few years, reducing overall interest in nuclear power options. NE officials and industry representatives said that the current atmosphere is not conducive to partnering with industry on advanced reactor projects, including the NGNP Project. The Secretary's October 2011 letter to Congress did not specify which conditions might warrant a change in program direction—that is, proceeding with Phase 2 of the NGNP Project—and NE has not developed a strategy for overcoming the identified barriers hindering the resumption of the project or a set of criteria for determining when a change in program direction would occur. An NE management official that we interviewed stated that conditions that would warrant a change in direction might include Congress legislating a carbon tax, a rise in the price of natural gas, or an increase in funding for the NGNP. In addition, developing such a strategy may involve consultation with the Nuclear Energy Advisory Committee and others, including independent nuclear experts. Without a strategy for overcoming the barriers hindering the restart of the NGNP Project and identifying conditions NE can use for determining when a change in program direction would occur, it will be difficult for NE to demonstrate that it is poised to move forward, and it risks the project being on hold indefinitely. 
According to EPAct 2005, NE was required to select the initial reactor design parameters to be used for the NGNP Project by September 30, 2011, or submit a report to Congress establishing an alternative date for making the selection. However, the Secretary’s 2011 letter to Congress did not specify initial design parameters for the NGNP or specify an alternative date for making a selection. Instead, the letter stated that the initial design parameters had not yet been selected and that such a selection would be made by the public-private partnership once it is formed. Without selecting initial reactor design parameters or establishing a date to make a selection as required by EPAct 2005, it is not clear if or when NE is going to take this next step in deploying the NGNP prototype reactor. In addition, after the Secretary’s decision not to proceed with Phase 2 design activities, NE’s engagement with NRC on licensing issues has decreased. NE maintains a team seeking to engage NRC on NGNP licensing issues, according to NE officials, but NRC has reassigned staff from its NGNP work and engages with NE on a minimal basis, according to NRC officials. NRC officials said that they cannot proceed substantively further in developing a licensing framework until NE has developed a specific design for an advanced reactor technology. Furthermore, not deploying an advanced reactor prototype carries some risks, according to some industry representatives, Nuclear Energy Advisory Committee members, and NE officials we interviewed. Specifically, these risks include (1) falling behind other countries in advanced reactor development and losing market share in the global market for nuclear energy; (2) losing influence on which reactor technologies are developed, which raises safety and nuclear proliferation concerns; and (3) losing its ability to manufacture the components necessary to construct nuclear plants. 
By not deploying an advanced reactor prototype, the United States risks falling behind other countries—such as Japan, Russia, China, South Korea, and France—that are actively working to deploy and commercialize advanced reactors, according to a Nuclear Energy Advisory Committee report. For example, Russia currently has two sodium-cooled fast reactors—one experimental and one commercial—in operation and is developing or constructing additional sodium-cooled fast reactor technologies, and it has plans to export reactor technology to other nations. In addition, China has an operating sodium-cooled fast reactor and high-temperature gas-cooled reactor on a test reactor scale and is in the process of building a prototype high-temperature gas-cooled reactor, according to NE officials. In potentially losing its global leadership position in developing nuclear technologies, the United States risks losing market share in the global market for nuclear energy, which would cost the U.S. economy high-paying jobs in the nuclear industry, according to these individuals. By falling behind other countries in advanced reactor development, the United States also risks losing influence on determinations of which reactor technologies are developed, with implications for the safety of reactor operations worldwide, as well as implications for how resistant the technologies are to nuclear proliferation—including the safe, effective disposal of nuclear waste—according to NE officials, laboratory staff, and Nuclear Energy Advisory Committee members that we spoke to. Specifically, according to members of the Nuclear Energy Advisory Committee, if the nation is not leading the development of advanced reactors, other countries may operate reactors that do not meet the highest safety standards and may not take adequate steps to ensure nuclear waste is handled appropriately and properly secured.
Similarly, the United States risks losing its ability to manufacture the components necessary to construct nuclear plants, according to laboratory staff and Nuclear Energy Advisory Committee members that we interviewed. In addition, by not deploying an advanced reactor, NE risks losing staff, including engineers with the knowledge and experience necessary to design and build advanced reactors, according to an NE official and laboratory staff. For example, without a specific goal of developing an advanced reactor prototype, NE staff are more likely to leave NE for jobs with a better sense of mission, according to these officials. Energy demand in the United States is expected to rise considerably over the coming decades, and concerns remain over energy security and greenhouse gas emissions from the burning of fossil fuels. While nuclear energy accounts for about 20 percent of electricity generation in the United States and produces no air pollution or greenhouse gases, the accident at Japan’s Fukushima Daiichi commercial nuclear power plant in March 2011 highlighted ongoing concerns about the safety of nuclear plants, and concerns also exist about potential threats of nuclear proliferation and terrorism. With this in mind, it is important that nuclear power plants continue to evolve and provide energy economically, while also addressing safety and proliferation concerns. By conducting nuclear reactor R&D, NE has a critical role to play as it supports existing light water reactors, as well as a new generation of advanced nuclear reactors. However, in 2011, DOE informed Congress that it would not proceed with Phase 2 of the NGNP Project until circumstances warranted a change in direction, and the project remains hindered by several barriers, including the cost-share and site requirements of EPAct 2005.
NE officials have attempted to work out a cost-share agreement with the alliance, but different interpretations of the cost-share requirements by DOE and the NGNP Industry Alliance have created an impasse, and no agreement had been reached before DOE determined that it would not proceed to Phase 2 of the project. Another barrier to proceeding with the project is the EPAct 2005 requirement that the NGNP reactor prototype be located at the Idaho National Laboratory. Building the reactor there forgoes the economic benefit of the reactor’s process heat because industries that could potentially use the prototype reactor’s high-temperature process heat are not located near the laboratory. If industry cannot realize an economic benefit from the prototype reactor, it is unlikely that industry would support the reactor being built at the Idaho National Laboratory. Moreover, DOE’s October 2011 letter notified Congress that the department had selected the NGNP technology, as required by EPAct 2005; acknowledged that the department had not, by September 30, 2011, selected the initial design parameters for the NGNP or identified the date upon which it would do so, as required by EPAct 2005; and essentially put Phase 2 of the NGNP Project on hold until conditions favorable to completing the NGNP warranted a change in direction. However, the letter did not specify which conditions might warrant a change in program direction—that is, proceeding with Phase 2 of the NGNP Project—and NE has not developed a strategy that addresses the identified barriers to restarting the project or that identifies such conditions. Without a strategy for overcoming the barriers hindering the restart of the NGNP Project and identifying conditions NE can use for determining when a change in program direction would occur, the project may be on hold indefinitely.
Furthermore, without selecting initial reactor design parameters and reporting the parameters to Congress, as required by EPAct 2005 for completing Phase 1 of the project, or establishing a date to make a selection, it is not clear if or when NE is going to take this next step and proceed with Phase 2 of the NGNP Project. To better prepare the Department of Energy to meet the requirement of the Energy Policy Act of 2005 to deploy the NGNP prototype reactor, we recommend that DOE take the following two actions: Develop, in consultation with the Nuclear Energy Advisory Committee and independent nuclear experts, as appropriate, a strategy to proceed with Phase 2 of the NGNP Project, outlining conditions that will warrant a change in program direction, remaining research and development activities, projected project budget and schedule, and steps necessary to overcome barriers to successful completion of the NGNP Project. Provide a report to Congress complying with the statutory requirement for Phase 1 of the NGNP Project by providing initial design parameters or a date for providing them. The report should also provide an updated status of the issues DOE identified in its 2011 letter to Congress and outline any additional barriers to proceeding with Phase 2 of the project, including the status of the establishment of a public-private partnership; the project strategy, including conditions that would warrant restarting the project; and a legislative proposal, if necessary, to address any barriers to proceeding with the project, including the site and cost-share requirements. We provided a draft of this report to DOE for review and comment. In written comments, DOE’s Assistant Secretary for Nuclear Energy, responding on behalf of DOE, wrote that DOE agreed in principle with our first recommendation and respectfully disagreed with our second recommendation. DOE’s written comments on our draft report are reproduced in appendix II. 
In addition, DOE provided technical comments, which we incorporated in the report as appropriate. In its comment letter, DOE stated that it agreed in principle with our recommendation that it develop a strategy to proceed with Phase 2 of the NGNP Project. Moreover, DOE stated that its current strategy is to continue updating analyses of requirements for successful commercialization of reactor technology to reflect changing market conditions, research and development accomplishments, and the maturity of the licensing framework. However, this strategy does not outline steps DOE could take to proactively overcome the barriers hindering the resumption of the NGNP Project, nor does it outline criteria for determining when a change in program direction would occur. We continue to believe that developing a strategy to proceed with Phase 2 of the NGNP Project is important because without a strategy it will be difficult for NE to demonstrate that, upon completion of Phase 1, it will be poised to develop a final design and construct and operate the prototype reactor. Moreover, not having a strategy for proceeding with Phase 2 could result in the project being on hold indefinitely. DOE respectfully disagreed with our recommendation that it provide a report to Congress that, among other things, provides initial design parameters or a date for providing them and outlines barriers to proceeding with Phase 2 of the project. DOE stated that such a report was not advisable, useful, or necessary as a means for the Department to comply with the statutory requirements for Phase 1 of the NGNP Project, and further stated that the Department is in compliance with the relevant statutory requirements.
As DOE explained, it reported to Congress in 2011 that while it had selected the hydrogen production technology, as required by EPAct 2005, it had not selected the initial design parameters for the project and that based on the recommendations of the Nuclear Energy Advisory Committee, fiscal constraints, competing priorities, projected cost of the prototype, and the inability to reach agreement with industry on cost sharing, it would not proceed with Phase 2 design activities at that time. Instead, it would continue to focus on high-temperature reactor R&D activities and establishment of a public-private partnership, among other things, until conditions warranted a change in direction. DOE did not, however, establish an alternative date for selecting the initial design parameters, as EPAct 2005 required. Rather, it stated that selection of initial design parameters would be made by the public-private partnership once it is formed. Given that almost 3 years have passed since the letter to Congress, we believe that the recommended report is warranted and would serve to inform Congress of the status of the NGNP Project and provide transparency and accountability regarding DOE’s intentions for completing Phase 1 and proceeding with Phase 2 of the project. For example, by providing a firm date for selecting the initial design parameters of the NGNP prototype reactor, DOE could be held accountable to meeting that date or could engage in a discussion about whether and why that date should be further extended. Similarly, an updated report to Congress could include a candid description of the ongoing barriers to moving forward, which could spur discussions resulting in legislation or other remedies to mitigate these barriers. We are sending copies of this report to the Secretary of Energy, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact Frank Rusco at (202) 512-3841 or [email protected] or Dr. Timothy M. Persons at (202) 512-6522 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Nuclear reactors generate heat by sustaining a fission chain reaction in nuclear fuel. Nuclear fission reactions can occur when a neutron strikes the nucleus of a large atom, causing that nucleus to split, or fission. The result of a fission reaction is typically two fission fragments, or smaller nuclei; 2 or 3 new fast-moving neutrons; and significant heat. In a nuclear reactor, the large atoms used for fission are typically the fissile isotopes uranium-235 or plutonium-239, and the new neutrons produced by a fission reaction are used to initiate new fission reactions, resulting in a sustained fission chain reaction. The heat generated by these fission reactions is typically used to create steam and drive a steam turbine to generate electricity. Some reactors may also operate at particularly high temperatures and can use the heat either to generate electricity or to supply process heat that can be used for various industrial processes, replacing other heat sources such as natural gas. Nuclear reactors typically fall into one of two types based on the neutron spectrum, or neutron energies, at which the fission reactions occur as follows: Thermal reactors optimize the fission reaction rate in their fuel. This is done by slowing down, or moderating, the high-energy fast neutrons that are the products of fission reactions. This thermalization of the fast neutrons increases the likelihood that a neutron will initiate a fission reaction. Currently deployed light water reactors, including pressurized water reactors and boiling water reactors, are thermal reactors.
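The neutron-induced fission described above can be written schematically. The fragment pair shown below (barium-141 and krypton-92) is only one common textbook example; actual fission events produce a distribution of fragment pairs and, on average, 2 to 3 neutrons, along with roughly 200 MeV of energy per fission:

```latex
% One representative fission of uranium-235: a neutron is absorbed, the
% nucleus splits into two fragments, and (in this example) three fast
% neutrons and roughly 200 MeV of energy are released. Mass numbers and
% charges balance: 235 + 1 = 141 + 92 + 3, and 92 = 56 + 36.
\[
  {}^{235}_{\,92}\mathrm{U} \;+\; {}^{1}_{0}n
  \;\longrightarrow\;
  {}^{141}_{\,56}\mathrm{Ba} \;+\; {}^{92}_{36}\mathrm{Kr}
  \;+\; 3\,{}^{1}_{0}n
  \;+\; \text{energy}\ (\approx 200\ \mathrm{MeV})
\]
```

The 2 to 3 neutrons released per fission are what make a self-sustaining chain reaction possible: at least one of them, on average, must go on to initiate another fission.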
Fast reactors, by contrast, do not moderate the fission neutrons, leaving them as fast neutrons. A fast neutron has a lower likelihood of initiating a fission event than a slow neutron, so the chain reaction can be more difficult to sustain, but it has the benefit of producing more neutrons when fission does occur. These surplus neutrons, as compared to the number of neutrons produced in thermal reactors, allow fast reactors to be more effective than thermal reactors at creating, or breeding, new fuel through neutron bombardment of uranium-238 (creating plutonium-239). Fast reactors optimized for fuel production in this manner are called fast breeder reactors and can produce more fuel through breeding than they consume. Fast reactors may also use spent fuel from other nuclear reactors as fuel and thereby reduce long-term fuel disposal needs. While there are a large number of reactor technologies that can differ significantly, the fission reaction in a reactor occurs in the central region of a reactor called the reactor core. The reactor core typically contains several components as follows: Nuclear fuel. Nuclear reactors need fissile isotopes, such as uranium-235 and plutonium-239, to sustain chain reactions. Commercial reactors often use uranium ore that has been enriched in the isotope uranium-235 as their fissile fuel; the rest of the fuel consists of the non-fissile uranium-238. However, reactor operation will result in the conversion of some uranium-238 to the fissile isotope plutonium-239, which may then fission and contribute to power generation, and some reactor fuel may start with some of the uranium-235 mixed with plutonium-239 (sometimes referred to as a mixed oxide fuel). Fast reactors can also use spent fuel from other reactors as fuel and can be very effective at converting uranium-238 into plutonium-239. Some reactors can also utilize uranium-233 or thorium-232 as components of their fuel. Moderator. 
Thermal reactors use a moderator material to slow down, or thermalize, the fission neutrons in order to sustain the fission reaction. This is needed because neutrons produced by fission reactions are too fast (or energetic) to have a high likelihood of initiating a new fission reaction in fuel. Fast reactors are designed to utilize fast neutrons for the fission reactions and fuel breeding and, as such, do not use a moderator. Coolant. To remove heat from the core, a coolant—typically water, a gas, or liquid metal—is circulated through the core. The coolant both prevents the core from overheating (which could damage or melt the fuel) and carries energy, in the form of heat, outside the core for electricity production, typically by generating steam that then drives a steam turbine. In some reactor types, the coolant can also function as the reactor’s moderator. Reaction control. Reactors can use different techniques to maintain the fission chain reaction at appropriate rates. For example, control rods may be inserted into reactor cores to absorb neutrons and slow down (or stop) the chain reaction, or neutron-absorbing materials, such as boric acid in pressurized water reactors, may be introduced into the coolant system to achieve a similar effect. Reactor technologies are classified as either thermal or fast reactors (although some technologies are “epithermal” and fall in between the two types) and by the materials used for the moderator or coolant. For example, a pressurized water reactor is a thermal reactor using water as both a coolant and moderator, and a gas-cooled fast reactor is a fast reactor using gas (carbon dioxide or helium) as a coolant. Table 2 lists and provides information about the reactor types that are either currently operating in the United States or are advanced reactor designs under consideration for development. In addition to the contacts named above, Ned Woodward (Assistant Director), John Barrett, Elizabeth Beardsley, John Delicath, R.
Scott Fletcher, Cindy Gilbert, Michael Krafve, Tom Lombardi, Mehrzad Nadji, and Kiki Theodoropoulos made key contributions to this report.
|
NE conducts R&D on advanced nuclear reactor technologies with multiple aims, including (1) improving the economic competitiveness of nuclear technology to ensure that nuclear power continues to play a role in meeting our nation's energy needs; (2) increasing safety; (3) minimizing the risk of nuclear proliferation and terrorism; and (4) addressing environmental challenges, such as reducing greenhouse gas emissions. External groups have been critical of NE for, among other things, how it prioritizes advanced reactor R&D. GAO was asked to review NE's advanced reactor R&D efforts. This report (1) describes NE's approach to advanced nuclear reactor R&D and (2) examines how NE plans and prioritizes its advanced reactor R&D activities, including deploying an advanced reactor. GAO reviewed laws and reports concerning NE's efforts to develop advanced reactor technologies and interviewed NE officials and a nonprobability sample of companies developing such technology, selected because of their involvement with DOE's R&D efforts. The Department of Energy's (DOE) Office of Nuclear Energy's (NE) approach to advanced reactor research and development (R&D) focuses on three reactor technologies—high-temperature gas-cooled reactors, sodium-cooled fast reactors, and fluoride-salt-cooled high-temperature reactors—but NE is also funding research into other advanced reactor technologies. NE's approach is to conduct research in support of multiple advanced reactor technologies, while collaborating with industry and academia, with the ultimate goal for industry to take the results of NE's research to the next step of development and commercialization. This approach provides several advantages, including flexibility in responding to changes in future U.S. energy policy. 
Many representatives that GAO talked to from the nuclear power industry and the National Academy of Sciences agree with NE's approach, saying that current policies on controlling greenhouse gas emissions and disposing of nuclear waste do not make a compelling case for choosing a reactor technology to develop. However, others GAO talked to are critical of some of the reactor technologies NE chooses to research, citing economic and technological challenges. The Nuclear Energy Advisory Committee has criticized NE's approach, recommending that NE focus its efforts on a smaller number of technologies to help ensure that a reactor prototype is deployed. To remain aware of industry's R&D needs and international nuclear energy developments, NE regularly collaborates with industry and international organizations. NE uses internal and external reviews to set program and funding priorities for advanced reactor R&D activities and to evaluate progress toward program goals. For example, NE conducts internal monthly and quarterly reviews to discuss project status, budgets, and technical highlights. Furthermore, NE's R&D efforts are periodically reviewed by external entities, including the Nuclear Energy Advisory Committee. Among the advanced reactor technologies that NE's R&D currently supports, the high-temperature gas-cooled reactor is the technology that is most likely to be deployed and commercialized in the near term, according to an NE planning document. NE officials said this likelihood is based on the wide range of potential industry market applications and because of substantial government investments in the technology's development. NE has been pursuing this technology under the Next Generation Nuclear Plant (NGNP) Project, as established by the Energy Policy Act of 2005 (EPAct 2005). Under EPAct 2005, DOE is to deploy a prototype reactor for NGNP by the end of fiscal year 2021. 
However, in 2011, DOE decided not to proceed with the deployment phase of this project, citing several barriers. For example, NE and industry have been unable to reach an agreement on a cost-share arrangement to fund the deployment phase because of a disagreement on the applicable cost-share levels and how and when the cost-share would be applied to specific activities or project phases. Although NE continues to conduct R&D for the NGNP Project, it has not developed a strategy to overcome the cost-share issue and other barriers to resuming the deployment phase of the project. Furthermore, DOE has not selected initial reactor design parameters or reported to Congress on an alternative date for making this selection. Without doing so, it is not clear when NE is going to take this next step in deploying the NGNP prototype reactor and it risks the project not being completed by the targeted date in 2021. To better prepare DOE to meet the requirement of EPAct 2005 to deploy the NGNP prototype reactor, GAO recommends that DOE develop a strategy for resuming the NGNP Project and provide a report to Congress updating the status of the project. DOE agreed in principle with GAO's first recommendation and respectfully disagreed with the second. GAO believes these recommendations remain valid as discussed in the report.
|
DOD must be capable of rapidly deploying armed forces to respond to contingency and humanitarian operations around the world. Airlift and tanker aircraft play a vital role in providing this capability. Over the past 25 years, DOD has invested almost $141 billion to develop, procure, and modify its airlift and tanker forces with an additional investment planned for fiscal years 2007 through 2011 of $32 billion. Recent annual funding levels are at the highest levels in two decades. (See figure 1.) In December 2005, DOD issued a report on the study of its mobility capabilities. The goal of this Mobility Capabilities Study was to identify and quantify the mobility capabilities needed to support U.S. strategic objectives into the next decade. The MCS determined that the projected mobility capabilities are adequate to achieve U.S. objectives with an acceptable level of risk during the period from fiscal years 2007 through 2013; that is, the current U.S. inventory of aircraft, ships, prepositioned assets, and other capabilities are sufficient, in conjunction with host nation support. The MCS emphasized that continued investment in the mobility system, in line with current departmental priorities and planned spending, is required to maintain these capabilities in the future. This includes, for example, fully funding Army prepositioned assets as planned and completing a planned reengineering of the C-5 aircraft. In our previous reports concerning acquisition outcomes and best practices, we have noted the importance of matching warfighter requirements with available resources, a responsibility shared by the requirements and acquisition communities in DOD. As described in Air Force implementing guidance, there is within DOD a distinct separation between the requirements authority and acquisition authority. Under this guidance, this separation requires early and continued collaboration between both communities. 
Analyses done for the MCS contained methodological limitations that create concerns about the adequacy and completeness of the study, and decision makers approving the KC-X tanker proposal lacked required analyses identifying need and associated risk for a passenger and cargo capability. While DOD used an innovative approach in conducting the study and acknowledged some methodological limitations in its report, it did not fully disclose how these limitations could affect the MCS conclusions and recommendations. In September 2006, we reported that DOD’s conclusions were based, in some instances, on incomplete data and inadequate modeling and metrics that did not fully measure stress on the transportation system, and that, in some cases, MCS results were incomplete, unclear, or contingent on further study, making it difficult to identify findings and evaluate evidence. It is not clear how the analyses done for the study supported DOD’s conclusions, and we suggested that decision makers exercise caution in using the results of this study to make programmatic decisions. As measured against relevant generally accepted research standards, we identified limitations in the MCS study and report that raise questions. Among our findings: aspects of modeling and data were inadequate in some areas because data were lacking and some of the models used could not simulate all relevant aspects of the missions. The report did not explain how these limitations could affect the study results or what the effect on the projected mobility capabilities might be. Relevant research standards require that models used are adequate for the intended purpose and represent a complete range of conditions, and also that data used are properly generated and complete.
For example, the MCS modeled hypothetical homeland defense missions, rather than missions derived from a well-defined and approved concept of operations for homeland defense, because the specific details of the missions were still being determined, and DOD acknowledged that the data used may be incomplete. The MCS also was unable to model the flexible deterrent options/deployment order process to move units and equipment into theater due to lack of data, but the study assumed a robust use of this process, which in one scenario accounted for approximately 60 percent of the airlift prior to beginning combat operations. In addition, the MCS report contains more than 80 references to the need for improved modeling, and 12 of these references call for additional data or other refinements. Additionally, the MCS modeled the year 2012 to determine the transportation capabilities needed for the years 2007 through 2013. The year 2012 did not place as much demand for mobility assets in support of smaller military operations, such as peacekeeping, as other years. However, DOD officials considered 2012—the year modeled—as “most likely” to occur and stated that statistically it was not different from other years in the 2007 to 2013 period, even though its number of smaller military operations was the lowest of any of the years reviewed. As I mentioned, we have reported before on the lack of data available for analysis that could benefit decision makers. In September 2005, we reported that the Air Force captured data on short tons transported but did not systematically collect and analyze information on operational factors, such as weather and runway length, that impact how much can be loaded on individual missions. Therefore, Air Force officials could not know how often it met its secondary goal to use aircraft capacity as efficiently as possible.
Without this information, Air Mobility Command officials do not know the extent to which opportunities exist to use aircraft more efficiently and whether operational tempo, cost, and wear and tear on aircraft could be reduced. In addition, DOD officials do not have the benefit of such analysis to determine future airlift requirements for planning purposes. While the MCS concluded that combined U.S. and host nation transportation assets were adequate to meet U.S. objectives with acceptable risk, the report, in describing the use of warfighting metrics in its analyses, does not provide a clear understanding of the direct relationship of warfighting objectives to transportation capabilities. Acknowledging this point, the report stated that further analysis is required to understand the operational impact of increased or decreased strategic lift on achieving warfighting objectives. Relevant generally accepted research standards require that conclusions be supported by analyses. The use of warfighting metrics is a measure to determine whether combat tasks, such as achieving air superiority, are achieved. However, they do not measure whether appropriate personnel, supplies, and equipment arrived in accordance with timelines. As a result, we could not determine how the study concluded that planned transportation assets were adequate because the study did not contain a transparent analysis to support its conclusion or a clear roadmap in the report to help decision makers understand what that conclusion meant in terms of type and number of mobility assets needed. Previous DOD mobility studies primarily used mobility metrics, which measured success in terms of tons of equipment and personnel moved per day to accomplish military objectives. The use of both warfighting and mobility metrics to measure success would allow decision makers to know whether combat tasks were achieved and how much strategic transportation is needed to accomplish those tasks. 
In some cases, the MCS results were incomplete, unclear, or contingent on further study, making it difficult to identify findings and evaluate evidence. Relevant research standards require results to be presented in a complete, accurate, and relevant manner. For example, the report contains several recommendations for further studies and assessments, five of which are under way. However, at the time of our report, DOD had no plans to report the effect of these studies on the MCS results after the studies are complete. In addition, the report contains qualified information that is not presented clearly, such as varying assessments of intratheater assets in three different places in the report. The lack of clarity and conciseness of the reported results can limit the study’s usefulness to decision makers and stakeholders. The MCS report also made recommendations to conduct further studies, develop plans and strategies, and improve data collection and mobility models. In fact, DOD officials told us at the time that a Mobility Capabilities Study-2006 was underway, as well as studies on intratheater lift, aerial refueling, and other mobility issues. However, unless DOD addresses the concerns I just outlined for you, decision makers may be unable to clearly understand the operational implications of the study results and make fully informed programmatic investment decisions concerning mobility capabilities. Also, some of the underlying assumptions used in the MCS have now changed significantly, such as the assumption that Army prepositioned equipment is in place and fully funded, which will no longer be the case. Therefore, the MCS analyses and results, which would be the starting point for any new studies, may no longer be relevant. Mandatory Air Force policy requires Air Force organizations to use a formal capabilities-based approach to identify, evaluate, develop, field, and sustain capabilities that compete for limited resources. 
Contrary to mandatory Air Force implementing guidance, however, the Air Force proposal for a replacement refueling aircraft, the KC-X tanker, included a passenger and cargo capability without analyses identifying an associated gap, shortfall, or redundant capability. According to mandatory Air Force implementing guidance, analyses supporting the decision-making process should assess a capability based on the effects it seeks to generate and the associated operational risk of not having it. In this case, the supporting analyses determined neither need nor risk with regard to a passenger and cargo capability. Air Force officials could not provide supporting information sufficient to explain this discrepancy between the analyses and their proposal. Without sound analyses, the Air Force may be at risk of spending several billion dollars unnecessarily for a capability that may not be needed to meet a gap or shortfall. Military decision makers approved the passenger and cargo capability as a requirement although supporting analyses identified no need or associated risk. Mandatory Air Force implementing guidance states that senior leaders must use the documented results of analyses to confirm the identified capability requirement. The Air Force Requirements for Operational Capabilities Council validated, and the Chairman of the Joint Chiefs of Staff's Joint Requirements Oversight Council validated and approved, the KC-X tanker proposal with a passenger and cargo capability. Following the approvals of the oversight councils, DOD plans to solicit proposals and award a contract for the KC-X tanker late in fiscal year 2007. 
At this time, the Under Secretary of Defense for Acquisition, Technology and Logistics, who supervises DOD acquisition, must certify, as Milestone Decision Authority for the proposed tanker acquisition, that, among other things, the Joint Requirements Oversight Council has accomplished its statutory duties and that the proposed program is in compliance with DOD policies and regulations. However, the absence of analyses identifying a capability gap, shortfall, or redundancy, and the Joint Requirements Oversight Council approval of the program without these analyses is contrary to policy and implementing guidance and could preclude certification of the program by the Under Secretary. Absent this certification, the acquisition program for the KC-X tanker cannot begin. In this report, we recommended that the Secretary of Defense direct the Secretary of the Air Force to accomplish the required analyses to evaluate the proposed passenger and cargo capability so as to determine if there is a gap, shortfall, or redundancy, assess the associated risk, and then submit such documentation to the Joint Requirements Oversight Council for validation. We also recommended that, once these analyses are completed, the Secretary of Defense direct the Chairman, Joint Chiefs of Staff, to formally notify the Under Secretary of Defense for Acquisition, Technology and Logistics that such analyses have been completed as required prior to certification of the program to Congress. DOD disagreed with our first recommendation to accomplish the required analyses. In its comments, DOD stated that through the Joint Capabilities Integration and Development System process, the Air Force presented analysis and rationale for the passenger and cargo capability. DOD further stated that its Joint Requirements Oversight Council and the Air Force concluded that the analysis was sufficient justification for the capability and the Joint Requirements Oversight Council validated the requirement. 
However, as our report points out, DOD did not perform the required analyses and failed to identify a gap, shortfall, or redundancy for the passenger and cargo capability. Considering the requirement for analyses that separate needs from wants and the risk of unnecessary expenditures in this multiyear, multibillion-dollar acquisition program, we continue to believe that our recommendation has merit and that the analyses required by mandatory guidance are necessary to inform the decision that begins the acquisition. DOD agreed with our recommendation to formally notify the Under Secretary of Defense for Acquisition, Technology and Logistics once the required analyses have been completed. However, DOD did not offer assurance that the Air Force would accomplish the required analyses that evaluate the proposed passenger and cargo capability as we recommended, and then submit such documentation to the Joint Requirements Oversight Council for validation. We believe that the time it could take to accomplish the required analyses and submit them for revalidation by the Joint Requirements Oversight Council could delay the Under Secretary's certification until just prior to the Milestone B decision and may frustrate the congressional oversight that would otherwise be permitted under section 2366a. We believe that in a program committing $120 billion over several decades, the review confirming that needs are justified should occur as far in advance of program initiation as possible. In light of the DOD comments on our report, we have put forward a matter for congressional consideration. 
Specifically, we are suggesting that Congress consider requiring, in addition to the certification described by section 2366a of title 10, United States Code, that the Under Secretary of Defense for Acquisition, Technology and Logistics make a specific certification that the Air Force employed a sound, traceable, and repeatable process producing analyses that determined whether there is a gap, shortfall, or redundancy and assessed the associated risk with regard to passenger and cargo capability for the KC-135 Recapitalization, and that, consistent with service policy, these analyses are made available to the Joint Requirements Oversight Council prior to the Under Secretary's certification of the program pursuant to section 2366a of title 10, United States Code. The Air Force intends to replace the fleet of more than 500 tankers, and the Mobility Capabilities Study of 2005 set the requirement for tankers at a range of between 520 and 640 aircraft. Replacement of this fleet is estimated to cost a minimum of $72 billion. Compared to a refueling aircraft without a passenger and cargo capability, the inclusion of the capability is estimated, according to the Analysis of Alternatives done for the KC-X tanker, to increase costs by 6 percent. The Joint Requirements Oversight Council approval of the proposal of a replacement tanker aircraft with the passenger and cargo capability, without an established need supported by analyses and without an analysis of risk, could result in an unnecessary expenditure of at least $4.3 billion by our estimates. In our August 1996 report, U.S. Combat Air Power: Aging Refueling Aircraft Are Costly to Maintain and Operate, we recommended consideration of a dual-use aircraft that could conduct both aerial refueling and airlift operations as a replacement for the KC-135. We recommended that the Secretary of Defense require that future studies and analyses of replacement airlift and tanker aircraft consider accomplishing the missions with a dual-use aircraft. 
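The $4.3 billion figure cited above is consistent with simple arithmetic on the estimates reported in the testimony; the short sketch below is an illustrative cross-check, not an independent cost estimate.

```python
# Illustrative cross-check of the cost figures cited in the testimony.
# Inputs are the reported estimates, not new data.
min_fleet_cost_billions = 72.0   # minimum estimated cost to replace the tanker fleet
cargo_capability_premium = 0.06  # estimated cost increase from the passenger/cargo capability

extra_cost = min_fleet_cost_billions * cargo_capability_premium
print(f"Estimated added cost: ${extra_cost:.2f} billion")  # about $4.3 billion
```

Because $72 billion is a minimum, 6 percent of it is a floor on the added cost, which is why the testimony says "at least $4.3 billion."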
DOD only partially concurred with this recommendation, expressing concern at that time about how a dual-use aircraft would be used and whether one mission area might be degraded to accomplish the second mission. The lack of analyses done to support the current proposal still does not give DOD officials information about how a dual-use aircraft would be used or whether the primary mission of aerial refueling would be degraded. Over the past 25 years, DOD has invested more than $140 billion in its airlift and tanker forces. Success for acquisitions requires sound decisions to ensure that program investments are getting promised returns—on-time deliveries to the field, predictable costs, and sufficient capability. We have reviewed four major airlift programs and found they did not meet delivery schedules and were over cost. These programs did not involve huge technological leaps but presented significant design challenges to integrate new systems into the older aircraft. A consistent problem plaguing the programs was insufficient analysis of the requirements and resources at the programs' outset, a key systems engineering activity. The divergence between these programs' experience and best product development practices is a contributing factor to their outcomes. We assessed four airlift programs as part of our annual assessment of DOD's major acquisition programs, and each has experienced cost growth and schedule delays. Despite being based largely on low technological risks involving mature systems, these programs have failed to deliver on the business cases that justified their initial investment. DOD estimates it will need over $12 billion between 2007 and 2013 to develop, modify, or procure these aircraft. 
The specific airlift programs include the following:

The Air Force's C-5 Avionics Modernization Program (AMP) is intended to improve the mission capability rate and transport capabilities, as well as reduce ownership costs, by incorporating global air traffic management, navigation and safety equipment, modern digital equipment, and an all-weather flight control system.

The Air Force's C-5 Reliability Enhancement and Reengining Program (RERP) is intended to enhance the reliability, maintainability, and availability of the C-5 through engine replacements and modifications to subsystems such as the electrical and fuel subsystems. The C-5 aircraft will require installation of the AMP capabilities before the aircraft engines can be replaced.

The Air Force's C-130 Avionics Modernization Program (AMP) is intended to standardize the cockpit configurations and avionics of different models of C-130 aircraft by providing such things as communication and navigational system upgrades, a terrain avoidance and warning system, dual flight management systems, and new data links.

The C-130J, the latest model of the C-130 aircraft series, is designed primarily for the transport of cargo and personnel within a theater of operation. Variants of the C-130J are being acquired by the Air Force (e.g., Air Mobility Command and Special Operations Command), Marine Corps, and Coast Guard.

Each of these programs has experienced problems that have impacted cost and schedule (see table 1). The net effect of the outcomes to date is that DOD is now paying more to modify or acquire these systems and the warfighter has had to wait longer than initially planned before new capability is delivered. For example, the Air Force now expects by 2011 to have completed the modification of about 135 fewer C-130 airlift aircraft when compared to its plan of 2 years ago. We anticipate there could be additional cost increases and schedule delays reported in the future. 
For example, the C-130 AMP fiscal year 2008 budget indicates that the total program costs have increased almost $700 million and planned quantities have been reduced from 434 units to 268 units—nearly doubling the program acquisition unit costs since December 2005. The program recently notified Congress of a critical Nunn-McCurdy breach concerning its unit cost increases. The budget also shows the Air Force plans to fund the modification of 110 C-5 aircraft with AMP improvements instead of 59 aircraft as stated in last year's budget. According to C-5 RERP program officials, total program costs are expected to increase due to costs associated with the engine, pylons, and labor. Over the last several years, we have undertaken a body of work that examines weapon acquisition issues from a perspective that draws upon lessons learned from best commercial practices for product development. We have found that a key to successful product development is the formulation of a business case that provides demonstrated evidence that (1) the warfighter need exists and that it can best be met with the chosen concept and (2) the concept can be developed and produced within existing resources—including proven technologies, design knowledge, adequate funding, and adequate time to deliver the product when needed. The business case is then executed through an acquisition process that is anchored in knowledge. Leading firms ensure a high level of knowledge is achieved at key junctures in development, characterized as the knowledge points described below:

Knowledge point 1: A match must be made between the customer's needs and the developer's available resources—technology, engineering knowledge, time, and funding—before a program starts.

Knowledge point 2: The product's design must be stable and must meet performance requirements before beginning system demonstration. 
This is primarily evidenced by the release of 90 percent of the design drawings by the critical design review and successful system integration.

Knowledge point 3: The product must be producible within cost, schedule, and quality targets and demonstrated to work as intended before production begins.

There is a synergy in this process, as the attainment of each successive knowledge point builds on the preceding one. We have found that if the knowledge-based acquisition concept is not applied, a cascade of negative effects becomes magnified in the product development and production phases of an acquisition program, leading to cost increases and schedule delays, poor product quality and reliability, and delays in getting new capability to the warfighter (see figure 2). DOD programs often do not capture sufficient knowledge by critical junctures but decide to move forward regardless. The airlift systems we reviewed were not immune to this condition and have experienced unnecessary cost growth and schedule delays as a result. While we do not have in-depth knowledge of the specific details for these programs, we do have a broad understanding of the basic underpinnings that led to the problems. All of the programs were considered low technological risks by DOD because they planned to rely extensively on proven commercial and modified off-the-shelf technology for their new capabilities. However, these acquisitions have turned out to be more difficult than expected. The programs did not follow sound systems engineering practices for analyzing requirements and for ensuring a well-integrated design at the right time. As a result, each program has encountered some difficulty in achieving design and production maturity as the program moved forward. Some of the causes of the problems encountered include the following:

Failing to fully analyze the resources needed to integrate proven commercial technologies and subsystems into a military system before initiating development. 
Not achieving a stable design before beginning the system demonstration phase, resulting in costly design changes and rework.

Failing to demonstrate the aircraft would work as required before making large production investments.

In all these instances where appropriate knowledge was not captured before moving forward, the result has been a predictable need for additional resources, as shown below in specific airlift programs. The C-5 AMP entered production without demonstrating that the system worked as intended and was reliable. The program entered production just 2 months after flight testing started and ran into significant design problems while trying to complete development. Problems uncovered after flight test began required modifications to the aircraft design, which increased by 50 percent the number of engineering drawings needed for the system. Addressing these problems delayed the initial operational capability by a year and contributed to the significant growth in the program's unit costs. Even today, 4 years after production was initiated, performance concerns remain with the C-5 AMP. The Director of Operational Test and Evaluation recently reported that the C-5 AMP is not operationally suitable because of high component failure rates, inadequate diagnostics systems, and low reliability rates. The C-5 RERP did not demonstrate design stability before entering the system demonstration phase, which resulted in rework and schedule delays. At the time the program entered system demonstration, program officials believed that they had released 90 percent of the design drawings but had not successfully demonstrated that the subsystems could be integrated onto the C-5 aircraft. During system integration activities the program found that the “pylon/thrust reverser” had to be redesigned to address overweight conditions and safety concerns. 
The program’s design efforts have also been hampered by the fact that its success is dependent upon the success of the C-5 AMP program. Presently, according to test officials, the C-5 AMP design is not mature enough to provide a baseline design for the RERP efforts. These design issues have contributed to an increase in costs and a 2-year delay in delivering an initial operational capability. The C-130 AMP began development in 2001 without a clear understanding of the resources needed to integrate proven commercial technologies into a military system. According to the program office, it clearly underestimated the complexity of the engineering efforts that were needed to modify the different models of the C-130. At the critical design review held in 2005—the point at which the design is expected to be stable and ready to begin the system demonstration phase—the program had not proven that the subsystems and components could be successfully integrated into the product. Upon integrating the new avionics into the test aircraft, program officials realized that they had significantly underestimated (by 400 percent) the amount of wiring and the number of harnesses and brackets needed for the installation. As a result, the design had to be reworked, delaying the delivery of the test aircraft and increasing costs. The Air Force procured the C-130J without assurances that the aircraft would work as intended. Program officials believed the design was mature when procurement began in 1996, largely because the C-130J evolved from earlier models and was offered as a commercial item. However, the C-130J has encountered numerous deficiencies that had to be corrected in order to meet the minimum warfighter requirements, delaying the initial aircraft delivery to the warfighter by about 1.5 years. DOD testing officials still report performance issues with the aircraft, resulting in it being rated as partially mission capable. 
The performance issues involve the aircraft’s ability to meet its airdrop operations requirements, its effectiveness in non-permissive threat environments, and maintainability issues. Program officials plan to address the deficiencies as part of a C-130J modernization effort. As we said at the beginning, our work shows that acquisition problems will likely persist until DOD provides a better foundation for buying the right things the right way. This involves making tough tradeoff decisions as to which programs should be pursued, and, more importantly, not pursued, making sure programs are executable, establishing and locking in needed requirements before programs are ever started, and making it clear who is responsible for what and holding people accountable when these responsibilities are not fulfilled. Recognizing this, DOD has tried to embrace best practices in its policies, as well as taking many other actions. However, DOD still has trouble distinguishing between wants and needs. Because of our concerns about the analyses done for both the MCS, which has broad implications for DOD’s mobility needs, and the KC-X tanker requirements, we would urge Congress and other decision makers to exercise caution when making airlift and tanker investment decisions. DOD will continue to face challenges in modernizing its forces with new demands on the federal dollar created by changing world conditions. Consequently, it is incumbent upon DOD to find and adopt best product development practices that can allow it to manage its weapon system programs in the most efficient and effective way. Success over the long term will depend on following knowledge-based acquisition practices as well as DOD leadership’s commitment to improving outcomes. The four acquisition cases we cite in this testimony are not atypical of DOD programs. Even with no major technological invention necessary to meet the warfighter’s needs in these cases, acquisition outcomes are not good. 
There are consequences to these outcomes. The warfighter does not receive needed capability on time and the Department and Congress must spend additional unplanned money to correct mistakes—an expense they can ill afford. A knowledge-based product development process steeped in best practices from systems engineering can solve many of these problems before they start. DOD knows how to do this and, in fact, informs its acquisition policy with systems engineering rules. It should redouble its efforts to drive these policies into practice. Mr. Chairman and members of the Subcommittee, this concludes our prepared statement. We would be pleased to answer any questions you may have. For further information about this statement, please contact William M. Solis at 202-512-8365 or [email protected] or Michael J. Sullivan at 202-512-4841 or [email protected]. Contact points for Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made major contributions to this testimony include Marie Ahearn, Ann Borseth, Cheryl Andrew, Claudia Dickey, Mike Hazard, Matthew Lea, Oscar Mardis, Sean Merrill, Karen Thornton, and Steve Woods.

Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006.

Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-06-391. Washington, D.C.: March 31, 2006.

DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005.

Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.

Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.

Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. 
Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000.

Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999.

Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) has continuing efforts to modernize its airlift and tanker fleets by investing billions of dollars to modify legacy airlift systems, such as the C-5 and C-130, and procure new aircraft, such as a tanker replacement. Acquisition has been on GAO's list of high-risk areas since 1990. GAO has reported that elements contributing to a sound business case for an acquisition are missing or incomplete as DOD and the services attempt to acquire new capabilities. Those elements include firm requirements, mature technologies, a knowledge-based acquisition strategy, a realistic cost estimate, and sufficient funding. Acquisition problems, including failure to limit cost growth, schedule delays, and quantity reductions, persist, but fiscal realities will not allow budgets to accommodate these problems any longer. Today's testimony addresses (1) the analyses supporting DOD's mobility capabilities and requirements and (2) actions that are needed to improve the outcomes of weapon system acquisitions. For this testimony, GAO drew from issued reports, containing statements of the scope and methodology used, as well as recently completed work not yet reported. GAO's work was performed in accordance with generally accepted government auditing standards. Past GAO reports, including two recently issued, raise concerns about the quality of analyses underpinning the programmatic decision-making surrounding DOD's airlift requirements. In September 2006, GAO issued its report (GAO-06-938) on DOD's Mobility Capabilities Study (MCS). The MCS determined that the projected mobility capabilities are adequate to achieve U.S. objectives with an acceptable level of risk during the period from fiscal years 2007 through 2013; that is, the current U.S. inventory of aircraft, ships, prepositioned assets, and other capabilities is sufficient, in conjunction with host nation support. 
GAO's report stated that conclusions of the MCS were based on incomplete data and inadequate modeling and metrics that did not fully measure stress on the transportation system. GAO further observed that the MCS results were incomplete, unclear, or contingent on further study, making it difficult to identify findings and evaluate evidence. It was not clear how the analyses done for the study support DOD's conclusions, and GAO suggested that decision makers exercise caution in using the results of this study to make programmatic decisions. In March 2007, GAO reported (GAO-07-367R) on the lack of mandatory analyses to support a passenger and cargo capability for the new replacement refueling aircraft, the KC-X tanker. Contrary to mandatory Air Force implementing guidance, the Air Force proposed a capability without analyses identifying an associated gap, shortfall, or redundancy. GAO believes that without sound analyses, the Air Force may be at risk of spending several billion dollars unnecessarily for a capability that may not be needed to meet a gap or shortfall and made recommendations to the Secretary of Defense that included conducting the required analyses necessary to establish capabilities. Successful acquisition programs make sound decisions based on critical product knowledge to ensure that program investments are getting promised returns--on-time delivery, within estimated costs, and with expected capabilities. However, GAO has shown in its work that DOD practices diverge from best development practices intended to produce good outcomes and, as a result, programs have experienced significant cost growth and schedule delays. DOD expects to invest over $12 billion in new and improved capabilities in four airlift programs discussed in this testimony between now and 2013--C-5 Avionics Modernization Program, C-5 Reliability Enhancement and Reengining Program, C-130 Avionics Modernization Program, and the C-130J acquisition program. 
GAO found that all four programs failed at basic systems engineering practices to (1) fully analyze the resources needed to integrate proven commercial technologies, (2) achieve a stable design before beginning system demonstration, and (3) demonstrate the aircraft would work as required before making large production investments.
In 2007, VA established the VCL, a 24-hour crisis line staffed by responders trained to assist veterans in emotional crisis. Through an interagency agreement, VA collaborated with SAMHSA to use a single, national toll-free number for crisis calls that serves both Lifeline and the VCL. Through this interagency agreement, VA and SAMHSA set out to establish a seamless crisis-management system through a collaborative and cooperative relationship between the agencies that would provide consistent suicide-prevention techniques to callers. The national toll-free number presents callers with choices. Callers are greeted by a recorded message that explains the function of the crisis line and prompts individuals to press “1” to reach the VCL. Callers who do not press “1” by the end of the message are routed to one of Lifeline’s 164 local crisis centers. All callers who press “1” are routed first to the VCL primary center. Calls that are not answered at the VCL primary center within 30 seconds of the time that the caller presses “1” during the Lifeline greeting are automatically routed to one of five VCL backup call centers. If a call is not answered by the VCL backup call center that initially receives it, the call may be sent to another VCL backup call center. VA entered into a contract with a firm to oversee the operations of the VCL backup call centers. There are a total of 164 Lifeline local crisis centers, 5 of which also serve the VCL. (See fig. 1.) The number of calls reaching the VCL has increased substantially since the VCL’s first full year of operation. Increases in the number of VCL calls received have corresponded with increased annual funding obligations for the VCL. (See fig. 2.) VA added online chat and text message capabilities to the VCL in fiscal years 2009 and 2012, respectively. The number of online chats and text messages handled by the VCL generally increased every year, though the number of online chats decreased in fiscal year 2015. (See fig. 3.) 
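The call-routing rules described above can be expressed as a simple decision function. The sketch below is purely illustrative; the function, parameter names, and return labels are hypothetical and are not drawn from VA's or SAMHSA's actual telephony system, but the routing logic follows the text: callers who press "1" go to the VCL primary center, calls unanswered there within 30 seconds roll to one of five backup call centers, and an unanswered call at a backup center may be sent to another backup center.

```python
# Illustrative sketch of the VCL call-routing rules described in the text.
# All names are hypothetical; only the routing logic comes from the report.

LIFELINE_LOCAL_CENTERS = 164  # Lifeline local crisis centers
VCL_BACKUP_CENTERS = 5        # VCL backup call centers

def route_call(pressed_one: bool, primary_wait_seconds: float,
               first_backup_answered: bool) -> str:
    if not pressed_one:
        # Caller did not press "1" by the end of the greeting.
        return "lifeline_local_crisis_center"
    if primary_wait_seconds <= 30:
        # Answered at the VCL primary center within the 30-second window.
        return "vcl_primary_center"
    if first_backup_answered:
        # Automatically routed to one of the five backup call centers.
        return "vcl_backup_center"
    # Unanswered at the first backup center; sent to another backup center.
    return "vcl_backup_center_secondary"

print(route_call(True, 12, False))  # prints vcl_primary_center
```

The 30-second threshold in the sketch is the automatic rollover point the report describes, which is also why VA's performance goal (90 percent of calls answered at the primary center within 30 seconds) is effectively a goal about how many calls avoid the backup centers.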
To determine how well VA performed against its goal for responding to VCL callers, we covertly tested the VCL’s call response time in July and August 2015. During this testing we found that it was uncommon for VCL callers to wait an extended period before reaching a responder since all of our calls that reached the VCL were answered in less than 4 minutes. According to VA officials, VA established a goal that the VCL primary center would answer 90 percent of calls to the VCL within 30 seconds. Our test included a generalizable sample of 119 test calls that can be used to describe all callers’ wait times when calling the VCL during this period. On the basis of our test calls, we estimate that during July and August 2015 about 73 percent of all VCL calls were answered at the VCL primary center within 30 seconds. (See fig. 4.) VA officials told us that, during fiscal year 2015, about 65 to 75 percent of VCL calls were answered at the VCL primary center and about 25 to 35 percent of VCL calls were answered at the backup call centers. These VA-reported results indicate that about 65 to 75 percent of VCL calls were answered within either 30 or 60 seconds. These results are consistent with our test results for July and August 2015. According to VA officials, VA attempts to maximize the percentage of calls answered at the VCL primary center because these responders have additional resources— including access to veterans’ VA electronic medical records—that are unavailable to VCL backup call center responders. All responders receive specialized training to assist callers in crisis. To improve its performance toward meeting the goal of answering 90 percent of calls at the VCL primary center within 30 seconds, VA implemented two changes in fiscal year 2015—namely, staggered work shifts for responders and new call-handling procedures. Staggered work shifts. VA implemented staggered shifts for responders at the VCL primary center on September 6, 2015. 
Staggered shifts are work schedules that allow employees to start and stop their shifts at different times as a way to ensure better coverage during peak calling periods. Specifically, staggered shifts help schedule more employees to work when call volume is highest and fewer employees to work when call volume is lowest. Additionally, staggered shifts help limit disruptions in service as responders begin and end their shifts. By comparing VCL telephone call data from September through December of 2014 to that of September through December of 2015, we found that VA's implementation of staggered shifts at the VCL primary center had mixed results.

Overall: The average percentage of calls answered per hour at the VCL primary center from September through December 2015—after staggered shifts were implemented—was 75 percent, slightly less than the average of 79 percent answered during the corresponding period in 2014 before staggered shifts were implemented. However, the VCL received an average of about 1.3 more calls per hour during this period in 2015 than it received during the corresponding period in 2014 and, according to VA officials, the VCL primary center employed fewer responder staff in 2015 than in 2014.

By day of the week: The average percentage of calls answered per hour at the VCL primary center increased on Mondays to 89 percent and Tuesdays to 83 percent after VA implemented staggered shifts, up from 78 percent and 79 percent, respectively, during the corresponding period in 2014. These increases suggest that staggered shifts may have helped VA answer more calls at the VCL primary center on these days because VCL call data from our analysis indicate that these days of the week typically experienced the highest number of calls prior to implementing staggered shifts, and VA officials told us that they used the implementation of staggered shifts to schedule more responders on these days. 
However, the average percentage of calls answered per hour at the VCL primary center decreased on Saturdays to 61 percent and Sundays to 70 percent after VA implemented staggered shifts, down from 78 percent and 80 percent, respectively, during the corresponding period in 2014. By hours of the day: VA answered a higher percentage of calls at the VCL primary center during the mid-day and evening hours after the implementation of staggered shifts. Specifically, from 11:00 a.m. to 4:00 p.m. and from 9:00 p.m. to 11:00 p.m. the VCL primary center answered a higher percentage of calls compared with the corresponding periods in 2014. However, VA answered a lower percentage of calls at the VCL primary center during overnight hours—midnight to 9:00 a.m.—and in the early evening—5:00 p.m. to 8:00 p.m.—compared with the corresponding periods in 2014. To address staffing limitations and align the number of responders available for each staggered shift according to demand, VA officials said that VA planned to hire 63 additional responders for the VCL primary center in fiscal year 2016 and assign these new responders to weekend and evening shifts. This change would likely help improve the mixed results we identified in our analysis of VA's implementation of staggered shifts for responders. As of February 2016, VA officials said that 22 applicants had accepted employment offers and that VA planned to extend employment offers to an additional 15 applicants. These officials also noted that recent attrition at the VCL primary center was largely due to responders being promoted into new positions at the VCL primary center or leaving because their work with the VCL did not qualify as clinical hours required for licensure in their specialties. New call-handling procedures.
VA implemented new call-handling procedures at the VCL primary center beginning in June 2015 that provided responders with specific guidance to more efficiently handle "noncore" callers—those callers who were not seeking crisis assistance but rather seeking help with other issues, such as help with veterans' benefits questions. For example, if a caller reached the VCL with a question about VA disability benefits, the VCL primary center responder would verify that the caller was not in crisis and transfer the caller to the Veterans Benefits Administration to address the question. VCL telephone call data provided by VA suggest that the average time VCL primary center responders spend handling noncore calls decreased by approximately 30 percent over a 5-month period after responder training began on these new call-handling procedures. We would expect that as the average time VCL primary center responders spend handling noncore calls decreases, these responders should have more time available to answer additional incoming calls. To determine the timeliness of the VCL's responses to online chats and text messages, we conducted a covert test in July and August 2015 using nongeneralizable samples of 15 online chats and 14 text messages. All 15 of our test online chats received responses within 60 seconds, 13 of which were within 30 seconds. This result is consistent with VA data that indicate VCL responders sent responses to over 99 percent of online chat requests within 1 minute in fiscal years 2014 and 2015. VA officials said that all online chats are expected to be answered immediately. Although this is an expectation, VA does not yet have formal performance standards for how quickly responders should answer online chat requests and expects to develop them before the end of fiscal year 2016. However, our tests of text messages revealed a potential area of concern. Four of our 14 test text messages did not receive a response from the VCL.
Of the remaining 10 test text messages, 8 received responses within 2 minutes, and 2 received responses within 5 minutes. VA officials stated that text messages are expected to be answered immediately, but, as with online chats, VA has not yet developed formal performance standards for how quickly responders should answer text messages. VA data indicate that VCL responders sent responses to 87 percent of text messages within 2 minutes of initiation of the conversation in both fiscal years 2014 and 2015. VA officials said that VA plans to establish performance standards for answering text messages before the end of fiscal year 2016. VA officials noted and we observed during a site visit that some incoming texts were abusive in nature or were not related to a crisis situation. According to VA officials, in these situations, if this is the only text message waiting for a response, a VCL responder will send a response immediately. However, if other text messages are awaiting responses, VA will triage these text messages and reply to those with indications of crisis first. This triage process may have contributed to the number of our test texts that did not receive responses within 2 minutes. The VCL's text messaging service provider offered several reasons for the possible nonresponses that we encountered in our test results. These included: (1) incompatibilities between some devices used to send text messages to the VCL and the software VA used to process the text messages, (2) occasional software malfunctions that freeze the text messaging interface at the VCL primary center, (3) inaudible audio prompts used to alert VCL primary center responders of incoming text messages, (4) attempts by malicious actors to disrupt the VCL's text messaging service by overloading the system with a large number of texts, and (5) incompatibilities between the web browsers used by the VCL primary center and the text messaging software.
VA officials told us that they do not monitor and test the timeliness and performance of the VCL text messaging system, but rather rely solely on the VCL’s text messaging service provider for such monitoring and testing. They said that the provider had not reported any issues with this system. According to the provider, no routine testing of the VCL’s text messaging system is conducted. Standards for internal control in the federal government state that ongoing monitoring should occur in the course of normal operations, be performed continually, and be ingrained in the agency’s operations. Without routinely testing its text messaging system, or ensuring that its provider tests the system, VA cannot ensure that it is identifying limitations with its text messaging service and resolving them to provide consistent, reliable service to veterans. VA has sought to enhance its capabilities for overseeing VCL primary center operations through a number of activities—including establishing a call center evaluation team, implementing revised performance standards for VCL primary center responders, implementing silent monitoring of VCL primary center responders, and analyzing VCL caller complaints. Establishment of a call center evaluation team. In October 2013, VA established a permanent VCL call center evaluation team that is responsible for monitoring the performance of the VCL primary center. The call center evaluation team analyzes VCL data, including information on the number of calls received and the number of calls routed to backup call centers from the primary center. VA officials told us that they use these data to inform management decisions about VCL operations. For example, these data were used as part of its decision to implement staggered shifts for VCL primary center responders in an attempt to increase the number of calls answered at the VCL primary center. Implementation of revised performance standards for VCL primary center responders. 
In October 2015, VA implemented new performance standards for all VCL primary center responders that will be used to assess their performance in fiscal year 2016. According to VA officials, these performance standards include several measures of responder performance—such as demonstrating crisis-intervention skills, identifying callers' needs, and addressing those needs in an appropriate manner using VA-approved resources. VA officials told us that by the summer of 2016 VCL primary center supervisors will have access to real-time information on VCL primary center responders' performance against these standards and can track their workload and performance periodically. These officials explained that they anticipate these performance standards will be reviewed and revised as needed for the fiscal year 2017 performance year. Silent monitoring of VCL primary center responders. In February 2016, VA officials reported that they were beginning silent monitoring of all VCL responders using recently developed standard operating procedures, standard data collection forms, and standard feedback protocols. These officials explained that the VCL primary center silent monitoring would begin in mid-February 2016 with four VA medical center–based suicide-prevention coordinators completing silent monitoring of 15 to 20 calls a week to the VCL primary center through March 2016. These officials explained that six full-time silent monitors had been hired as part of the VCL quality assurance staff and would begin conducting silent monitoring of VCL primary center calls in April 2016 once their training had been completed. During the initial rollout, the four VA medical center–based suicide-prevention coordinators will remotely access VCL primary center calls, complete the standard data collection form, and send the information to the observed VCL primary center responders' supervisors for feedback delivery.
Once the six full-time silent monitors begin completing these activities, they will complete all call monitoring and deliver feedback to VCL primary center responders and will coordinate with VCL primary center supervisors on an as-needed basis. VA officials explained in February 2016 that they were unsure how many VCL primary center calls these six full-time silent monitors would be able to observe and will clarify this expectation once these silent monitors begin their duties in April 2016. Analysis of VCL caller complaints. In October 2014, VA created a mechanism for tracking complaints it receives from VCL callers and external parties, such as members of Congress and veterans, about the performance of the VCL primary and backup call centers. Complaints can be about services provided by either the VCL primary center or one of the VCL backup call centers. In fiscal year 2015, the VCL received over 200 complaints from veterans and others regarding call center operations. These complaints included issues with VCL primary center and backup call center customer service and wait times to reach a responder. According to VA officials, each complaint is investigated to validate its legitimacy and determine the cause of any confirmed performance concerns. This validation process includes speaking with the complainant and VA staff, as applicable. The results and disposition of each complaint are documented in VA's complaint tracking database. For complaints that include details on specific responders, VA officials told us that they investigate complaints and use legitimate complaints as part of the performance evaluation process for VCL primary center responders. Specifically, these officials explained that when a complaint about a VCL primary center responder's customer service is verified as accurate by a VA psychologist or supervisor after it is investigated, it can affect a VCL primary center responder's annual performance appraisal.
The investigation process also includes verifying any associated documentation of the activities at the source of the complaint. In 2011, VA established key performance indicators to evaluate VCL primary center operations; however, we found these indicators did not have established measurable targets or time frames for their completion. VCL key performance indicators lack measurable targets. We found that VA's list of VCL key performance indicators did not include information on the targets the department had established to indicate their successful achievement. For example, VA included a key performance indicator for the percentage of calls answered by the VCL in this list but did not include information on what results would indicate success for (1) the VCL as a whole, (2) the VCL primary center, or (3) the VCL backup call centers. As another example, VA did not establish targets for the percentage of calls abandoned by callers prior to speaking with VCL responders. Measurable targets should include a clearly stated minimum performance target and a clearly stated ideal performance target. These targets should be quantifiable or otherwise measurable and indicate how well or at what level an agency or one of its components aspires to perform. Such measurable targets are important for ensuring that the VCL call center evaluation team can effectively measure VCL performance. VCL key performance indicators lack time frames for their completion. We found that VA's list of VCL key performance indicators did not include information on when the department expected the VCL to complete or meet the action covered by each key performance indicator. For example, for VA's key performance indicator for the percentage of calls answered by the VCL, the department did not include a date by which it would expect the VCL to complete this action.
As another example, VA did not establish dates by which it would meet targets yet to be established for the percentage of calls abandoned by callers prior to speaking with VCL responders. Time frames for action are a required element of performance indicators and are important to ensure that agencies can track their progress and prioritize goals. Guidance provided by the Office of Management and Budget states that performance goals—similar to VA’s key performance indicators for the VCL—should include three elements: (1) a performance indicator, which is how the agency will track progress; (2) a target; and (3) a period. VA officials reported that they are currently implementing a comprehensive process improvement plan, discussed later in this report, that will help ensure the right structures and processes are in place, which they believe are logical precursors to examining VCL outcomes and establishing targets and time frames for performance indicators. Without establishing targets and time frames for the successful completion of its key performance indicators for the VCL, VA cannot effectively track and publicly report progress or results for its key performance indicators for accountability purposes. VA’s backup call coverage contract, awarded in October 2012 and in place at the time of our review, did not include detailed performance requirements in several key areas for the VCL backup call centers. Clear performance requirements for VCL backup call centers are important for defining VA’s expectations of these service partners. However, VA has taken steps to strengthen the performance requirements of this contract by modifying it in March 2015 and beginning the process of replacing it with a new contract. October 2012 backup call coverage contract. This contract provided a network of Lifeline local crisis centers that could serve as VCL backup call centers managed by a contractor. 
This contractor was responsible for overseeing and coordinating the services of VCL backup call centers that answer overflow calls from the VCL primary center. This contract as initially awarded included few details on the performance requirements for VCL backup call centers. For example, the contract did not include any information on several key aspects of VCL backup call center performance, including: (1) the percentage of VCL calls routed to each VCL backup call center that should be answered, (2) VA’s expectations on whether or not VCL backup call centers could use voice answering systems or caller queues for VCL calls, and (3) VA’s documentation requirements for VCL calls answered at the VCL backup call centers. Detailed performance requirements on these key aspects of VCL backup call center performance are necessary for VA to effectively oversee the performance of the contractor and the VCL backup call centers. By not specifying performance requirements for the contractor on these key performance issues, VA missed the opportunity to validate contractor and VCL backup call center performance and mitigate weaknesses in VCL call response. For example, representatives from one VCL backup call center provided data that showed that the backup call center answered about 50 percent of the VCL calls it received. However, without a performance requirement establishing a standard for the percentage of calls each VCL backup call center should answer, VA could not determine whether this was acceptable performance for a VCL backup call center. As of December 2015, this VCL backup call center reported that it had improved its performance and answered about 66 percent of calls it received from July 2015 to December 2015. VA officials told us about several concerns with the performance of the backup call centers operating under the October 2012 contract based on their own observations and complaints reported to the VCL. 
These concerns included the inconsistency and incompleteness of VCL backup call centers’ responses to VCL callers, limited or missing documentation from records of VCL calls answered by VCL backup call center responders, limited information provided to VA that could be used to track VCL backup call center performance, and the use of voice answering systems or extended queues for VCL callers reaching some VCL backup call centers. For example, VA officials reported that some veterans did not receive complete suicide assessments when their calls were answered at VCL backup call centers. In addition, VA officials noted that they had observed some VCL backup call centers failing to follow VCL procedures, such as not calling a veteran who may be in crisis when a third-party caller requested that the responder contact the veteran. According to VA officials, these issues led to additional work for the VCL primary center, including staffing one to two responders per shift to review the call records submitted to the VCL primary center by backup call centers and to determine whether these calls required additional follow-up from the VCL primary center. These officials estimated that 25 to 30 percent of backup call center call records warranted additional follow-up to the caller from a VCL primary center responder, including approximately 5 percent of backup call center call records that needed to be completely reworked by a VCL primary center responder. March 2015 backup call coverage contract modification. Given these concerns, in March 2015 VA modified the October 2012 backup call coverage contract to add more explicit performance requirements for its backup call coverage contractor, which likely took effect more quickly than if the department had waited for a new contract to be awarded. 
These modified requirements included (1) the establishment of a 24-hours-a-day, 7-days-a-week contractor-staffed emergency support line that VCL backup call centers could use to report problems, (2) a prohibition on VCL backup call centers' use of voice answering systems, (3) a prohibition on VCL backup call centers placing VCL callers on hold before a responder conducted a risk assessment, (4) documentation of each VCL caller's suicide risk assessment results, and (5) transmission of records for all VCL calls to the VCL primary center within 30 minutes of the call's conclusion. Development of new backup call coverage contract. In July 2015, VA began the process of replacing its backup call coverage contract by publishing a notice to solicit information from prospective contractors on their capability to satisfy the draft contract terms for the new contract; this new backup call coverage contract was awarded in April 2016. We found that these new proposed contract terms included the same performance requirement modifications that were made in March 2015, as well as additional performance requirements and better data reporting from the contractor that could be used to improve VA's oversight of the VCL backup call centers. Specifically, the proposed contract terms added performance requirements to address VCL backup call center performance—including a requirement for 90 percent of VCL calls received by a VCL backup call center to be answered by a backup call center responder within 30 seconds and 100 percent to be answered by a backup call center responder within 2 minutes. In addition, the proposed contract terms include numerous data reporting requirements that could allow VA to routinely assess the performance of its VCL backup call centers and identify patterns of noncompliance with the contract's performance requirements more efficiently and effectively than under the prior contract.
The proposed terms for the new contract also state that VA will initially provide and approve all changes to training documentation and supporting materials provided to VCL backup call centers in order to promote the contractor's ability to provide the same level of service that is being provided by the VCL primary center. We found that when callers do not press "1" during the initial Lifeline greeting, their calls may take longer to answer than if the caller had pressed "1" and been routed to either the VCL primary center or a VCL backup call center. As previously discussed, VA and SAMHSA collaborated to link the toll-free numbers for both Lifeline and the VCL through an interagency agreement. The greeting instructs callers to press "1" to be connected to the VCL; if callers do not press "1," they will be routed to one of SAMHSA's 164 Lifeline local crisis centers. To mimic the experience of callers who do not press "1" to reach the VCL when prompted, in August 2015 we made 34 covert, nongeneralizable test calls to the national toll-free number that connects callers to both Lifeline and the VCL, and we did not press "1" to be directed to the VCL. For 23 of these 34 calls, our call was answered in 30 seconds or less. For 11 of these calls, we waited more than 30 seconds for a responder to answer—including 3 calls with wait times of 8, 9, and 18 minutes. Additionally, one of our test calls did not go through, and during another test call we were asked if we were safe and able to hold. VA's policy prohibits VCL responders from placing callers on hold prior to completing a suicide assessment; Lifeline has its own policies and procedures.
According to officials and representatives from VA, SAMHSA, and the VCL backup call centers, as well as our experience making test calls where we did not press "1," there are several reasons why a veteran may not press "1" to be routed to the VCL, including an intentional desire to not connect with VA, failure to recognize the prompt to press "1" to be directed to the VCL, waiting too long to respond to the prompt to press "1" to be directed to the VCL, or calling from a rotary telephone that does not allow the caller to press "1" when prompted. VA officials said they had not estimated the extent to which veterans intending to reach the VCL did not press "1" during the Lifeline greeting. These officials explained that their focus has been on ensuring that veterans who did reach the VCL received appropriate service from the VCL primary center and backup call centers. In addition, SAMHSA officials said that they also do not collect this information. These officials reported that SAMHSA does not require the collection of demographic information, including veteran status, for a local crisis center to participate in the Lifeline network. However, they noted that SAMHSA could request, through its grantee that administers the Lifeline network, that local crisis centers conduct a one-time collection of information to help determine how often and why veterans reach Lifeline local crisis centers. SAMHSA officials explained that they could work with the Lifeline grantee to explore optimal ways of collecting this information that would be (1) clinically appropriate, (2) a minimal burden to callers and Lifeline's local crisis centers, and (3) in compliance with the Office of Management and Budget's paperwork reduction and information collection policies. The interagency agreement between VA and SAMHSA assigns SAMHSA responsibilities for monitoring the use of the national toll-free number—1-800-273-TALK (8255)—that is used to direct callers to both the VCL and Lifeline.
These responsibilities include monitoring the use of the line, analyzing trends, and providing recommendations about projected needs and technical modifications needed to meet these projected needs. Using the information collected from the Lifeline local crisis centers on how often and why veterans reach Lifeline, as opposed to the VCL, VA and SAMHSA officials could then assess whether the extent to which this occurs merits further review and action. Although the results of our test are not generalizable, substantial wait times for a few of our covert calls suggest that some callers may experience longer wait times to speak with a responder in the Lifeline network than they would in the VCL's network. Without collecting information to examine how often and why veterans do not press "1" when prompted to reach the VCL, VA and SAMHSA cannot determine the extent to which veterans reach the Lifeline network when intending to reach the VCL and may experience longer wait times as a result. In addition, limitations in information on how often and why this occurs do not allow VA and SAMHSA to determine whether or not they should collaborate on plans to address the underlying causes of veterans not reaching the VCL. Standards for internal control in the federal government state that information should be communicated both internally and externally to enable the agency to carry out its responsibilities. For external communications, management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. In June 2014, VA assessed the operational state of the VCL and, based on its findings, designed a performance-improvement plan that outlined actions to address problems VA identified regarding the VCL's workforce, processes, technology, and infrastructure.
To implement this plan, in March 2015 VA began a series of rapid process-improvement events, such as improvements to VCL primary center responder training, designed to solve problems identified by VCL staff and stakeholders with actions that could be implemented within 60 to 90 days. According to VA officials and documentation provided by these officials, these rapid process-improvement events led to several changes at the VCL primary center in 2015 and 2016. As we previously noted, these changes include implementation of staggered shifts; development of silent monitoring procedures, and the hiring of dedicated staff to complete this monitoring; and new call-handling procedures previously discussed. They also include some remaining follow-up activities, such as completing the implementation of remaining planned quality-assurance activities in fiscal year 2016. These measures—if fully implemented—represent positive steps to improve VCL operations. VA has developed additional plans to address other concerns with VCL operations. These plans address issues at the VCL primary center related to renovation of new space, upgrades to telecommunications, and the introduction of a caller queue. Renovation of new space for VCL primary center operations. We found that the VCL primary center responders are housed in two different buildings originally designed for patient care delivery. According to VA officials, these buildings do not reflect call center leading practices that recommend large, open rooms that provide supervisors greater access to the responders they oversee. However, in February 2016, VA officials reported that the department committed funding to relocate the VCL primary center operations to a renovated space on the VA medical center campus. The relocation is to be implemented in two phases. 
VA officials expect that the first phase, which includes moving administrative and monitoring staff, will be completed in June 2016; the second phase will relocate the rest of the VCL staff, including all responders. VA officials said they anticipate that the second phase will be completed in fiscal year 2017. VA officials told us that they plan on using the National Call Center-Health Resource Center's large open-space layout as a model in designing the VCL primary center's new space. According to VA officials, the National Call Center-Health Resource Center follows leading practices for call center operations as set by the International Call Management Institute. Upgrade of VCL primary center telecommunication infrastructure. VA officials told us that the VCL primary center uses the telephone infrastructure of the VA medical center rather than a separate telephone system that would be more conducive to operating a call center. According to a telephone infrastructure change justification that VA information-technology officials prepared, the VCL primary center's existing telephone system does not meet the requirements for operating a call center of its size. This documentation indicates that improvements are needed in several features of the VCL's existing telephone system—including call routing, call recording, data capture, and automatic callback. In February 2016, VA officials reported that planned improvements to the VCL primary center's telephone system would be implemented by June 2016; however, the VCL primary center will continue to operate using part of the VA medical center's telephone system. Introduction of VCL primary center caller queue. VA's evaluation of the VCL conducted in 2014 noted that a possible option for improving VCL call response would be to implement a queue at the VCL primary center that would allow callers to wait a longer period for a VCL primary center responder before being sent to a VCL backup call center.
Currently, VA allows VCL primary center responders 30 seconds to answer calls before routing them to VCL backup call centers for a response. In February 2016, VA officials told us that they are considering implementing this type of queue. According to these officials, they are considering allowing VCL calls to remain at the VCL primary center for up to 5 minutes and they explained that this 5-minute period was determined based on feedback they received from veterans on how long they would be willing to wait for a responder. These officials further explained that voice prompts would offer callers options as they waited in the queue to reach the next available VCL primary center responder or to be transferred to other VA call centers for concerns unrelated to crisis situations. The VCL plays an important role in providing a means by which veterans and those concerned about them can discuss unique challenges and crises they face, and provides a way to access VA’s mental health care services. However, the rapid growth of the VCL in recent years has coincided with operational and planning challenges that constrain its ability to serve veterans in crisis in a timely and effective manner. To its credit, VA has taken some interim but noteworthy steps to address these challenges. Building on these steps, VA and SAMHSA need to take additional actions to provide reasonable assurance that the VCL’s mission to serve veterans and others in crisis situations is met. As our testing demonstrates, VA has not yet achieved its goal of answering 90 percent of all VCL calls within 30 seconds at the VCL primary center, but its planned and recently implemented changes, such as staggered shifts and enhanced call-handling procedures, are intended to gain VA system efficiencies that will help the department meet its goal once additional responders are hired. 
However, VA has not applied the same level of attention to its text messaging service and does not regularly test the VCL’s text messaging system. Without doing so, VA cannot ensure that veterans are receiving timely responses from VCL responders to their text messages. In addition, while VA has taken a number of steps to improve its monitoring of the VCL, VA continues to experience challenges related to weaknesses in VCL key performance indicators—including a lack of measurable targets and time frames. If left unresolved, these weaknesses will likely have negative effects on VA’s ability to ensure the VCL is providing the best service possible to veterans. Despite efforts to coordinate the operations of the VCL and Lifeline through an interagency agreement, VA and SAMHSA have not collected information necessary to determine how often and why veterans intending to reach the VCL reach Lifeline instead. As a result, neither VA nor SAMHSA can assess the extent to which this occurs and the underlying causes that may need to be addressed. To improve the timeliness and quality of VCL responses to veterans and others, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following two actions: regularly test the VCL’s text messaging system to identify issues and correct them promptly; and document clearly stated and measurable targets and time frames for key performance indicators needed to assess VCL performance. 
We further recommend that under the applicable terms of their interagency agreement, the Secretary of Veterans Affairs and the Secretary of Health and Human Services direct the Under Secretary for Health and the Administrator of the Substance Abuse and Mental Health Services Administration (SAMHSA), respectively, to collaborate and take the following two actions: collect information on how often and why callers intending to reach the VCL instead reach Lifeline local crisis centers; and review the information collected and, if necessary, develop plans to address the underlying causes. We provided a draft of this report to VA and HHS for review and comment. In their written comments, summarized below and reprinted in appendixes II and III, both agencies concurred with our recommendations. VA and HHS described ongoing or planned actions and provided a timeline for addressing our recommendations. HHS also provided technical comments, which we incorporated as appropriate. In response to our first recommendation, to regularly test the VCL’s text messaging system to identify issues and correct them promptly, VA said that the VCL’s analytics department will develop and implement a more robust and proactive system to test the VCL’s text messaging service daily by July 2016. In the interim, VA stated that it has a process for identifying, addressing, and troubleshooting problems that uses e-mail templates to notify its contract text service provider of issues or errors that require troubleshooting. In response to our second recommendation, to document clearly stated and measurable targets and time frames for key performance indicators needed to assess VCL performance, VA said that it is in the process of developing a monthly scorecard with elements assessing call center, staffing, quality-assurance, and crisis-response metrics with specific performance targets. 
VA estimates that it will establish targets and time frames for its performance indicators by October 2016. In response to our third recommendation, to collaborate with SAMHSA to collect information on how often and why callers intending to reach the VCL instead reach Lifeline local crisis centers, VA said that the VCL’s newly formed Clinical Advisory Board would foster collaboration among experts and leverage their collective expertise to facilitate an improved experience for callers, greater operational efficiencies, and increased access to the VCL for veterans in crisis. VA noted that the Clinical Advisory Board includes members of SAMHSA, the VA Suicide Prevention Office, and other VA clinical offices. VA estimates that it will collect sufficient data, conduct a collaborative analysis with SAMHSA, and complete reporting to both agencies on this issue by October 2016. HHS said that in response to this recommendation it would review ways to collect data on callers intending to reach the VCL but instead reaching Lifeline local crisis centers. In response to our fourth recommendation, to collaborate with SAMHSA to review the information collected and, if necessary, develop plans to address the underlying causes for callers intending to reach the VCL instead reaching Lifeline local crisis centers, VA said that the Clinical Advisory Board referenced above would evaluate this issue as a standing agenda item in its monthly meetings. VA said that the Clinical Advisory Board would establish a baseline for how frequently this issue occurs, monitor reported complaints about the press “1” functionality, and provide us with data from Clinical Advisory Board meetings to demonstrate action taken toward implementing our recommendation. VA expects to complete these actions by January 2017. 
HHS said that in response to this recommendation it would review the data collected as described above and, if necessary, address the underlying causes as appropriate. These VA and HHS actions, if implemented effectively, would address the intent of our recommendations. In its technical comments, HHS emphasized the distinction between the Lifeline network and the VCL, noting that the two programs operate with different policies, procedures, and resources. We revised the draft to more clearly reflect this distinction. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Seto J. Bagdoyan at (202) 512-6722 or [email protected], or Randall B. Williamson at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the extent to which the Department of Veterans Affairs (VA) meets response-time goals for calls, online chats, and text messages received through the Veterans Crisis Line (VCL), we conducted several tests of VCL services during July and August 2015. These tests were designed to measure the timeliness of the VCL’s response to calls, online chats, and text messages. We conducted a covert test of the VCL’s call response time using a generalizable sample of 119 test calls placed in July and August 2015. 
To develop this generalizable sample, we interviewed VA officials with knowledge about VCL primary and backup call center operations; obtained the VCL primary center’s historical call volume data in hourly increments for fiscal year 2013 through the end of the second quarter of fiscal year 2015; and generated a schedule of days and times during which our test calls would be made. This test call schedule was created by dividing the 62-day sample period into 496 primary sampling units, which we defined as 3-hour blocks of time. We then defined secondary sample units as 10-minute increments within each 3-hour block of time and selected a stratified two-stage random cluster sample of 144 10-minute increments during which our test calls would be made. We selected the 144 10-minute increments by: (1) stratifying the primary sampling units into four strata—overnight, morning, afternoon, and evening—based on time of day; (2) identifying a stratified sample of 36 primary sampling units that were allocated across the four strata based on call volume and our available resources; and (3) randomly selecting four 10-minute increments from each selected primary sampling unit. The results of this test can be used to estimate all VCL callers’ wait times for July and August 2015. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (i.e., a margin of error of within plus or minus a certain number of percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Percentage estimates from our analysis included in this report have a margin of error of within plus or minus 9 percentage points at the 95 percent confidence level. 
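The two-stage stratified design described above can be sketched as follows. The stratum-to-block mapping and the per-stratum allocation of the 36 primary sampling units are illustrative assumptions; the report does not state the actual values, only that the allocation was based on call volume and available resources.

```python
import random

DAYS = 62                     # July-August 2015 sample period
BLOCKS_PER_DAY = 8            # eight 3-hour primary sampling units per day
PSUS = DAYS * BLOCKS_PER_DAY  # 496 primary sampling units in the frame

# Assumed assignment of the eight daily 3-hour blocks to the four strata.
strata = {"overnight": (0, 1), "morning": (2, 3),
          "afternoon": (4, 5), "evening": (6, 7)}
psus_by_stratum = {
    s: [d * BLOCKS_PER_DAY + b for d in range(DAYS) for b in blocks]
    for s, blocks in strata.items()
}

# Assumed allocation of the 36 selected PSUs across the strata.
allocation = {"overnight": 6, "morning": 10, "afternoon": 12, "evening": 8}

random.seed(1)
schedule = []
for stratum, n in allocation.items():
    for psu in random.sample(psus_by_stratum[stratum], n):   # stage 1
        for inc in random.sample(range(18), 4):              # stage 2
            # each 3-hour block contains 18 possible 10-minute increments
            schedule.append((stratum, psu, inc))

print(PSUS, len(schedule))  # 496 PSUs in the frame, 144 scheduled calls
```

With 36 selected primary sampling units and four 10-minute increments drawn from each, the schedule contains the 144 test-call windows the report describes.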
Estimates of the median wait time have a margin of error of within plus or minus 10 percent at the 95 percent confidence level. When placing test calls, we used 20 telephone numbers with randomly selected area codes to mask the origin of the calls. Two analysts then independently measured and documented wait times by reviewing audio recordings of each test call. Wait times were measured as the elapsed time between when the caller pressed “1” to reach the VCL and when a responder answered the test call. The final wait time for each test call was the lower of the two wait times recorded by these analysts. We successfully completed and measured the wait times for 119 test calls in 30 of the 36 selected primary sampling units. We did not complete calls in 25 of our selected 10-minute increments due to technical or scheduling issues. The resulting completion rate for our test calls was 83 percent (119 out of 144). The omitted test calls were distributed across all four strata and were unrelated to the time of day. To test VA’s online chat and text message response timeliness, we reviewed VA’s procedures and training materials for operating both services. We then interviewed and observed VCL responders at the VCL primary center who responded to online chats and text messages. We also spoke with the VCL text messaging service provider to learn about the text messaging operations. To test the VCL’s online chat and text message response, we scheduled one covert test online chat or text message during each of the 30 primary sampling units used for the generalizable sample described above and recorded our wait times for a response. We measured the wait time for online chats and text messages as the elapsed time between when we sent the online chat or text message to the VCL and when we received a response from a responder. We initiated our test online chats through a link provided on the VCL’s website. 
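As a back-of-envelope check on the figures above, the completion rate and a simple-random-sample margin of error can be computed as follows. Note that the report's plus or minus 9 percentage points is larger than the naive figure below because GAO's estimate reflects the stratified cluster design, which this simple formula ignores.

```python
import math

completed = 119    # completed test calls
scheduled = 144    # scheduled 10-minute test-call windows
completion_rate = completed / scheduled   # about 83 percent

p = 0.73   # estimated share of calls answered within 30 seconds
z = 1.96   # 95 percent confidence multiplier

# Naive margin of error assuming a simple random sample; the report's
# +/- 9 points additionally accounts for the clustered, stratified design.
moe = z * math.sqrt(p * (1 - p) / completed)
print(f"{completion_rate:.0%} completed; SRS margin of error +/-{moe:.1%}")
```

The naive margin comes out around 8 percentage points, so the design effect from clustering adds roughly one point to the reported margin.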
We sent test text messages to the VCL through an Internet text messaging service provider in order to record our test data electronically. We removed one test text message attempt from the sample because of technical issues we experienced that may have prevented the message from reaching the VCL. As a result, our final samples consisted of 15 test online chats and 14 test text messages. We verified the reliability of VA’s reported VCL call data by interviewing officials responsible for managing them and reviewing reports that VA’s backup call coverage contractor provided to VA that documented the time, duration, and routing of every VCL call. The routing information included details on the call centers where each call was routed and identified the call center that ultimately answered each call. We were able to identify our test calls in these reports and confirmed that the data matched records we maintained for our test calls. This exercise also allowed us to confirm whether our test calls were answered at the VCL primary center or a VCL backup call center. On the basis of these actions, we found these data to be sufficiently reliable for the purposes of describing the quantity of requests for services reaching the VCL. We used these data to evaluate the timeliness of the VCL’s call response and compared the data to the department’s own goals. To assess the effectiveness of the implementation of staggered shifts for responders at the VCL primary center, we compared VCL call data from September 6, 2015, through December 31, 2015, to that of September 1, 2014, through December 31, 2014. We selected September 6, 2015, as the start date for our 2015 period of analysis because it was the first day that VA fully implemented staggered shifts at the VCL primary center. We chose the cutoff of December 31, 2015, because it corresponded to the most recent complete month of data available at the time of our analysis. 
We used call data from September 1, 2014, through December 31, 2014, because they reflected a comparable period from the year prior. We used these 2014 data as a comparison group to account for any seasonality patterns, variations, or fluctuations that might affect the demand for VCL services within a particular season, day of the week, or other periods. Our evaluation compared the average hourly call response percentages of the periods we examined and included analysis by time of day using hourly intervals, day of the week, and holidays within each period. The average hourly response percentages are likely affected by several factors—such as call volume, staffing levels, and complexity of calls—for which we did not control. Our analysis examined differences by day of the week, time of day, and holidays, but did not control for the above-mentioned or other factors that may affect the percentage of calls answered at the VCL primary center. To determine whether callers attempting to reach the VCL who did not press “1” experienced longer wait times than those who did, we conducted a nongeneralizable test. The VCL is accessed by calling a single national toll-free number—1-800-273-TALK (8255)—shared by both the VCL and the National Suicide Prevention Lifeline (Lifeline). This toll-free number is managed by the Substance Abuse and Mental Health Services Administration (SAMHSA). To conduct our nongeneralizable test, we placed a random sample of 34 covert test calls that mimicked the experience of VCL callers who do not follow the voice prompt’s instruction to press “1” to reach the VCL. To do this, we placed two test calls, during which we did not press “1” as prompted, during each of the scheduled primary sampling units in August 2015. 
We recorded the wait times for each of the 34 test calls by calculating the amount of time that elapsed between the moment that an automated message informed us that the call was being transferred to a Lifeline local crisis center and when a responder answered our call. We masked the origin of these calls in a manner similar to that described for our generalizable sample of 119 test calls placed to the VCL. Although the 34 test calls were randomly made, the results of these test calls are not generalizable due to the small number of calls included in our sample. In addition to the contacts named above, Gabrielle M. Fagan (Assistant Director), Marcia A. Mann (Assistant Director), James D. Ashley, Dean Campbell, Shaunessye D. Curry, Amber D. Gray, Katherine Nicole Laubacher, Olivia Lopez, Maria McMullen, Brynn P. Rovito, Amber H. Sinclair, and Shana B. Wallace made key contributions to this report. Members of our investigative staff also contributed to the report.
VA established the VCL in July 2007 to provide support to veterans in emotional crisis. Between fiscal years 2008, its first full year of operation, and 2015, the number of calls received by the VCL increased almost 700 percent, exceeding VA's expectations. As VA began to address increasing numbers of requests for assistance, reports of dissatisfaction with VCL's service periodically appeared in the media. GAO was asked to review VA's administration of the VCL. This report, among other issues, examines (1) the extent to which VA meets response-time goals for VCL calls and text messages, (2) how VA monitors VCL primary center call center operations, and (3) how VA works with VCL service partners to help ensure veterans receive high-quality service. GAO visited the VCL's primary center and two backup call centers; tested VCL response time through a generalizable sample of covert telephone calls and a nongeneralizable sample of text messages in July and August 2015; reviewed internal reports and policies and plans; and interviewed VA and SAMHSA officials. GAO found that the Department of Veterans Affairs (VA) did not meet its call response time goals for the Veterans Crisis Line (VCL), although extended call wait times were not common. VA's goal is to answer 90 percent of VCL calls at the VCL primary center within 30 seconds. Currently, calls not answered within 30 seconds route to VCL backup call centers; however, for 5 months of fiscal year 2015, calls were routed to VCL backup call centers after 60 seconds. VA officials told GAO that VA data show about 65 to 75 percent of VCL calls were answered at the VCL primary center in fiscal year 2015 within either 30 or 60 seconds. GAO's covert testing in July and August 2015 confirms VA's data. Specifically, 119 covert test calls show that an estimated 73 percent of calls made during this period were answered within 30 seconds. 
GAO also estimates that 99 percent of all VCL calls during this period were answered within 120 seconds. GAO also covertly tested the VCL's text messaging services and found that 4 of 14 GAO test text messages did not receive responses. VA officials said they do not monitor or test the timeliness and performance of the VCL text message system and instead rely solely on the VCL's text messaging provider for these functions. VA officials told GAO that the provider had not reported any issues with the system, but the provider told GAO that routine testing of the VCL system is not conducted. Without routinely testing its text messaging system or ensuring that its provider does so, VA cannot identify limitations to this service. While VA has taken a number of steps to improve its monitoring of the VCL primary center operations, VA has not developed measurable targets and time frames for its key performance indicators, such as the program's percentage of abandoned calls. VA established a permanent VCL call center evaluation team and created a mechanism for tracking complaints about the performance of the VCL primary center from VCL callers or external parties. However, GAO found that VA has not specified quantifiable or otherwise measurable targets and has not included dates for when it would expect the VCL to complete actions covered by each key performance indicator. This is inconsistent with guidance provided by the Office of Management and Budget. As a result, VA cannot ensure that the VCL is providing consistent, high-quality services to callers and cannot effectively track and publicly report progress or results. VA established an interagency agreement with its service partner, the Department of Health and Human Services' (HHS) Substance Abuse and Mental Health Services Administration (SAMHSA), to manage the shared operations of the VCL and the National Suicide Prevention Lifeline (Lifeline), which include a single national toll-free number used by both. 
Despite these efforts to coordinate, VA and SAMHSA do not collect information needed to assess how often and why callers intending to reach the VCL do not follow voice prompts and instead reach Lifeline local crisis centers. VA officials told GAO that the type of information that would be needed to do so is not collected because VA has focused on addressing the concerns of those callers who did reach the VCL. In addition, SAMHSA officials said that they do not require Lifeline local crisis centers to collect this type of information, noting that it would be possible to collect it. As a result, VA and SAMHSA do not know the extent to which this occurs and cannot determine the underlying causes that may need to be addressed. GAO recommends that VA regularly test VCL's text messaging system and document targets and time frames for key performance indicators. GAO also recommends that VA and SAMHSA collect information on how often and why callers reach Lifeline when intending to reach the VCL, review this information, and, if necessary, develop plans to address the causes. VA and HHS concurred with GAO's recommendations and described planned actions to address them.
Many individuals suffering from advanced chronic obstructive pulmonary disease or other respiratory and cardiac conditions are unable to meet their bodies’ oxygen needs through normal breathing. Supplemental oxygen has been shown to assist many of these patients and is considered a life-sustaining therapy. Physicians prescribe the volume of supplemental oxygen required in liters per minute, or liter flow. Medicare covers supplies and equipment necessary to provide supplemental oxygen if the beneficiary has (1) an appropriate diagnosis, such as chronic obstructive pulmonary disease; (2) reduced levels of oxygen in the blood, as documented with clinical tests; and (3) a physician’s certificate of medical necessity that documents that supplemental oxygen is required. There are three methods, or modalities, for the delivery of supplemental oxygen: oxygen concentrators, which are electrically operated machines about the size of a dehumidifier that extract oxygen from room air; liquid oxygen systems, which consist of both large stationary reservoirs and portable units; and compressed gas systems, which use tanks of various sizes, from large stationary cylinders to small portable cylinders. For most patients, each of the three modalities is equally effective for use as a stationary unit, and clinicians indicated that concentrators can meet the stationary oxygen needs of most patients. Oxygen concentrators account for about 89 percent of the stationary systems used by Medicare patients. Liquid oxygen systems account for about 11 percent of the stationary systems used by Medicare patients. Liquid oxygen systems are preferred by many pulmonologists and respiratory therapists for the less than 2 percent of patients who need a high liter flow—defined by Medicare as 4 or more liters of oxygen per minute. 
Liquid systems are also sometimes preferred by highly mobile patients because patients can refill lightweight portable liquid units directly from their home stationary reservoirs. Liquid oxygen is usually the most expensive modality for many reasons, including the cost of equipment and the need to use specially equipped delivery trucks, adhere to various regulatory requirements, and replenish a patient’s supply on a regular basis. Compressed gas accounts for less than 1 percent of the stationary systems used by Medicare patients. In addition to a stationary unit for use in the home, about 79 percent of Medicare home oxygen patients have portable units that allow them to perform activities away from their stationary unit and outside the home. The most common portable unit is a compressed gas E tank set on a small cart that can be pulled by the user. Pulmonologists and respiratory therapists advise that patients using supplemental oxygen get as much exercise as possible and believe that lightweight portable equipment can facilitate this activity. Such equipment options for active individuals include portable liquid oxygen units and lightweight gas cylinders, which can be carried in a backpack or shoulder bag. A recent technological improvement in the provision of oxygen is the use of conserving devices, which are more efficient in delivering oxygen and therefore maximize the time a lightweight gas cylinder can last. Without a conserving device, very small tanks only last between 1 and 2 hours at a flow rate of 2 liters per minute, making them impracticable for all but short trips away from home. However, not all patients who need lightweight equipment can use conserving devices. Pulmonary clinicians recommend that all patients be tested to ensure they are proper candidates for this technology, since some patients cannot maintain adequate blood oxygen levels when using conserving devices. 
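The tank-duration figures above follow from dividing cylinder capacity by the prescribed flow rate. The cylinder capacity and the conserving-device savings ratio in this sketch are assumed illustrative values; actual figures vary by tank model, device, and patient.

```python
capacity_liters = 170   # assumed capacity of a very small portable cylinder
flow_lpm = 2            # prescribed flow rate, liters per minute

# Continuous-flow duration falls in the 1-to-2-hour range cited above.
continuous_hours = capacity_liters / (flow_lpm * 60)

# A conserving device delivers oxygen only on inhalation; a 3:1 savings
# ratio (assumed here) stretches the same cylinder severalfold.
conserved_hours = continuous_hours * 3

print(f"continuous: {continuous_hours:.1f} h, conserved: {conserved_hours:.1f} h")
```

This is why, without a conserving device, very small tanks are practical only for short trips, while the same tanks with a conserving device can support several hours away from home for patients who tolerate the technology.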
In 1997, the monthly fee schedule allowance for a stationary oxygen system was about $300, and in 1998 the allowance was reduced to about $225. Medicare pays 80 percent of the allowance, and the patient is responsible for the remaining 20 percent. The Medicare oxygen allowance covers use of the equipment; all refills of gas or liquid oxygen; supplies such as tubing; and services such as equipment delivery and setup, training for patients and caregivers, periodic maintenance, and repairs. The Medicare monthly allowance for a portable unit was about $48 in 1997 and $36 in 1998. Medicare does not pay an additional allowance for a conserving device, but these devices can lower suppliers’ costs by reducing the frequency of deliveries to their patients. Regardless of the type of oxygen system supplied to a patient, Medicare pays a fixed monthly rate. This type of payment system is intended to give suppliers a financial incentive to lower their costs because they can keep the difference between their Medicare payments and their costs. Suppliers can reduce their costs in various ways, including streamlining operations or utilizing new technology to become more efficient, switching patients to less expensive modalities, and reducing the number or type of patient support services. Some of these approaches can reduce costs while maintaining the quality and adequacy of services. Others, however, could potentially compromise the effectiveness of home oxygen therapy for some Medicare beneficiaries. Most suppliers accept Medicare’s allowance as full payment for home oxygen equipment and file claims directly with the Medicare program through a process known as “assignment.” Suppliers do not have to accept assignment, however, and if they do not, there is no limit to the amount they can charge. 
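The cost-sharing arithmetic above works out as follows for a stationary system; the dollar amounts are the approximate allowances cited in the report.

```python
allowance_1997 = 300.0   # approximate monthly stationary allowance, 1997
allowance_1998 = 225.0   # approximate monthly stationary allowance, 1998

reduction = 1 - allowance_1998 / allowance_1997   # the 1998 payment cut
medicare_pays = 0.80 * allowance_1998             # Medicare's 80 percent share
patient_pays = 0.20 * allowance_1998              # patient's 20 percent share

print(f"cut {reduction:.0%}; Medicare ${medicare_pays:.0f}, patient ${patient_pays:.0f}")
```

The $300-to-$225 change is the 25-percent reduction discussed later in the report, leaving Medicare paying about $180 per month and the beneficiary responsible for about $45.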
The businesses that supply home oxygen to Medicare beneficiaries are diverse, varying in size from small companies run by one or two respiratory therapists to large publicly traded corporations with branches throughout the country. Home oxygen suppliers also include hospital affiliates, franchises, and nonprofit corporations. Some suppliers specialize in home oxygen and other respiratory services, others provide various types of medical equipment and services such as home infusion, and still others are part of a full-service pharmacy. Medicare is the single largest payer for home oxygen for most suppliers we met with, except those who specialize in VA and other large-volume contracts. Some states require that home oxygen suppliers be licensed and have respiratory therapists on staff, but others do not. Many suppliers are accredited by the Joint Commission for Accreditation of Healthcare Organizations, but this accreditation is not required by the Medicare program. Preliminary information indicates that access to home oxygen equipment remains largely unchanged, despite the 25-percent Medicare payment reduction that took effect in January 1998. Medicare claims data revealed little change in use patterns during the first 6 months after the January 1998 payment reduction, and virtually all oxygen suppliers continue to accept assignment for home oxygen. Some beneficiaries are expensive or difficult to serve because they live in rural areas served by few providers, require lightweight portable equipment, or require high-liter-flow liquid oxygen systems. These beneficiaries are, therefore, vulnerable to cutbacks by suppliers. Nevertheless, hospital discharge planners we interviewed said they can still arrange appropriate home oxygen equipment for most patients. In addition, we were told that, in general, the limitations on the availability of certain types of equipment that exist now were present before the payment reductions. 
Also, although there has been about a 6.5-percent decrease in the number of Medicare home oxygen suppliers, most Medicare patients can still choose from among competing firms. The full range of oxygen modalities continues to be available to Medicare beneficiaries, according to the Medicare claims reports, although oxygen concentrators predominate as the system most commonly provided for home oxygen. As the technology of concentrators continues to improve, oxygen concentrators have been slowly replacing stationary liquid systems. This trend is observed in the aggregate data, which show that claims for liquid stationary systems declined by approximately 12 percent between the first half of 1997 and the first half of 1998. During the same period, the use of portable liquid oxygen systems declined by 11 percent, even though the use of portable systems rose overall. (See table 1.) Another indication that home oxygen access has not been impaired is that the oxygen supplier assignment rates for all modalities have remained relatively unchanged since the 1998 payment reduction. In fact, the claims data show that assignment rates for home oxygen increased slightly between the first half of 1997 and the first half of 1998, leading us to conclude that the suppliers are willing to furnish home oxygen equipment and services even at the reduced rates. Although claims data for the first half of 1998 are not final, our claims data analysis from prior periods indicates that use rates established from preliminary data closely approximate the final results. However, subtle shifts in the kinds of oxygen equipment provided are not evident in aggregate claims data. For example, claims data do not identify the types of portable tanks provided to beneficiaries. Therefore, it is not possible to determine from the claims data how many beneficiaries are receiving lightweight portable tanks and how many are using the cart-mounted E tanks. 
Similarly, claims data do not indicate the number of refills provided to patients each month, so we could not determine if the frequency of tank refills has changed since the rate reduction. Overall, we found no evidence that home oxygen patients who are more expensive or difficult to serve—such as those who live in rural areas, need lightweight portable equipment, or require high-liter-flow systems—were adversely affected by the payment cuts. In response to the substantial payment reductions, suppliers could have been expected to try to reduce costs, making these higher-cost patients more vulnerable to treatment changes. Although we looked for indications that suppliers had refused to serve these special needs patients, limited the types of equipment made available, or reduced service levels, our interviews with suppliers, discharge planners, patient advocates, and physicians indicated that most Medicare beneficiaries continued to have access to appropriate equipment options. The only indication of access problems that we found occurred in Anchorage, Alaska, where pulmonary clinicians stated that liquid systems are no longer available on assignment to their Medicare patients. Beneficiaries in rural areas have always faced restrictions on home oxygen options, but their access, according to hospital discharge planners we interviewed, appears unchanged. These beneficiaries are more expensive to serve because they are farther from suppliers’ facilities and distances between patients are greater. Suppliers who serve patients in remote areas informed us that it is difficult to support the full range of equipment options because of such factors as vast distances, poor road conditions, and unpredictable weather but that this situation existed before the 1998 payment reductions. Several suppliers told us that they generally cannot provide liquid oxygen to people who live 40 to 60 miles from their facility. 
However, hospital discharge planners in New Mexico and South Dakota told us that the Medicare payment reduction has not affected their ability to arrange appropriate home oxygen services for their patients, even those who live in the most remote parts of those states. Another challenge in providing adequate options in rural areas is the number of suppliers and the degree of competition for patients. A patient who lives in an isolated South Dakota town may have only one or two suppliers to choose from. Thus, the need to maintain market share may not motivate suppliers in these areas to provide certain costlier equipment and services. In contrast, a representative of a major regional supplier in the Washington, D.C., area said that it had begun to evaluate patients more carefully before providing them liquid systems. Nevertheless, the supplier intended to keep liquid oxygen as an option to maintain positive relationships with referral sources, who can choose from numerous suppliers. Discharge planners in a hospital on Cape Cod, Massachusetts, told us they have not had any problems finding suppliers to take Medicare assignment on liquid oxygen for their patients because Boston and Providence are nearby, and there are many suppliers in the area. In many rural areas, the choice of home oxygen supplier is much more limited. Although the equipment and refill needs of highly mobile patients are more expensive to meet than those of relatively inactive patients, most discharge planners, pulmonary rehabilitation professionals, and suppliers we interviewed believe these patients’ needs are increasingly being met with lightweight, portable gas tanks with conserving devices. This relatively new technology can be less expensive than liquid units and, for patients who can tolerate an oxygen conserving device, still provide greater mobility than heavier gas tanks mounted on carts. 
We found no indication that patients who require a high-liter-flow system have less access to the proper equipment now than before the payment reduction, except in Alaska. High-liter-flow patients are more expensive to serve than other patients because they require more frequent deliveries of gas or liquid oxygen. The Medicare payment system recognizes that suppliers’ costs are higher for these patients and allows a 50-percent increase in the payment for a stationary unit for patients who require over 4 liters of oxygen per minute. Medicare does not reimburse suppliers separately for the portable unit if the high-liter-flow adjustment is paid, but many of the suppliers we met with agreed that the adjustment adequately compensated them for their added costs. Fewer than 2 percent of paid home oxygen claims were for high-liter-flow patients, which was consistent with information we received from clinicians. Though advances in technology have made oxygen concentrators more effective at delivering flow rates of up to 6 liters per minute, several pulmonologists and respiratory therapists we met with said that liquid oxygen is the preferred option for these patients. Even before the Medicare payment reductions, many suppliers were not providing liquid oxygen for high-liter-flow patients who lived far from their facilities. For these patients, suppliers sometimes provide a high-liter-flow concentrator, link two concentrators together to increase the overall liter flow, or supply compressed gas. The hospital discharge planners and suppliers we talked with said they were able to make arrangements with suppliers for all patients with high-liter-flow needs. In contrast to our findings looking at the country as a whole, we did identify concerns about lack of access to liquid oxygen systems in the Anchorage, Alaska, area. 
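The payment rules described above (a 50-percent increase on the stationary payment for prescriptions over 4 liters per minute, and no separate portable payment when that adjustment applies) reduce to a simple calculation. The fee amounts below are hypothetical placeholders, not actual Medicare rates:

```python
def monthly_allowed(stationary_fee: float, portable_fee: float,
                    liters_per_minute: float) -> float:
    """Monthly allowance under the rules described in the report: a
    50-percent add-on to the stationary fee for patients prescribed more
    than 4 liters per minute, with no separate portable payment when the
    high-liter-flow adjustment applies."""
    if liters_per_minute > 4:
        return stationary_fee * 1.5           # high-liter-flow adjustment
    return stationary_fee + portable_fee      # portable unit paid separately

# Hypothetical fee-schedule amounts for illustration only:
STATIONARY, PORTABLE = 200.0, 30.0
print(monthly_allowed(STATIONARY, PORTABLE, 2))   # 230.0
print(monthly_allowed(STATIONARY, PORTABLE, 5))   # 300.0
```

As the suppliers interviewed indicated, the 50-percent add-on (300.0 versus 230.0 in this sketch) is intended to cover the more frequent deliveries these patients require.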
According to the Pulmonary Education and Research Foundation, letters from Medicare beneficiaries, and interviews with a pulmonologist and respiratory therapists in Anchorage, since the Medicare payment reduction, no home oxygen suppliers there have been willing to accept Medicare assignment for liquid oxygen. While liquid oxygen systems had not generally been available in remote areas of Alaska, as in the remote parts of other states, at least one supplier was providing home liquid oxygen systems to patients in the Anchorage area on assignment before the payment reduction. After the payment reduction, the supplier replaced its liquid systems with concentrators for stationary units and either E tanks or lightweight gas tanks with conserving devices for portable use, depending on the patient’s activity level. For most patients, this was an acceptable alternative. However, some patients cannot tolerate the conserving devices or are unable to maneuver E tanks on carts, especially in the snow. Respiratory therapists in Anchorage informed us that some patients are now unable to leave their homes without help. Because there are no suppliers willing to take Medicare assignment for liquid oxygen, these patients have no other options for lightweight portable systems without incurring significant out-of-pocket costs. The mid-1990s was a period of expansion for the home oxygen industry, characterized by growth in the total number of home oxygen suppliers. This trend was reversed in 1998 after the lower Medicare payment rates took effect, as some supply companies merged or left the marketplace. Nevertheless, sufficient competition remained, providing most patients with a choice of suppliers. In addition to industry consolidation, suppliers have implemented a variety of strategies to improve the efficiency of operations and reduce costs. Overall, the number of Medicare home oxygen suppliers has declined by about 6.5 percent since the January 1998 payment reduction. 
The market share of the largest suppliers increased slightly from 40 percent in the first half of 1997 to 43 percent in the first half of 1998. (See table 2.) Many of the suppliers that have stopped submitting claims to Medicare for home oxygen had not previously offered the full range of home oxygen equipment options to beneficiaries but had supplied predominantly oxygen concentrators. In 1994, over 1,300 Medicare suppliers, or 22 percent, received at least 98 percent of their Medicare home oxygen revenues for concentrators and focused on serving the least costly patients. By the first half of 1998, this number had fallen to just over 1,000 firms. (See table 3.) When we asked suppliers how they have responded to the payment cuts, many said they have developed strategies to improve efficiency and maintain their profitability. These strategies include operational adjustments, such as making less frequent deliveries and service visits, purchasing more reliable equipment, reducing staff, and using fewer credentialed respiratory therapists. According to suppliers and industry representatives, some suppliers have reevaluated their product lines because, prior to the payment cuts, oxygen revenues had often subsidized less profitable medical equipment items. Other suppliers have switched patients from liquid oxygen to less expensive systems or are screening new patients more carefully before setting them up with a liquid unit. These strategies have left overall access to home oxygen equipment substantially the same, but they have changed the way that home oxygen equipment and services are provided to Medicare beneficiaries. Some suppliers we interviewed said they are maintaining their current levels of service, including providing a range of equipment options and using credentialed therapists for patient visits, for two reasons: their internal standards of patient care and their need to remain competitive with other suppliers. 
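The screen used above, identifying suppliers that received at least 98 percent of their Medicare home oxygen revenue from concentrators, can be expressed as a simple revenue-share test. The supplier names and revenue figures below are hypothetical:

```python
def concentrator_focused(revenues: dict, threshold: float = 0.98) -> bool:
    """True if at least `threshold` of a supplier's home oxygen revenue
    came from concentrators (the screen described in the report)."""
    total = sum(revenues.values())
    return total > 0 and revenues.get("concentrator", 0) / total >= threshold

# Hypothetical supplier revenue mixes, by modality:
suppliers = {
    "A": {"concentrator": 99_000, "liquid": 1_000},
    "B": {"concentrator": 60_000, "liquid": 30_000, "gas": 10_000},
}

focused = [name for name, rev in suppliers.items() if concentrator_focused(rev)]
print(focused)  # ['A']
```

Applied to claims data, this kind of screen yields the counts in table 3: supplier A would be classified as concentrator-focused, while supplier B, with a broader product mix, would not.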
Many other suppliers said that they have reviewed the services they provide to determine where to reduce costs. Their strategies include more completely assessing patients’ need for liquid oxygen, carefully planning delivery routes, calling patients in advance to find out what supplies they need, keeping their trucks stocked with supplies to avoid extra trips, and reducing the frequency of maintenance visits. There is also anecdotal evidence that some suppliers, contrary to Medicare rules, have refused to deliver portable tanks when patients need refills or have limited their patients to a fixed number of refills per month. We were unable to document these practices. One supplier we talked with conducted a review of patients already on liquid oxygen to determine who could be switched to concentrators and portable lightweight gas systems equipped with an oxygen conserving device. This supplier said he consulted every patient’s physician and obtained permission to make the equipment change. Further, the patients were tested to ensure that they were able to tolerate the new lightweight portable equipment. Other firms stated that while they will not change the oxygen delivery systems they are currently providing to patients, they will provide liquid systems to new patients only if they have high-liter-flow needs or if their ambulatory needs cannot be met with the compressed gas systems available. In a November 1997 report, we made several recommendations to HCFA about its implementation of the BBA provisions, including that it monitor trends in Medicare beneficiaries’ access to the various types of home oxygen equipment; restructure the modality-neutral payment, if warranted; educate prescribing physicians about their right to specify the home oxygen systems that best meet their patients’ needs; and establish service standards for home oxygen suppliers. HCFA has made only modest beginnings in addressing the BBA provisions and our recommendations. 
As required by the BBA, HCFA has contracted with a PRO to evaluate access to and quality of home oxygen equipment and services provided to Medicare patients. The PRO plans to gather evidence from various sources, including Medicare claims data on equipment use patterns, hospitalization rates, and utilization of home health services by home oxygen patients. An important component of this study will be a survey of beneficiaries, suppliers, and physicians. Changes in supplier practices will be an indicator of the impact of the payment reduction. The PRO will use this information to assess whether the payment reduction has affected the types of equipment and level of services provided to home oxygen patients. HCFA has not decided whether this will be a one-time assessment or an ongoing effort to monitor trends. Results from the PRO study are not expected until January 2000. The BBA gave HHS the authority to restructure the modality-neutral payment system for home oxygen, but HCFA has not established an ongoing process for monitoring access to determine if such a restructuring is warranted. HCFA officials said they will use the results of the PRO study and the competitive bidding demonstration project to evaluate the need to restructure the oxygen payment system. However, the PRO study will not be completed until at least January 2000, or 2 years after the first payment reduction, and neither project will provide HCFA information on access problems as they develop. HCFA has the ability to monitor access indicators but has not done so. For example, HCFA could ask its contractors to track beneficiary complaints, such as insufficient refills of portable tanks or, as occurred in Anchorage, problems with access to liquid oxygen systems. Although HCFA’s claims processing contractors can specially code and track beneficiary inquiries and complaints about specific equipment and services, such as home oxygen, HCFA has not asked them to do so. 
Prescribing physicians and patients could better help HCFA identify access problems if they were fully informed about the home oxygen benefit. Although HCFA is able to identify both groups from claims data, HCFA has not provided these groups with information about the Medicare payment cuts or encouraged them to report access problems. For example, the pulmonary physician and therapists at the Anchorage clinic we spoke with did not know what equipment and services the Medicare home oxygen benefit covers. The National Association for Medical Direction of Respiratory Care believes that HCFA has done little to help educate doctors about their options when prescribing home oxygen. Similarly, patients may be unaware that the Medicare allowance covers all their oxygen needs, including home delivery of equipment and needed refills of portable tanks. In contrast, many VA Medical Centers provide brochures to home oxygen patients outlining the responsibilities of both the patient and the supplier. Despite the BBA mandate and our recommendations and those of HHS’s Office of the Inspector General, HCFA has not developed service standards for oxygen suppliers beyond generic requirements for all durable medical equipment suppliers. In contrast, most VA and managed care contracts specifically define service requirements, such as the frequency of maintenance visits and the level of patient education. Service standards would define what Medicare is paying for and what beneficiaries should expect from suppliers. Standards are even more important as suppliers respond to reduced payment rates. One HCFA official told us that HCFA must address those BBA requirements that have specific target dates, as well as Year 2000 computer issues, before attending to our recommendations and those of the Office of the Inspector General. HCFA has developed a set of service standards that will apply only to home oxygen suppliers that participate in the competitive pricing demonstration project. 
HCFA officials informed us that they will consider the effectiveness of these standards in the development of service standards applicable to all home oxygen suppliers. However, some industry representatives have criticized the demonstration project standards as being too limited to ensure an acceptable level of service for home oxygen patients. Early evidence suggests that the reduction in Medicare payment rates for home oxygen has not had a major impact on access. Generally, the access problems that we found existed before the payment reductions occurred. The PRO study HCFA has contracted for will provide a more in-depth look at this issue. Suppliers are responding in various ways to the lower payment rates. Consolidation continues to occur in the home oxygen industry, leaving fewer small firms that do not provide a full range of oxygen services. Most companies have developed varying strategies to mitigate the impact of the payment reduction, including reevaluations of operations, which have led to increased operating efficiencies and changes in how suppliers provide their patients with equipment and services. Despite these early indications that access to home oxygen has not diminished since the implementation of the payment reductions, subtle access issues may not be readily apparent, and additional problems could emerge as more and better information becomes available. Given the importance of this benefit to some vulnerable Medicare beneficiaries, especially those who live in rural areas, are highly active, or require a high liter flow, HCFA needs to be vigilant in its efforts to detect any problems. Beyond contracting for the PRO study, HCFA has not established an ongoing method for monitoring the use of this benefit and gathering the information essential to assessments of the modality-neutral payment system. Nor has HCFA developed service standards for home oxygen suppliers as required by the BBA. 
The continued absence of specific service standards allows suppliers themselves to decide what services they will provide home oxygen patients. We recommend that the Administrator of HCFA do the following: monitor complaints about and analyze trends in Medicare beneficiaries’ use of and access to home oxygen equipment, paying special attention to patients who live in rural areas, are highly active, or require a high liter flow; on the basis of this ongoing review, as well as the results of the PRO study, consider whether to modify the Medicare payment method to preserve access; and make development of service standards for home oxygen suppliers an agency priority in accordance with the BBA’s requirement to develop such standards. We provided draft copies of this report to HCFA, representatives of the home oxygen industry, and officials of associations representing respiratory care specialists and physicians who treat patients with chronic lung disease. The reviewers suggested some technical corrections, which we incorporated into the report. Generally, HCFA agreed with the report’s contents and concurred with our recommendations. HCFA emphasized that it has contracted for the BBA-mandated PRO study, which it believes will provide an assessment of access to home oxygen equipment. In the interim, HCFA said it is relying on this report to alert the agency to any immediate access problems. Further, HCFA believes that the payment reduction will not disrupt patient access to the home oxygen benefit, given the previous excessive rates. In light of efforts to address the Year 2000 computer issues confronting the agency and its limited resources, HCFA felt it had adequately addressed the need to monitor access to the home oxygen benefit. HCFA acknowledged that it has not developed specific service standards for the home oxygen benefit as required by law. 
However, officials stated that the agency intends to publish new service standards applicable to all durable medical equipment suppliers in the next few months. After that, it plans to develop specific service standards for the home oxygen benefit. While we acknowledge the extent of HCFA’s responsibilities, we believe that waiting for the PRO study to evaluate access issues is not prudent, considering the life-sustaining nature of this benefit to its users. We believe that HCFA could take steps now, with a minimal expenditure of resources, that could not only supplement the results of the PRO study but also alert the agency to access problems before the PRO study is released. HCFA stated that it will have its regional offices and contractors monitor complaints regarding access to home oxygen. The full text of HCFA’s comments is included as an appendix. Industry representatives and directors of associations representing respiratory care specialists and physicians also generally agreed with the report’s contents. However, industry representatives believe that our definition of access to home oxygen equipment should include not only the equipment provided Medicare beneficiaries but also the types of services provided them and their frequency. These industry representatives are concerned that any service standards developed by HCFA will be inadequate to ensure an acceptable level of care. They believe that clinical studies of the effects of various services on patient outcomes are necessary to fully evaluate the impact of the payment reduction. They also believe that the cost savings resulting from the payment reduction for home oxygen could be offset by higher hospital readmissions or other services used by oxygen users. Finally, they stated that the full impact of the payment reduction has not yet been felt and that monitoring of access should continue. 
For the purposes of this report, we based our definition of access on the Medicare coverage guidelines for the home oxygen benefit. HCFA has not defined specific service standards for this benefit, and it would not be appropriate for us to expand HCFA’s current definition of what is covered by the home oxygen benefit. Further, while evaluating patient outcomes was beyond the scope of this report, the PRO study will include specific patient outcomes, such as hospital readmissions and use of home health services, in its evaluation. We are sending copies of this report to Ms. Nancy-Ann Min DeParle, Administrator, Health Care Financing Administration, and appropriate congressional committees. We will also make copies available to others upon request. This report was prepared by Anna Kelley, Frank Putallaz, and Suzanne Rubins under the direction of William Reis, Assistant Director. Please call Mr. Reis at (617) 565-7488 or me at (202) 512-7114 if you or your staff have any questions about the information in this report.
|
Pursuant to a legislative requirement, GAO provided information on Medicare beneficiaries' access to home oxygen equipment, focusing on: (1) changes in access to home oxygen for Medicare patients since the payment reduction mandated by the Balanced Budget Act (BBA) of 1997 took effect; and (2) actions taken by the Health Care Financing Administration (HCFA) to fulfill the BBA requirements and respond to GAO's November 1997 recommendations. GAO noted that: (1) preliminary indications are that access to home oxygen equipment remains substantially unchanged, despite the 25-percent reduction in Medicare payment rates that took effect in January 1998; (2) the number of Medicare beneficiaries using home oxygen equipment has been increasing steadily since 1996, and this trend appears to have continued in 1998; (3) while Medicare claims for the first 6 months of 1998 showed a decrease in the proportion of Medicare patients using the more costly stationary liquid oxygen systems, this decline was consistent with the trend since 1995; (4) hospital discharge planners and suppliers GAO talked with said that even Medicare beneficiaries who are expensive or difficult to serve are able to get the appropriate systems for their needs; (5) further, suppliers accepted the Medicare allowance as full payment for over 99 percent of the Medicare home oxygen claims filed for the first half of 1998; (6) although these indicators do not reveal access problems caused by the payment reductions, issues such as sufficiency of portable tank refills and equipment maintenance could still arise; (7) HCFA has responded to only one BBA requirement; (8) as required by the BBA, HCFA has contracted with a peer review organization (PRO) for an evaluation of access to, and quality of, home oxygen equipment; (9) results from this evaluation are not expected before 2000; (10) meanwhile, HCFA has not implemented an interim process to monitor changes in access for Medicare beneficiaries--a process that could 
alert the agency to problems as they arise; (11) although not required by the BBA, such monitoring is important because of the life-sustaining nature of the home oxygen benefit; (12) until HCFA gathers more in-depth information on access and the impact of payment reductions, HCFA cannot assess the need to restructure the modality-neutral payment; (13) HCFA has not yet implemented provisions of the BBA that require service standards for Medicare home oxygen suppliers to be established as soon as practicable; and (14) service standards would define what Medicare is paying for in the home oxygen benefit and what beneficiaries should expect from suppliers.
|
According to State, a terrorist safe haven is an area of relative security that can be exploited by terrorists to undertake activities such as recruiting, training, fundraising, and planning operations. The National Commission on Terrorist Attacks Upon the United States (9/11 Commission) noted that the physical safe haven in Afghanistan allowed al Qaeda the operational space to gather recruits and build logistical networks to plan the September 11, 2001, terrorist attacks. Concluding that the dangers posed by terrorist safe havens were significant, the 9/11 Commission recommended that the U.S. government identify and prioritize terrorist safe havens, as well as develop strategies to address them. The United States highlights the denial of safe haven to terrorists as a key national security concern in a number of U.S. government and agency strategic documents. For example, National Security Strategies released in 2002, 2006, and 2010 emphasize the importance of denying safe haven to terrorists. In addition, plans issued by various U.S. agencies, such as DOD, DOJ, State, and USAID, as well as the National Intelligence Strategy issued by the Office of the Director of National Intelligence, include language emphasizing the importance of addressing terrorist safe havens (see fig. 1). “Wherever al-Qa’ida or its terrorist affiliates attempt to establish a safe haven … we will meet them with growing pressure … These efforts will focus on information-sharing, law enforcement cooperation, and establishing new practices to counter evolving adversaries.
We will also help states … build their capacity for responsible governance and security through development and security sector assistance.” “The War on Terror … involves the application of all instruments of national power and influence to kill or capture the terrorists; deny them safe haven and control of any nation; prevent them from gaining access to WMD; render potential terrorist targets less attractive by strengthening security; and cut off their sources of funding and other resources they need to operate and survive.” “One of the most important resources to extremists is safe haven. Safe havens provide the enemy with relative freedom to plan, organize, train, rest, and conduct operations.” “The most intractable safe havens exist astride international borders and in regions where ineffective governance allows their presence; we must develop the means to deny these havens to terrorists.” “Failed states and ungoverned spaces offer terrorists and criminal organizations safe haven and possible access to weapons of mass destruction (WMD).” “Deny safe haven to criminal organizations involved in drug-related terrorist activities.” State’s Office of the Coordinator for Counterterrorism coordinates policies and programs of U.S. agencies to counter terrorism overseas. According to State, the Office of the Coordinator for Counterterrorism works with all appropriate elements of the U.S. government to ensure integrated and effective counterterrorism efforts that utilize diplomacy, economic power, intelligence, law enforcement, and military power. These elements include those in the White House, DOD, DHS, DOJ, State, Treasury, USAID, and the intelligence community. The Office of the Coordinator for Counterterrorism’s role is to provide supervision of international counterterrorism activities, including oversight of resources. Its guiding principles reflect the goals of the National Strategy for Combating Terrorism, including denying safe haven to terrorists. Congress has enacted several laws that require the submission of reports to Congress on issues related to the denial of terrorist safe havens. See table 1 for selected legislation.
In response to reporting requirements, State annually releases the Country Reports on Terrorism. State’s August 2010 report includes a strategic overview of terrorist threats and a country-by-country discussion of foreign government counterterrorism cooperation. In addition, it includes chapters on weapons of mass destruction terrorism, state sponsors of terrorism, designated foreign terrorist organizations, and terrorist safe havens. According to State, the Country Reports on Terrorism aims to enhance understanding of the terrorist threat, as well as serve as a reference tool to inform policymakers, the public, and U.S. foreign partners about U.S. efforts, progress, and challenges in the campaign against international terrorism. While released by State’s Office of the Coordinator for Counterterrorism, the Country Reports on Terrorism incorporates the views of the National Counterterrorism Center and National Security Staff, as well as other key agencies involved in addressing international terrorism. State identifies existing terrorist safe havens in its annual Country Reports on Terrorism, but does not assess these safe havens with the level of detail recommended by Congress. IRTPA requires State to include a detailed assessment in its annual Country Reports on Terrorism of each country whose territory is being used as a terrorist sanctuary, also known as a terrorist safe haven. The act further recommends that these assessments include, to the extent feasible, details regarding the knowledge of and actions taken to address terrorist activities by countries whose territory is being used as a terrorist safe haven. While State has identified existing terrorist safe havens since 2006, its assessments of these safe havens do not always include the details recommended by Congress. 
For instance, none of the assessments in State’s August 2010 report included information on the actions taken by countries identified as having terrorist safe havens to prevent trafficking in weapons of mass destruction through their territories. Including this information in State’s reporting could help inform congressional oversight related to terrorist safe havens. IRTPA requires State to include a detailed assessment in its annual Country Reports on Terrorism with respect to each foreign country whose territory is being used as a safe haven for terrorists or terrorist organizations. To fulfill this requirement, State first identifies and then assesses existing terrorist safe havens. Since 2006, State has identified existing terrorist safe havens in a dedicated chapter of its Country Reports on Terrorism. In August 2010, State identified 13 terrorist safe havens. See figure 2 for the terrorist safe havens identified; State’s August 2010 assessments of why each identified country or region is considered a terrorist safe haven appear in appendix II. State has made few changes to the terrorist safe havens identified in its report since the April 2007 Country Reports on Terrorism, which identified 15 terrorist safe havens. Since that report, State has removed two terrorist safe havens—the Afghan-Pakistan Border and Indonesia—from the Country Reports on Terrorism. State officials explained that the Afghan-Pakistan Border was removed in 2009, but Afghanistan and Pakistan are each still identified as terrorist safe havens to highlight the different safe haven issues facing each country. State officials said that Indonesia was removed in 2008 because the country passed counterterrorism legislation and captured several members of the terrorist group Jemaah Islamiyah.
IRTPA includes congressional findings that the planning of complex terrorist operations requires safe haven from government and law enforcement interference and that terrorists remain focused on finding such safe havens. Further, IRTPA states that it is the sense of Congress that it should be U.S. policy to identify foreign countries that are being used as terrorist sanctuaries and assess current U.S. tools being used to assist foreign governments to eliminate these safe havens. Accordingly, IRTPA requires State to include detailed assessments of terrorist safe havens in its annual Country Reports on Terrorism. IRTPA also states that these detailed assessments should include, to the extent feasible, a variety of provisions, including information regarding knowledge of and actions to address terrorist activities taken by countries whose territory is being used as a terrorist safe haven. See table 2 for a list of these details. In its Country Reports on Terrorism, State includes a terrorist safe havens chapter with assessments of each terrorist safe haven it identifies to explain why that country or region has been classified as a terrorist safe haven. However, our analysis of the assessments in State’s August 2010 report determined that, while State included information on each identified terrorist safe haven, State did not assess them with the level of detail recommended by Congress. For instance, our evaluation determined that while State generally included information on the extent of knowledge by the government of the country with respect to terrorist activities, it did not include any information in its assessments about the actions countries took to prevent the proliferation of and trafficking in weapons of mass destruction in and through their territories. 
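The analysis described here amounts to a per-detail tally across the identified safe havens: for each congressionally recommended detail, counting how many assessments contain it. The detail categories and coverage data below are hypothetical placeholders, not GAO's actual findings:

```python
# Illustrative subset of the details recommended by Congress (hypothetical labels):
RECOMMENDED = {"government knowledge", "actions against terrorists", "wmd trafficking"}

# Hypothetical findings: which details each safe haven assessment contained.
assessments = {
    "Country X": {"government knowledge", "actions against terrorists"},
    "Country Y": {"government knowledge"},
}

# Count, for each recommended detail, how many assessments include it.
coverage = {detail: sum(detail in found for found in assessments.values())
            for detail in sorted(RECOMMENDED)}
print(coverage)
# {'actions against terrorists': 1, 'government knowledge': 2, 'wmd trafficking': 0}
```

A zero count for a detail across all assessments, as with the weapons-of-mass-destruction category in the actual report, signals a recommended element that State's reporting omits entirely.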
We also analyzed the “country reports” chapter of State’s August 2010 report and found that some of the information not included in the assessments in the terrorist safe haven chapter was contained in the country reports chapter. For instance, the country report for Yemen contained information regarding the Yemeni government’s actions to cooperate with U.S. counterterrorism efforts. However, like the terrorist safe haven assessments, none of these country reports contained information regarding the actions that countries took to prevent the proliferation of and trafficking in weapons of mass destruction in and through their territories. Table 3 shows the number of safe havens for which State included the recommended details. State officials agreed that details related to the trafficking of weapons of mass destruction through terrorist safe havens were not included in its August 2010 report. These officials stated that time constraints and a limited number of staff present challenges to including these details in the terrorist safe haven assessments. Despite these challenges, officials told us that, after reviewing our analysis, they will gather—and believe they will be able to include—details regarding weapons of mass destruction in the Country Reports on Terrorism to be released in 2011. In previous reporting, we have found that assessments can be used to define requirements and properly focus programs to combat terrorism. Moreover, in IRTPA, Congress has said that it should be U.S. policy to assess U.S. efforts to assist foreign governments to address terrorist safe havens. As such, including all of the details recommended by Congress in the safe haven assessments in State’s Country Reports on Terrorism could help improve congressional understanding and inform congressional oversight related to terrorist safe havens. The U.S. government has not fully addressed reporting requirements to identify U.S. efforts to deny safe haven to terrorists. 
Congress required the President to submit reports outlining U.S. government efforts to deny or disrupt terrorist safe havens in two laws, IRTPA and the National Defense Authorization Act for fiscal year 2010. While reports produced in response to IRTPA contain some information on U.S. efforts to address terrorist safe havens, none provides a comprehensive, governmentwide list of U.S. efforts. According to agency officials, compiling a list of U.S. efforts is challenging because of difficulties determining which U.S. efforts specifically address terrorist safe havens. However, a more comprehensive list of U.S. efforts would enhance oversight activities, such as assessing U.S. efforts toward the governmentwide goal of denying safe haven to terrorists. IRTPA required the President to submit a report to Congress that includes an outline of the strategies, tactics, and tools of the U.S. government for disrupting or eliminating the security provided to terrorists by terrorist safe havens. IRTPA also recommended that State update the report annually, to the extent feasible, in its Country Reports on Terrorism. IRTPA notes that it is the sense of Congress that it should be the policy of the United States to implement a coordinated strategy to prevent terrorists from using safe havens and to assess the tools used to assist foreign governments in denying terrorists safe haven. In response to IRTPA provisions, State submitted a report to Congress in April 2006, which it has updated annually as part of its Country Reports on Terrorism. These reports include a section on U.S. strategies, tactics, and tools that identifies several U.S. efforts to address terrorist safe havens. In the August 2010 Country Reports on Terrorism, State identified several U.S. efforts for addressing terrorist safe havens, including programs such as State’s Regional Strategic Initiative, Rewards for Justice, and Antiterrorism Assistance programs. See table 4 for the list of U.S. 
efforts identified in State’s August 2010 Country Reports on Terrorism. However, State’s August 2010 Country Reports on Terrorism did not include some U.S. efforts that may contribute to addressing terrorist safe havens, according to our review of related budget information, strategic documents, and discussions with U.S. officials. Specifically, the list of U.S. efforts to address terrorist safe havens in the Country Reports on Terrorism did not include (1) all of the programs and activities State funds to address terrorist safe havens and (2) programs and activities funded by agencies other than State, such as DOD, DOJ, and Treasury, that may contribute to addressing terrorist safe havens. State’s budget information, strategic documents, and discussions with State officials indicate that some State-funded efforts that may contribute to addressing terrorist safe havens were not included in the August 2010 Country Reports on Terrorism. First, budget information in State’s Foreign Assistance Coordination and Tracking System (FACTS) identifies programs and activities to eliminate safe havens that were not included in State’s August 2010 Country Reports on Terrorism. In its budget database, State identified budget accounts that fund programs and activities for eliminating safe havens. However, certain activities funded by four of these accounts were not included in State’s August 2010 Country Reports on Terrorism. For example, activities in Chad, a Trans-Saharan country, funded by the development assistance budget account were identified in FACTS as eliminating safe havens, as were some activities in Pakistan funded through the Economic Support Fund. However, neither of these budget accounts was included in State’s August 2010 Country Reports on Terrorism. Second, selected State strategic documents identify additional efforts funded by State that may contribute to denying terrorists safe haven but were not included in the August 2010 Country Reports on Terrorism. 
For the Philippines, Somalia, and Yemen, we reviewed each country’s fiscal year 2012 mission strategic and resource plan (MSRP), submitted in April 2010, which included program funding information for goals related to addressing terrorist safe havens for fiscal years 2009 through 2015. (For more information on these three countries, see appendix IV.) Our review identified several examples of State-funded efforts that may contribute to addressing terrorist safe havens but were not included in State’s August 2010 Country Reports on Terrorism. For example, the Yemen MSRP indicates that the State-funded Foreign Military Financing program contributed to addressing safe havens in Yemen by funding activities to support border security and counter piracy. In addition, the MSRP for the Philippines included Foreign Military Financing program activities to sustain progress in developing the Philippine Defense Department capability to address terrorist safe havens. However, this program was not included in State’s August 2010 Country Reports on Terrorism. Moreover, USAID development assistance in the Philippines focuses on mitigating conflict, increasing economic opportunities, strengthening health services, and improving education, which, according to the country’s MSRP, can inhibit terrorists from exploiting those living under marginal conditions. Development assistance was not included in State’s August 2010 report. Third, according to State officials, additional efforts undertaken by State, but not identified in State’s August 2010 Country Reports on Terrorism, may contribute to addressing terrorist safe havens. For example, State officials indicated that activities funded through State’s Peacekeeping Operations account contributed to addressing the terrorist safe haven in Somalia because they helped the Transitional Federal Government of Somalia keep the terrorist group al-Shabaab from gaining control of the country’s capital city, Mogadishu. 
In addition, State’s International Narcotics and Law Enforcement Affairs funded a DOJ International Criminal Investigative Training Assistance Program effort in the Philippines that may contribute to addressing the terrorist safe haven in that country by providing police development and capacity building programs in areas used by terrorists for illicit travel. Similarly, officials indicated that State-funded Immigration and Customs Enforcement training for Filipino and Yemeni officials to combat money laundering and bulk cash smuggling may contribute to addressing the safe havens in their countries. State-funded Peacekeeping Operations, the International Criminal Investigative Training Assistance Program, and Immigration and Customs Enforcement training programs were not included in State’s August 2010 Country Reports on Terrorism. In total, our analysis identified nine examples of State-funded efforts in the Philippines, four in Somalia, and nine in Yemen that may contribute to addressing terrorist safe havens but were not included in State’s August 2010 Country Reports on Terrorism. Table 7 in appendix V describes U.S. efforts funded by State to address terrorist safe havens as identified by agency officials or MSRPs for the Philippines, Somalia, and Yemen and indicates which of these efforts were included in State’s August 2010 report. Agency officials explained that compiling a list of U.S. efforts, such as the one in State’s Country Reports on Terrorism, is challenging because of difficulties determining which U.S. efforts specifically address terrorist safe havens. According to State and USAID officials, counterterrorism programs and activities may simultaneously contribute to multiple foreign policy goals. For example, according to State officials, U.S. programs assisting refugees on the Somali border may be considered as combating violent extremism or denying terrorists safe haven. 
Similarly, USAID officials explained that governance programs in Yemen aim to help local governments meet community needs, but in doing so may contribute to addressing terrorist safe havens in Yemen by shrinking the terrorists’ operating spaces in those communities. However, documents authored and databases managed by State contain information on additional U.S. efforts to address terrorist safe havens that would be feasible to include in State’s reporting. Our discussions with officials from various agencies and our review of MSRPs from the Philippines, Somalia, and Yemen indicate that State’s reports also do not include efforts funded by agencies other than State that may contribute to addressing terrorist safe havens. First, officials from DOD, DOJ, and Treasury indicated that their agencies fund efforts that may contribute to addressing terrorist safe havens. Officials from DOD indicated that DOD-funded activities to build capacity of foreign partners’ security forces to combat terrorism are key DOD efforts to address terrorist safe havens. For example, some DOD train and equip activities funded through the department’s Global Train and Equip “Section 1206” and country-specific funding accounts, such as the Afghanistan and Iraq Security Forces Funds, contribute to addressing terrorist safe havens. For instance, DOD has used Section 1206 funding to conduct train and equip programs to build the capacity of security forces in Yemen and the Philippines to conduct counterterrorism operations. U.S. Coast Guard officials indicated that some coastal security training and technical assistance activities funded largely by DOD and implemented by the U.S. Coast Guard may also contribute to addressing terrorist safe havens in Yemen and the Philippines. 
Second, our review of MSRPs for the Philippines, Somalia, and Yemen indicated that additional efforts funded by agencies other than State and not included in State’s August 2010 Country Reports on Terrorism may contribute to addressing terrorist safe havens. For example, the safe haven-related goal in the fiscal year 2012 MSRP for the Philippines indicated that efforts will be made through the DOD Joint Special Operations Task Force–Philippines to enhance counterterrorism capabilities of the Armed Forces of the Philippines. The safe haven-related goal in Yemen’s fiscal year 2012 MSRP indicated that DOJ’s Federal Bureau of Investigation legal attachés participate in activities that empower Yemeni law enforcement officials to better identify and prosecute suspected terrorists. These efforts were not included in State’s August 2010 Country Reports on Terrorism. In total, our analysis identified seven examples of non-State-funded efforts in the Philippines, one in Somalia, and five in Yemen that may contribute to addressing terrorist safe havens but were not included in State’s August 2010 Country Reports on Terrorism, as shown in table 5. IRTPA calls for a report on the activities of the U.S. government to address terrorist safe havens, and a stated intention of the Country Reports on Terrorism is to provide policymakers with an overview of U.S. counterterrorism efforts. As such, State’s report is incomplete without including the contributions of its various interagency partners to address terrorist safe havens. As this information is included in State documents, and State approves certain activities funded by other agencies such as DOD’s Section 1206 and 1205 programs, it is feasible for State to include this information in its annual report. In addition to the provisions in IRTPA, Congress demonstrated an ongoing interest in the identification of U.S. 
efforts to deny terrorist safe havens in the National Defense Authorization Act for fiscal year 2010. The conference report accompanying the act noted that existing executive branch reporting on counterterrorism does not address the full scope of U.S. activities or assess overall effectiveness. The National Defense Authorization Act for fiscal year 2010 requires the President to submit a report to Congress on the U.S. counterterrorism strategy, including an assessment of the scope, status, and progress of U.S. counterterrorism efforts in fighting al Qaeda and its affiliates and a list of U.S. counterterrorism efforts relating to the denial of terrorist safe havens. The act required the President to produce this report by September 30, 2010, and every September 30th until September 30, 2012. According to the act, the report is to be submitted in an unclassified form to the maximum extent practicable and accompanied by a classified appendix, as appropriate. According to the conference report accompanying the act, the required report would help Congress in conducting oversight, enhance the public’s understanding of how well the government is combating terrorism, and assist the administration in identifying and overcoming related challenges. According to the President’s National Security Staff, the National Security Council has been assigned responsibility for completing the report required under the National Defense Authorization Act for fiscal year 2010. However, officials on the national security staff—who are taking the lead in drafting the report—stated that while they were working on a draft, no report had been submitted to Congress as of March 2011. They were unsure when the report—including information requested by Congress to assist it in assessing the success of counterterrorism efforts to deny terrorists safe haven—would be completed. Given that dismantling terrorist safe havens is a top U.S. 
national security priority, it is important that accurate assessments of and comprehensive information on terrorist safe havens and U.S. efforts to address them are available. Congress has expressed its desire to receive this type of information in order to better understand the status of efforts related to terrorist safe havens and to better assess U.S. efforts to address them. While some reports have been provided to Congress on these issues, critical details recommended by Congress are not included in these documents, such as complete assessments of the actions taken by countries identified as terrorist safe havens to address terrorist activities. Further, despite multiple requests from Congress, neither State nor the National Security Council has compiled a list of U.S. efforts to address terrorist safe havens that includes the contributions of all relevant U.S. agencies. Providing this type of information to Congress could better define the nature of the threats posed by terrorist groups, as well as the status of and challenges faced by U.S. efforts to address them. Without this information, Congress and other decision makers may lack facts essential to assessing progress toward the U.S. goal of denying terrorists safe haven, making decisions on the allocation of resources, and conducting effective oversight. To improve the information provided to Congress and other decision makers, we make the following three recommendations:

1. The Secretary of State should include in the Country Reports on Terrorism detailed assessments of identified terrorist safe havens using the provisions recommended by Congress in IRTPA.

2. The Secretary of State, in collaboration with relevant agencies as appropriate, should include a governmentwide list of U.S. efforts for addressing terrorist safe havens when it updates the report requested under IRTPA.

3. 
The National Security Council, in collaboration with relevant agencies as appropriate, should complete the requirements of the National Defense Authorization Act for fiscal year 2010 to report to Congress on a list of U.S. efforts related to the denial of terrorist safe havens. We provided a draft of this report to DOD, DHS, DOJ, State, Treasury, USAID, the Office of Management and Budget, the National Security Council, and members of the intelligence community for their review and comment. State and DHS provided written comments, which are reprinted in appendixes VI and VII, respectively. In addition, DOD, DHS, DOJ, State, and the Office of Management and Budget provided technical comments, which we have incorporated as appropriate. The National Security Council reviewed the report but did not provide comments on its recommendations. State concurred with our recommendation that it include detailed assessments of terrorist safe havens in its Country Reports on Terrorism, and noted it will implement this recommendation in its updated report to be released in 2011. Related to our recommendation for State to include a governmentwide list of efforts to address terrorist safe havens when it updates the report requested under IRTPA, State concurred that reporting on U.S. efforts to deny terrorist safe havens should be more comprehensive. However, State did not agree that such a list should be part of its annual Country Reports on Terrorism, citing the other counterterrorism-related reports it completes. In IRTPA, though, Congress recommended that this information be included in the Country Reports on Terrorism. Moreover, while it is possible that other reports produced by State address IRTPA provisions, the antiterrorism assistance report cited by State in its comments does not constitute a governmentwide list of U.S. efforts to address terrorist safe havens, as it does not include the contributions of key agencies such as DOD. 
We maintain that such a list could assist decision makers in assessing progress toward the U.S. goal of denying terrorist safe havens and conducting effective oversight. DHS concurred with our report, noting its acknowledgement of several DHS training efforts to address terrorist safe havens in selected countries. DHS also stated it will continue to support, as appropriate, State and other relevant agency efforts to improve reporting on terrorist safe havens. We are sending copies of this report to DOD, DHS, DOJ, State, Treasury, USAID, the Office of Management and Budget, the National Security Council, and the intelligence community. In addition, the report will be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7331 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. This report provides information on U.S. efforts to address physical terrorist safe havens since 2005. Specifically, we assess the extent to which (1) the Department of State (State) has identified and assessed terrorist safe havens in its Country Reports on Terrorism and (2) the U.S. government has identified efforts to deny terrorists safe haven consistent with reporting requirements. To address our objectives, we reviewed and analyzed relevant national security strategies, key congressional legislation, and planning documents related to U.S. efforts to address terrorist safe havens. Additionally, we discussed U.S. strategies, programs, and activities related to terrorist safe havens with U.S. officials from the Departments of Defense (DOD), Homeland Security (DHS), Justice (DOJ), State, and the Treasury (Treasury); the Office of Management and Budget; the U.S. 
Agency for International Development (USAID); the National Security Staff; and the intelligence community. We focused on these agencies because they are involved in efforts that may contribute to addressing terrorist safe havens. We also spoke to 13 subject matter experts from academic, governmental, and nongovernmental organizations. We selected experts who met at least four of the following criteria: (1) knowledge of and experience in one or more of the following areas: (a) identification of terrorist safe havens or failed states, (b) factors that contribute to terrorist safe havens, or (c) process of terrorist recruitment or radicalization; (2) knowledge and experience regarding key safe havens; (3) travel to at least one key safe haven country or region; (4) writing and publishing of articles on key safe haven countries, regions, or issues; and (5) knowledge of and experience in government, for-profit organizations, nonprofit organizations, academia, or journalism. To evaluate the extent to which State has identified and assessed terrorist safe havens, we reviewed U.S. agency reports, such as State’s annual Country Reports on Terrorism. Moreover, we evaluated assessments of terrorist safe havens included in the chapter specific to terrorist safe havens in State’s August 2010 Country Reports on Terrorism against criteria recommended, to the extent feasible, by Congress in the Intelligence Reform and Terrorism Prevention Act (IRTPA). To evaluate the assessments, two analysts independently analyzed the terrorist safe havens assessments against details included in IRTPA. Those analysts then discussed and resolved any differences in the results of their analyses; a supervisor reviewed and approved the final results of the analysis. We also interviewed U.S. 
agency officials to determine the process and criteria used to identify, assess, and prioritize these terrorist safe havens, and spoke with subject matter experts to obtain their views on the characteristics of and threats posed by terrorist safe havens identified by State. To assess the extent to which the U.S. government has identified efforts to deny terrorists safe haven consistent with reporting requirements, we evaluated national counterterrorism and security strategies; agency budget and planning documents, including reports from State’s Foreign Assistance Coordination and Tracking System (FACTS); and agency reports against requirements included in IRTPA. Although we did not independently audit the funding data in the FACTS database, and are not expressing an opinion on them, based on our examination of the documents received and our discussions with cognizant agency officials, we concluded that the FACTS data we obtained were sufficiently reliable for the purposes of this engagement. We also examined country-specific strategies related to addressing terrorist safe havens by interviewing U.S. agency officials and reviewing mission strategic and resource plans (MSRP) for three countries identified as having terrorist safe havens—the Philippines, Somalia, and Yemen. We selected these countries based on consideration of the following criteria: (1) identification of a country or area as a terrorist safe haven by State in its August 2010 Country Reports on Terrorism, (2) priority placed on a particular safe haven as expressed by U.S. officials and subject matter experts, (3) consideration of related GAO work, and (4) congressional interest. Our analysis does not include intelligence-related efforts. We also considered information obtained during our previous reviews of U.S. efforts to address terrorist safe havens in Afghanistan, Pakistan, and Iraq. 
To obtain a more in-depth understanding of specific programs and activities, we traveled to Kenya (where State’s Somalia unit is located) and the Philippines, where we met with U.S. government personnel involved in efforts to address terrorist safe havens in Somalia and the southern Philippines. We planned to travel to Yemen, but were unable to do so due to the unstable security environment during the time of our review. To supplement our understanding of U.S. efforts related to denial of terrorist safe haven in Yemen, we spoke with officials based in Washington, D.C., from DOD, State, USAID, and the intelligence community. We compiled our list of U.S. efforts to address terrorist safe havens in the Philippines, Somalia, and Yemen based on (1) the efforts identified by cognizant U.S. officials as those contributing to addressing terrorist safe havens and (2) programs and activities associated with MSRP goals related to addressing terrorist safe havens. Programs and activities identified are meant to serve as examples of U.S. efforts that may contribute to addressing terrorist safe havens, not as an exhaustive list of efforts to address terrorist safe havens. We conducted this performance audit from September 2010 to June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Terrorist Safe Haven Assessments as Included in State’s August 2010 Country Reports on Terrorism “The Government of Afghanistan, in concert with the International Security Assistance Force and the international community, continued its efforts to eliminate terrorist safe havens and build security, particularly in the country’s south and east where the main Taliban based insurgents threatened stability. Many insurgent groups, including Taliban elements, the Haqqani Network, Hezb-e-Islami Gulbuddin, al-Qa’ida (AQ), and Lashkar-e-Tayyiba, continued to use territory across the border in Pakistan as a base from which to plot and launch attacks within Afghanistan and beyond. Narcotics trafficking, poppy cultivation, and criminal networks were particularly prevalent, constituting a significant source of funding for the insurgency as well as fueling corruption within Afghanistan. AQ leadership in Pakistan maintained its support to militants conducting attacks in Afghanistan and provided funding, training, and personnel to facilitate terrorist and insurgent operations. Anti-Coalition organizations continued to operate in coordination with AQ, Taliban, and other insurgent groups, primarily in the east.” “Colombia’s borders with Venezuela, Ecuador, Peru, Panama, and Brazil include rough terrain and dense forest cover. These conditions, coupled with low population densities and historically weak government presence, create potential safe havens for insurgent and terrorist groups, particularly the Revolutionary Armed Forces of Colombia (FARC). The FARC, retreating in the face of Colombian military pressures, thus operated with relative ease along the fringes of Colombia’s borders, and also uses areas in neighboring countries along the border to rest and regroup, procure supplies, and stage and train for terrorist attacks with varying degrees of success. 
The FARC elements in these border regions often engaged the local population in direct and indirect ways, including recruitment and logistical assistance. This appeared to be less so in Brazil and Peru where potential safe havens were addressed by stronger government responses. Ecuador and Panama have responded with a mix of containment and non-confrontation with Colombian narco-terrorist groups, although some confrontations do occur depending on local decisions and cross-border relations.” “Iraq was not a terrorist safe haven in 2009, but terrorists, including Sunni groups like al-Qa’ida in Iraq (AQI), and Ansar al-Islam (AI), as well as Shia extremists and other groups, viewed Iraq as a potential safe haven. Together, U.S. and Iraqi security forces continued to make progress against these groups. The significant reduction in the number of security incidents in Iraq that began in the last half of 2007 continued through 2009, with a steady downward trend in numbers of civilian casualties, enemy attacks, and improvised explosive device (IED) attacks. AQI, although still dangerous, experienced the defection of members, lost key mobilization areas, suffered disruption of support infrastructure and funding, and was forced to change targeting priorities. A number of factors have contributed to the substantial degradation of AQI. The alliance of convenience and mutual exploitation between AQI and many Sunni populations has deteriorated. The Baghdad Security Plan, initiated in February 2007, along with assistance from primarily Sunni tribal and local groups, has succeeded in reducing violence to late 2005 levels and disrupted and diminished AQI infrastructure, driving some surviving AQI fighters from Baghdad and Anbar into the northern Iraqi provinces of Ninawa, Diyala, and Salah ad Din. New initiatives with tribal and local leaders in Iraq have led Sunni tribes and local citizens to reject AQI and its extremist ideology. 
The continued growth, professionalism, and improved capabilities of the Iraqi forces have increased their effectiveness in rooting out terrorist cells. Iraqis in Baghdad, Anbar and Diyala Provinces, and elsewhere have turned against AQI and were cooperating with the Iraqi government and Coalition Forces to defeat it.” “The Kurdistan Workers’ Party (PKK) maintained an active presence in northern Iraq, from which it coordinated attacks into Turkey, primarily against Turkish security forces, local officials and villagers who opposed the organization. In October, the Turkish Parliament overwhelmingly voted to extend the authorization for cross-border military operations against PKK encampments in northern Iraq. Iraq, Turkey, and the United States continued their formal trilateral security dialogue as one element of ongoing cooperative efforts to counter the PKK. Iraqi leaders, including those from the Kurdistan Regional Government, continued to publicly state that the PKK was a terrorist organization that would not be tolerated in Iraq. Turkish and Iraqi leaders signed a counterterrorism agreement in October.” “Hizballah remained the most prominent and powerful terrorist group in Lebanon, with deep roots among Lebanon’s large Shia community, which comprises at least one third of Lebanon’s population. The Lebanese government continued to recognize Hizballah, a U.S.-designated Foreign Terrorist Organization, as a legitimate “resistance group” and political party. Hizballah maintained offices in Beirut and military-style bases elsewhere in the country and was represented by elected deputies in parliament. AQ associated extremists also operated within the country, though their presence was small compared to that of Palestinian groups operating in Palestinian refugee camps who were not aligned with AQ. The camps are officially controlled by the Lebanese government. 
While the Lebanese Armed Forces do not have a day-to-day presence in the camps, they have at times conducted operations in the camps to combat terrorist threats.” “Despite increased efforts by Pakistani security forces, al-Qa’ida (AQ) terrorists, Afghan militants, foreign insurgents, and Pakistani militants continued to find safe haven in portions of Pakistan’s Federally Administered Tribal Areas (FATA), North-West Frontier Province (NWFP), and Baluchistan. AQ and other groups such as the Haqqani Network used the FATA to launch attacks in Afghanistan, plan operations worldwide, train, recruit, and disseminate propaganda. The Pakistani Taliban (under the umbrella moniker Tehrik-e-Taliban or TTP) also used the FATA to plan attacks against civilian and military targets across Pakistan. Outside the FATA, the Quetta-based Afghan Taliban and separate insurgent organizations such as Hizb-e-Islami Gulbuddin used the areas in Baluchistan and the NWFP for safe haven. Islamist Deobandi groups and many local tribesmen in the FATA and the NWFP continued to resist the government’s efforts to improve governance and administrative control. Despite the August death of the Pakistani Taliban’s leader Baitullah Mehsud and Pakistani military operations throughout the FATA and NWFP, the Pakistani Taliban, AQ, and other extremist groups remained dangerous foes to Pakistan and the international community. Despite international condemnation for its November 2008 attacks in Mumbai, Lashkar-e-Tayyiba (LT) continued to plan regional operations from within Pakistan. LT is an extremely capable terrorist organization with a sophisticated regional network. It continued to view American interests as legitimate targets.
While the Government of Pakistan has banned LT, it needs to take further action against this group and its front organizations, which find safe haven within Pakistan.” “A small number of al-Qa’ida (AQ) operatives remained in East Africa, particularly Somalia, where they posed a serious threat to U.S. and allied interests in the region. These elements were disrupted in late 2006 and early 2007 as a result of Ethiopian military actions and again by the death of AQ operative Saleh Nabhan in September 2009. Somalia remained a concern given the country’s long, unguarded coastline, porous borders, continued political instability, and proximity to the Arabian Peninsula, all of which provide opportunities for terrorist transit and/or safe haven and increased the regional threat level. AQ remains likely to make common cause with Somali extremists, most notably al-Shabaab. Al-Shabaab has expanded its area of control during its protracted insurgency against the Transitional Federal Government and particularly since the withdrawal of Ethiopian forces in early 2009. The group controlled most of southern Somalia at year’s end.” “Terrorist operatives have sought safe haven in areas of the southern Philippines, specifically in the Sulu archipelago and Mindanao. Philippine government control and the rule of law in this area are weak due to rugged terrain, poverty, and local Muslim minority resentment of central governmental policies. In addition to Jemaah Islamiya (JI) fugitives and Abu Sayyaf Group (ASG) terrorists, the New People’s Army and Rajah Solaiman Movement also operated in the southern Philippines.” “In Southeast Asia, the terrorist organizations Jemaah Islamiya (JI) and Abu Sayyaf Group (ASG) have sought safe haven in the vicinity of the Sulawesi Sea and the Sulu Archipelago, which encompasses the maritime boundaries of Indonesia, Malaysia, and the Philippines.
The area’s thousands of islands make it a difficult region for authorities to monitor, while a range of licit and illicit activities that occur there – worker migration, tourism, and trade, for example – pose another challenge to identifying and countering the terrorist threat. Although Indonesia, Malaysia, and the Philippines have improved their efforts to control their shared maritime boundaries, the expanse nevertheless remains difficult to control. Surveillance is improved but remains partial at best, and traditional smuggling and piracy groups have provided an effective cover for terrorist activities, such as movement of personnel, equipment, and funds.”

Trans-Sahara (Algeria, Mali, Mauritania, and Niger)

“The primary terrorist threat in this region was al-Qa’ida in the Islamic Maghreb (AQIM). AQIM was based primarily in northeastern Algeria but factions also operated from a safe haven in northern Mali, from which they transited areas of the Maghreb and Sahel, especially Mali, Niger, and Mauritania. AQIM continued to conduct small-scale ambushes and attacks on Algerian security forces in northeastern Algeria, but in 2009 the group was not able to conduct the “spectacular” attacks that were more common a few years ago, such as their bombing of the UN and Algerian government buildings. AQIM factions in northern Mali used the safe haven to conduct kidnappings for ransom and murder of Western hostages and to conduct limited attacks on Malian and Mauritanian security personnel. AQIM derived financial support from the ransoms it collected, which were used to sustain the organization and plan further terrorist operations. AQIM routinely demanded the release of their operatives in custody in the region and elsewhere as a condition of release of hostages.
Regional governments sought to take steps to counter AQIM operations, but there was a need for foreign assistance in the form of law enforcement and military capacity building in order to do so.”

Tri-Border Area (Argentina, Paraguay, and Brazil)

“No corroborated information showed that Hizballah, HAMAS, or other Islamic extremist groups used the Tri-Border Area (TBA) for military-type training or planning of terrorist operations, but the United States remained concerned that these groups use the region as a safe haven to raise funds. Suspected supporters of Islamic terrorist groups, including Hizballah, take advantage of loosely regulated territory and the proximity of Ciudad del Este, Paraguay and Foz do Iguaçu, Brazil to participate in a wide range of illicit activities and to solicit donations from within the sizable Muslim communities in the region. The Argentine, Brazilian, and Paraguayan governments have long been concerned with arms and drugs smuggling, document fraud, money laundering, trafficking in persons, and the manufacture and movement of contraband goods through the TBA. Concerns about the region moved the three governments to invite the United States to participate in the Three Plus One Group on Tri-Border Area Security, which focuses on practical steps to strengthen financial and border controls and enhance law enforcement and intelligence sharing. Brazil, Argentina, and Paraguay have made notable strides in launching initiatives to strengthen law enforcement institutions and cooperation, including developing financial intelligence units, broadening border security cooperation, augmenting information sharing among prosecutors responsible for counterterrorism cases, and establishing trade transparency units.” “Corruption within the Venezuelan government and military, ideological ties with the FARC, and weak international counternarcotics cooperation have fueled a permissive operating environment for narco-traffickers.
Other than some limited activities, such as the bombing of remote dirt airstrips on the border, there is little evidence that the government of Venezuela is moving to improve this situation in the near future. The FARC, as well as Colombia’s second largest rebel group, the National Liberation Army (ELN), regularly used Venezuelan territory to rest and regroup, engage in narcotics trafficking, as well as to extort protection money and kidnap Venezuelans to finance their operations.” “The security situation in Yemen continued to deteriorate. As Saudi security forces have clamped down on terrorism and foreign fighters have returned from Afghanistan and Pakistan, Yemen’s porous borders have allowed many terrorists to seek safe haven within Yemen. Al-Qa’ida in Yemen (AQY) announced its merger with al-Qa’ida (AQ) elements in Saudi Arabia in January 2009, creating al-Qa’ida in the Arabian Peninsula (AQAP). The creation of AQAP coincided with fewer attacks within Yemen, possibly due to the desire of its leadership to use Yemen as a safe haven for planning of future attacks and recruitment because the central government lacks a strong presence in much of the country. The absence of effective counterterrorism legislation contributed to Yemen’s appeal as a safe haven and potential base of operations for terrorists. The Yemeni government’s response to the terrorist threat was intermittent, and its ability to pursue and prosecute suspected terrorists remained weak for most of the year due to a number of shortcomings, including the stalling of draft counterterrorism legislation in Parliament. The government’s response improved dramatically in December with security forces taking strong action against a number of terrorist cells.
Even with this turn of events, the government was often distracted by the “Sixth War” of the Houthi rebellion in the Sa’ada governorate in the north of the country and political unrest in southern Yemen.” We spoke with 13 subject matter experts with knowledge related to terrorist safe havens. We asked these experts to determine which five terrorist safe havens identified in State’s August 2010 Country Reports on Terrorism posed the greatest risk to U.S. national security (see table 6). None of our experts identified Colombia’s border region, northern Iraq, the southern Philippines, the Sulu/Sulawesi Seas Littoral, the Tri-Border area, or Venezuela as among the top five terrorist safe havens posing the greatest risk to U.S. national security, although all were included in State’s August 2010 report. Profiles on the Philippines, Somalia, and Yemen can be found on the following pages. Economy: The Philippines’ 2010 gross domestic product was estimated at about $353 billion, which represented a 7.3 percent growth rate that year. This growth was spurred by consumer demand, exports, and investments; yet because of a high population growth rate and unequal distribution of income, poverty worsened. U.S. strategy in the Philippines combines security and development assistance to address several policy objectives, including counterterrorism, economic growth, and the development of responsive democratic institutions. To address the terrorist groups that find safe haven on the islands of Mindanao and the Sulu Archipelago, the United States has deployed military personnel to train and assist the Philippine armed forces and to engage in civil-military operations to change the conditions that allow terrorist safe havens. U.S. assistance to the Philippines has been more than $120 million in each of the past three years, and $135 million has been requested for fiscal year 2011.
About 60 percent of this assistance has supported development programs in Muslim areas of Mindanao and the Sulu Archipelago with the aim of reducing the economic and political conditions that foster extremist ideologies and activities. U.S. military assistance is aimed primarily at Muslim insurgents and has supported intelligence gathering, operations planning, and communications support; supplied modern equipment; and provided U.S. special operations advisors to assist two Philippine Regional Combatant Commands in Mindanao and the Sulu Archipelago. Cognizant U.S. officials and agency reports note several challenges to addressing terrorist safe havens in the Philippines, including lawlessness, corruption, and poor economic conditions. Lawlessness in the southern Philippines: According to State’s Country Reports on Terrorism 2009, Philippine government control and the rule of law are weak due to rugged terrain, poverty, and local Muslim minority resentment of central government policies. Corruption of local leaders and police: Officials told us that corruption, as well as the limited capacity of the Philippine police, is a major challenge to denying terrorists safe haven. Corruption is rampant in the Philippine police, the group that implements law enforcement approaches to denying safe haven. Poor economic conditions: Officials noted that poor economic conditions in the Philippines contribute to an environment that allows terrorist groups to increase recruitment. Economic development programs are essential to reduce the conditions that allow for terrorists to build safe havens in the Philippines. U.S. strategy in Somalia is described as “dual-track”—providing continued support to the Transitional Federal Government (TFG) of Somalia and also recognizing the potential role of other actors in ending conflict and establishing basic governing institutions.
Efforts include, among other things, degrading the abilities of al-Shabaab—a designated foreign terrorist organization based in Somalia—and increasing the capacity of the TFG while also increasing engagement and support for Somaliland, Puntland, and local administrative entities and civil society groups. The administration has requested almost $85 million for State and the U.S. Agency for International Development assistance for fiscal year 2011 to continue conflict mitigation, governance, and economic growth programs in Somalia. In addition, the Partnership for Regional East African Counterterrorism is the current State strategy for long-term engagement and capacity building in East Africa to combat evolving terrorism threats in, and emanating from, the Horn of Africa and along the Swahili Coast. The Partnership for Regional East African Counterterrorism aims to, among other things, contain and reduce the operational capacity of terrorist networks in Somalia; deter and reduce the appeal of and support for violent extremism across East Africa; and improve and expand border security in East Africa, particularly around Somalia. U.S. officials and agency reports note several challenges to addressing terrorist safe havens in Somalia, including limited access to the country as well as the lack of a central government. Limited access: U.S. officials told us that because the United States does not have an embassy in Somalia and few personnel are allowed to travel there for safety reasons, implementing programs in the country is complicated. For example, the absence of U.S. diplomatic presence makes monitoring the implementation of security assistance activities difficult. Lack of a central government: Officials stated that a lack of a central government in Somalia limits the number of credible partners with which U.S. agencies can work to implement assistance programs. 
According to officials, this void legally constrains agencies’ ability to use resources from some security assistance programs, such as Global Train and Equip “Section 1206” and Foreign Military Financing, to undertake assistance activities in Somalia. According to the National Counterterrorism Center (NCTC), al-Shabaab is not monolithic in its goals. State reports that many rank and file members of al-Shabaab are interested in issues within Somalia, rather than pursuing a global agenda. However, NCTC and State note that members of al-Shabaab’s core leadership are linked ideologically to al Qaeda and that some members of the group previously trained and fought with al Qaeda in Afghanistan. Government: Yemen is a republic with a legal system based on Islamic law, Turkish law, English common law, and local tribal customary law. Ali Abdullah Saleh, who served as the president of the Yemen Arab Republic (North Yemen) from 1978 to 1990, has been the president of Yemen since May 1990. U.S. strategy in Yemen, as articulated by the White House, takes a comprehensive approach, including both security assistance to counter al Qaeda in the Arabian Peninsula (AQAP) and development assistance to address the environment that allows AQAP to exist. According to State testimony, this strategy has two parts: (1) strengthening the Yemeni government’s ability to promote security and minimize the threat from violent extremists within its borders, and (2) mitigating Yemen’s economic crisis and deficiencies in government capacity, provision of services, transparency, and adherence to the rule of law. The President’s fiscal year 2011 budget requests $106 million for Yemen. The United States is also engaged with international partners to provide assistance to Yemen. In 2006, an international donors’ conference in London pledged $5.2 billion for Yemen, although, according to State, a significant portion of this funding has yet to be provided.
At a Friends of Yemen (an international coordination group) meeting in September 2010, the international community called for the creation of a development fund for Yemen and more coordination of international aid. Cognizant U.S. officials and agency reports note several challenges to addressing terrorist safe havens in Yemen, including the limited capacity of Yemeni security forces, inconsistent cooperation of the Yemeni government, and instability in Yemen. Limited capacity of Yemeni security forces: Officials noted that Yemeni security forces have limited, but improving, capacity. This creates a problem for addressing terrorist safe havens, according to officials, because it limits the ability of the Yemeni government to control territory that AQAP may want to use as a safe haven. Inconsistent cooperation with the government of Yemen: According to State’s August 2010 Country Reports on Terrorism, the Yemeni government’s response to terrorism was intermittent. The report also cites the absence of effective counterterrorism legislation as contributing to Yemen’s appeal as a safe haven for terrorists. Instability in Yemen: Officials told us that instability in Yemen creates challenges to addressing safe haven. Specifically, they cited unstable conditions in northern and southern Yemen and political unrest resulting from the 2011 uprisings against President Saleh’s rule. We identified nine examples of State-funded efforts in the Philippines, four examples in Somalia, and nine examples in Yemen not included in State’s August 2010 Country Reports on Terrorism that may contribute to addressing terrorist safe havens. We compiled our list of U.S. efforts to address terrorist safe haven in the Philippines, Somalia, and Yemen based on: (1) the efforts identified by cognizant U.S.
officials as those contributing to addressing terrorist safe havens and (2) programs and activities associated with MSRP goals related to addressing terrorist safe havens. Table 7 describes U.S. efforts funded by State to address terrorist safe havens as identified by agency officials or MSRPs for the Philippines, Somalia, and Yemen and indicates which of these efforts were included in State’s August 2010 report. In addition to the individual named above, Jason Bair, Assistant Director; Christy Bilardo; Kathryn Bolduc; Lynn Cothern; Martin de Alteriis; Mary Moutsos; Elizabeth Repko; and Celia Thomas made key contributions to this report. Tonita Gillich, Julia Jebo, Eileen Larence, Heather Latta, Marie Mak, Sarah McGrath, John Pendleton, Nina Pfeiffer, and Jena Sinkfeld provided additional support.
|
Denying safe haven to terrorists has been a key national security concern since 2002. Safe havens allow terrorists to train recruits and plan operations against the United States and its interests across the globe. As a result, Congress has required agencies to provide detailed information regarding U.S. efforts to address terrorist safe havens. In this review, GAO assesses the extent to which (1) the Department of State (State) has identified and assessed terrorist safe havens in its Country Reports on Terrorism and (2) the U.S. government has identified efforts to deny terrorists safe haven consistent with reporting requirements. To address these objectives, GAO interviewed U.S. officials and analyzed national security strategies; State reporting; and country-level plans for the Philippines, Somalia, and Yemen. State identifies existing terrorist safe havens in its annual "Country Reports on Terrorism" but does not assess them with the level of detail recommended by Congress. The Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA) requires State to include in its annual "Country Reports on Terrorism" a detailed assessment of each foreign country used as a terrorist safe haven. It also recommends that State include, to the extent feasible, details in the report such as actions taken to address terrorist activities by countries whose territory is used as a safe haven. Since 2006, State has identified terrorist safe havens in its "Country Reports on Terrorism." In August 2010, State identified 13 terrorist safe havens, including the southern Philippines, Somalia, and Yemen. However, none of the assessments in State's August 2010 report included information on one of the four elements recommended by Congress: the actions taken by countries identified as having terrorist safe havens to prevent trafficking in weapons of mass destruction through their territories.
Also, about a quarter of the assessments in State's August 2010 Country Reports on Terrorism lacked information on another element recommended by Congress: the actions taken by countries identified as terrorist safe havens to cooperate with U.S. antiterrorism efforts. Including this information in State's reports could help better inform congressional oversight related to terrorist safe havens. The U.S. government has not fully addressed reporting requirements to identify U.S. efforts to deny safe haven to terrorists. In IRTPA and the National Defense Authorization Act for fiscal year 2010, Congress required the President to submit reports identifying such efforts. State responded to IRTPA with a 2006 report and subsequent annual updates to its "Country Reports on Terrorism." However, efforts identified in State's August 2010 report include only certain efforts funded by State and do not include some State and other U.S. government agency funded efforts, such as those of the Departments of Defense and Justice. For example, GAO's discussions with agency officials and analysis of agency strategic documents identified at least 14 programs and activities not included in State's reporting that may contribute to denying terrorists safe haven in Yemen. According to officials from the National Security Staff, the National Security Council is responsible for producing the report required by the National Defense Authorization Act for fiscal year 2010. As of March 2011, the report, which was due in September 2010, was not completed. According to agency officials, compiling such a list is challenging because it is difficult to determine if a given activity addresses terrorist safe havens or contributes to different, though possibly related, foreign policy objectives. While GAO recognizes this challenge, a more comprehensive list that includes the efforts of all relevant agencies could provide useful information to Congress to enhance oversight activities, such as assessing U.S.
efforts toward the governmentwide goal of denying terrorists safe haven. GAO recommends State and the National Security Council (NSC) improve reporting on assessments of and U.S. efforts to address terrorist safe havens. State concurred with the recommendation on assessments. State partially concurred with the recommendation on U.S. efforts to address terrorist safe havens, citing other reports it completes related to counterterrorism. However, the additional reports cited by State do not constitute a governmentwide list of U.S. efforts to address terrorist safe havens. The NSC reviewed the report but did not provide comments on the recommendations.
|
Contracts are generally considered to be physically complete once all option provisions have expired, the contractor has completed performance, and the government has accepted the final delivery of supplies or services. Physically completed contracts should then be closed within time frames set by the FAR—6 months for firm-fixed-priced contracts and 36 months for flexibly-priced contracts. The FAR prohibits the closing of contract files if the contract is in litigation, under appeal, or where the contract is being terminated and termination actions have not been completed. Flexibly-priced contracts take longer to close because additional steps must be taken during the closeout process; for example, audits on costs incurred and settlement of the contractor’s final indirect cost rates. Contracting officers and DCAA need to ensure that costs incurred by the contractor and charged to the government are allowable, allocable, and reasonable. On flexibly-priced contracts, contracting officers need to establish final indirect cost rates based on the contractor’s incurred costs. Indirect cost rates are a mechanism for establishing the proportion of indirect costs—such as a contractor’s general and administrative expenses—that can be charged to a contract. See figure 1 for the contract closeout process. Federal acquisition regulations require contractors to submit proposals that include information on all of their flexibly-priced contracts in a fiscal year. DCAA uses a checklist to determine whether a proposal is adequate, which, among other items, includes various cost schedules, subcontract information, and information on contracts that would be ready for closeout. DCAA may determine that a contractor’s incurred cost proposal is inadequate for a variety of reasons, such as incomplete or inaccurate information, and request that the contractor revise and resubmit the incurred cost proposal. This process may take several iterations before the proposal is deemed adequate. 
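The FAR time frames and the indirect cost rate mechanics described above can be sketched as a small calculation. This is a minimal illustration with hypothetical figures; it simplifies the process, ignores the litigation and termination exceptions, and is not an actual FAR or DCAA procedure:

```python
import calendar
from datetime import date

def closeout_due_date(completion: date, flexibly_priced: bool) -> date:
    """FAR closeout window: 6 months after physical completion for
    firm-fixed-priced contracts, 36 months for flexibly-priced ones
    (simplified; ignores litigation, appeal, and termination holds)."""
    months = 36 if flexibly_priced else 6
    total = completion.month - 1 + months
    year, month = completion.year + total // 12, total % 12 + 1
    # Clamp the day for shorter target months (e.g., Aug 31 + 6 months).
    day = min(completion.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def indirect_cost_rate(indirect_pool: float, direct_base: float) -> float:
    """Final indirect cost rate: the proportion of indirect costs
    (e.g., G&A expenses) allocated over a direct-cost base."""
    return indirect_pool / direct_base

# Hypothetical example: a flexibly-priced contract physically completed
# January 15, 2020, and a contractor with a 25 percent G&A rate.
due = closeout_due_date(date(2020, 1, 15), flexibly_priced=True)
rate = indirect_cost_rate(indirect_pool=250_000, direct_base=1_000_000)
```

With these hypothetical figures, the contract would be due for closeout by January 15, 2023, and a contract charging $400,000 of direct costs would absorb $100,000 of indirect costs at the 25 percent rate.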
DCAA categorizes the proposals based on the total value of the proposal, called the auditable dollar value (ADV), which is the sum of all of the costs on flexibly-priced contracts for that contractor during the fiscal year. Figure 2 depicts key steps in the incurred cost audit process. There is not a one-to-one relationship between an incurred cost audit and an individual contract. In a single fiscal year, a contractor may incur costs on multiple flexibly-priced contracts, and all of these contracts would be included in the contractor’s proposal. Further, since the period of performance on an individual contract may span several years, an audit of each of the contractor’s incurred cost proposals for those years needs to be conducted to provide the information necessary to close one flexibly-priced contract. Our prior work has highlighted some challenges at DOD in closing out contracts, as well as challenges at DCAA regarding incurred cost audits. For example, in September 2009, we reported on problems with DCAA’s audit quality and recommended DCAA improve audit quality guidance and develop a risk-based audit approach. DOD and DCAA generally agreed with the recommendations. As a result, DCAA required more testing and stricter compliance with government auditing standards, which added staff time to complete audits. Additionally, as DCAA’s workload increased and resources remained relatively constant, auditors prioritized time-sensitive activities, such as audits to support new awards, and incurred cost audits were not completed, creating a backlog. In September 2011, we found that DOD’s ability to close the contracts it awarded to support efforts in Iraq and Afghanistan was hindered by several factors, including limited visibility into over-age contracts and DCAA workforce shortfalls.
DOD concurred with our recommendations and revised its guidance on contract closeout in a contingency environment to require regular monitoring and assessment of the progress of closeout activities. In November 2011, we found that DCMA faced workforce challenges that caused delays in conducting timely quality assurance, audits of contractor processes, and contract closeout activities. Additionally, we found that DCAA had workforce challenges that affected its ability to conduct business system audits. We recommended that DCMA and DCAA identify and execute options to assist with audits and improve transparency of the current status of contractor business systems. DOD generally concurred with the recommendations and has initiated some actions to address them. In December 2012, we found that DOD components—including the Army, Navy, and Air Force—did not prioritize contract closeout and had limited data on the extent of their contract closeout backlog. We also reported that DOD’s efforts to close its large, flexibly-priced contracts were hindered by the backlog of DCAA’s incurred cost proposals. We recommended that DOD components establish baseline data and performance measures related to contract closeout. DOD concurred with our recommendations and the components have since established performance measures on contract closeout. The challenges faced by DOD in closing out contracts are not recent. In 2001, the DOD Inspector General issued a report that found weaknesses in the closeout process, including inadequate monitoring of contracts that could be closed, inattention to closeout requirements, and erroneous data about contracts available for closeout. Further, challenges in closing contracts are not exclusive to DOD. 
In recent years, the Inspectors General at several federal agencies, including the National Aeronautics and Space Administration (NASA), State, and the Department of Transportation, among others, have reported on the issues and challenges related to contract closeout. For example, in February 2014, the NASA Inspector General found that delays in closing contracts were due to the workload at DCAA, that some funds were not being deobligated in a timely manner, and that the closeout process was not uniform across the agency. The Inspector General reported that contract personnel at the various NASA offices used different guidance when closing out contracts, which impaired their ability to share information and work across the agency. In November 2014, the Inspector General at State found that the agency did not have systems in place for tracking contingency contracts in Afghanistan nearing completion or which had funds that were expired or were available for deobligation. The Inspector General further found that State had not established comprehensive procedural guidance for contract closeout or ensured existing guidance was accurate for these contingency contracts in Afghanistan. According to State acquisition officials, State revised its policies and guidance regarding contract closeout; for example, the Foreign Affairs Handbook was updated to include procedures on how to address common difficulties in closing contracts. In July 2015, the Inspector General at the Department of Transportation found that the agency had not implemented oversight procedures or performance measures on contract closeout to assess whether the components were complying with closeout requirements. The five agencies and selected components we reviewed varied widely in ensuring that contracts were closed within the time frames prescribed by federal acquisition regulations.
None of the five agencies we reviewed had all of the following: (1) centralized data on the number of contracts that needed to be closed out; (2) information on where the contracts were in the closeout process; (3) established agency-wide contract closeout-related goals; and (4) established performance measures to assess progress toward achieving these goals. Most agencies delegated responsibility for contract administration, including closing out contracts, to their components. We found that some components within these agencies had at least three of these elements. For example, DCMA, which manages contract closeout for contracts that have been delegated to it, had each of these elements in place, while the Air Force, Navy, and Army each had contract closeout data and had established goals and performance measures, but lacked data on where contracts were in the closeout process. DHS also had information on the number of contracts eligible or overdue for closeout and had initiatives underway to reduce the number of low-risk, firm-fixed-priced contracts but did not have initiatives for higher risk contracts, including those involving flexibly-priced contracts. According to the federal standards for internal control, management should use quality information to make informed decisions and evaluate the entity’s performance in achieving key objectives and addressing risks. To help meet the FAR expected time frames for closing out contracts, using data on the number of firm-fixed-priced and flexibly-priced contracts that are eligible or overdue for closeout and where contracts are in the closeout process can help agencies identify where additional management attention is needed in order to close the contracts. Further, establishing goals and performance measures to assess progress toward achieving these goals can be an important tool in demonstrating leadership commitment.
Having information on the scope of the issue and identifying the challenges to closing contracts could help agency-level management tailor approaches to specifically address the causes of why contracts remain open, particularly if those causes are similar across the various components within the agency. In addition, establishing goals and performance measures helps ensure that sufficient management attention is paid to contract closeout. As shown in table 1, agencies varied in having each of these elements. A recurring issue highlighted in our prior work, as well as in this review, is that contract closeout was not a priority for either agency management or contracting officers. Agency officials and contracting officers noted that the focus for contracting officers is to award contracts for the goods and services needed to support agency operations and missions, and that closing out contracts is largely viewed as an administrative task that staff get to when time is available. Further, agency acquisition officials we spoke with during this review noted that their ability to focus attention on contract closeout was affected by resource constraints, including workforce challenges and sequestration. At the agency level, DOD has focused management attention on contract closeout, but it does not have agency-wide data in place and does not have insight into the components' goals and performance measures. In September 2014, Defense Procurement and Acquisition Policy established the Contract Closeout Working Group to improve and streamline the contract closeout process, including through policy revisions and technology updates to its systems. DOD officials noted that while the department has limited insight into the total number and value of contracts needing to be closed, it is the responsibility of the components and contracting offices to manage contract administration, including closeout.
According to the federal standards for internal control, management should use quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks—such as identifying improper payments and utilizing unspent funds elsewhere. DOD does not have the ability to track contract closeout centrally, and the components use a number of different contract management systems. The lack of insight into contracts that need to be closed, including where they are in the closeout process, hinders DOD's oversight and its ability to develop targeted approaches to address the causes of why contracts remain open—especially if there are similar issues across the agency—and could make it difficult to identify areas that may need improvement. Further, without DOD oversight and monitoring of the performance measures set at the component level, DOD will not be able to assess agency-wide progress in managing contract closeout. At the component level, we found that each of the DOD components we reviewed had data on the number of contracts to be closed and had goals and performance measures in place. However, with the exception of DCMA, the components were unable to track—and therefore address—challenges related to where contracts were in the closeout process, such as how many contracts were awaiting DCAA audits or needed action to be taken by their contracting staff. For example:

Starting in 2013, the Army established a contract closeout task force in an effort to reduce over-age contracts. In fiscal year 2015, the Army set an overall goal of reducing its over-age contracts to 70 percent or below of the total contracts due for closeout. Further, the Army has established specific percentage goals for its contracting activities. According to the Army's Contracting Enterprise Review for the third quarter of fiscal year 2017, only one of the Army's five contracting activities was on track to meet its fiscal year 2017 goal.
Overall, the Army reported that it had a total of 231,627 firm-fixed-priced and flexibly-priced over-age contracts, which constituted about 86 percent of the total contracts due for closeout. The Army does not have the data broken out by contract type.

In January 2016, the Navy established a goal for each of its contracting activities of reducing the number of over-age contracts by 5 percent by the end of 2016 and by 10 percent cumulatively for 2017. In May 2017, the Navy conducted its first annual review of over-age contracts with its senior leadership and reported that 6 of the 10 contracting activities met the 2016 goal. Overall, the Navy reported it had a total of 74,453 firm-fixed-priced and 10,637 flexibly-priced over-age contracts as of December 2016 across its 10 contracting activities, including the Marine Corps.

In 2015, the Air Force established a goal to eliminate its over-age contracts by fiscal year 2020. To do so, the Air Force established a goal for fiscal year 2016 of reducing the number of contracts needing to be closed out by 10 percent; in fiscal year 2017, the goal rose to 20 percent. The Air Force reported that for fiscal year 2016, the 10 percent reduction goal was met for firm-fixed-price contracts. For flexibly-priced contracts, however, the Air Force reports two categories of contracts—"cost" and "other"—and reported an increase in over-age "cost" contracts from 69 percent to 72 percent and an increase in "other" contracts from 82 percent to 85 percent. As of June 2017, the Air Force reported 33,844 firm-fixed-priced and a combined total of 21,036 flexibly-priced contracts due for closeout across its 15 contracting activities.

For firm-fixed-priced contracts, DCMA established a goal of reducing its over-age contracts by 50 percent in fiscal year 2016, which it met.
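The Army's reported figures above can be checked with simple arithmetic. The sketch below is a hypothetical illustration, not the Army's actual methodology: the helper function and the back-computed total are assumptions, since the report gives only the over-age count (231,627) and the over-age share (about 86 percent).

```python
# Back-compute the implied total contracts due for closeout from the
# Army's reported over-age count and percentage, then test the result
# against the fiscal year 2015 goal of 70 percent or below.

def over_age_rate(over_age: int, total_due: int) -> float:
    """Share of contracts due for closeout that exceed FAR time frames."""
    return over_age / total_due

over_age = 231_627                       # reported over-age contracts
total_due = round(over_age / 0.86)       # implied total, roughly 269,000
rate = over_age_rate(over_age, total_due)
meets_goal = rate <= 0.70                # the Army's stated goal
print(f"{rate:.0%} over-age; meets 70% goal: {meets_goal}")
```

At roughly 86 percent over-age, the Army was well above its 70 percent goal, consistent with the Contracting Enterprise Review finding cited above.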
DCMA further established a goal that it would have no more than 869 over-age firm-fixed-priced contracts after fiscal year 2017 and no more than 350 after fiscal year 2018. For flexibly-priced contracts, DCMA established a goal, from fiscal year 2016 through fiscal year 2020, of reducing its over-age flexibly-priced contracts by an additional 20 percent each year. As of March 2017, DCMA reported that it had 70,322 firm-fixed-priced and flexibly-priced over-age contracts, and as of May 2017 it was generally on track to meet its goals for both firm-fixed-priced and flexibly-priced contracts.

DLA runs monthly reports of contracts that are recommended for closeout, and in January 2017, DLA acquisition officials reported that 784 firm-fixed-priced contracts were recommended for closeout. Most of DLA's contracts are firm-fixed-priced contracts, with few flexibly-priced contracts. DLA has instituted a shorter goal for closing out a firm-fixed-priced contract—within 120 days—as opposed to the FAR time frame of 180 days after contract completion. DLA officials stated that the agency consistently meets this goal, closing approximately 99 percent of its contracts within the shorter time frame. According to DLA acquisition officials, management attention over the last 2 years resulted in a reduction in the number of contracts in the backlog, with the intention of preventing another backlog from recurring.

DOD officials noted that DOD has several department-wide initiatives to help components address their contract closeout backlogs. For example, in December 2013, DOD implemented a policy change, increasing the obligation threshold from $150,000 to $500,000 for contracts that could qualify for automatic closeout. To qualify, contracts must be under $500,000, firm-fixed-priced, and not have certain contract clauses, such as patents, that require contracting officers to take action. For contracts meeting these criteria, DOD systems automatically close the contracts.
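The automatic-closeout criteria described above can be sketched as a simple eligibility screen. This is a minimal illustration, not DOD's actual system logic; the `Contract` record and its field names are hypothetical, and only "patents" is shown among the blocking clauses because it is the one the report names.

```python
# A hedged sketch of the DOD automatic-closeout screen: under $500,000,
# firm-fixed-priced, and no clauses requiring contracting officer action.
from dataclasses import dataclass, field


@dataclass
class Contract:
    obligated: float                      # total obligation in dollars
    contract_type: str                    # e.g., "firm-fixed-priced"
    clauses: set = field(default_factory=set)


BLOCKING_CLAUSES = {"patents"}            # clauses needing CO action


def qualifies_for_auto_closeout(c: Contract) -> bool:
    return (c.obligated < 500_000
            and c.contract_type == "firm-fixed-priced"
            and not (c.clauses & BLOCKING_CLAUSES))


print(qualifies_for_auto_closeout(
    Contract(obligated=120_000, contract_type="firm-fixed-priced")))  # True
```

A contract failing any one criterion, for example a $600,000 obligation or a patents clause, would fall back to manual closeout by contracting staff.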
In August 2015, DOD added a contract closeout module in one of its data systems to identify and automatically close contracts that were not covered by other automated closeout processes. This initiative leverages implementation of the Procurement Data Standard across the various DOD contract writing systems to improve the visibility and accuracy of contract-related data needed to determine whether automated closeout can occur. DOD reported that over 12,000 contracts were closed in fiscal year 2016 across the department using the new module. DOD is also working to ensure that contracts closed in its contract writing systems, for example the Standard Procurement System, are reflected as closed in other data systems, such as Electronic Document Access. Further, DOD awarded a contract through the AbilityOne program in 2010 for contract closeout support services. The AbilityOne program provides career opportunities for people who are blind or have severe disabilities, including service-disabled veterans. The program also trains and employs wounded veterans to support contract closeout activities. In September 2015, DOD awarded a follow-on 5-year contract to AbilityOne to provide continued contract closeout support services. The contract has a not-to-exceed value of $75 million. DOD officials stated that, since the effort started, more than 317,000 contracts—across the various DOD components—had been closed through the AbilityOne contracts as of May 2017.

Department of Health and Human Services

HHS management does not have information on the extent to which the agency has contracts due for closeout or where contracts are in the closeout process. Having such information could help the agency in its oversight of contract closeout by identifying and addressing the causes of why contracts remain open—such as determining whether the issues affecting contract closeout are similar across the components. According to HHS acquisition officials, this information is managed at the component level.
While there is value in components tracking and managing their own progress, HHS will not be able to assess agency-wide progress in managing contract closeout without oversight and monitoring of the performance measures set at the component level. According to the federal standards for internal control, management should use quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks—such as recovering improper payments or identifying unspent funds for use elsewhere. While various HHS components reported that they are taking actions to address contract closeout, such as establishing goals and performance measures to track their progress in closing contracts, we focused on CMS, which accounted for about 30 percent of HHS's contract dollars in fiscal year 2015. CMS tracks the number and type of contracts that are overdue for closeout. According to senior CMS acquisition officials, in October 2014 they established a goal of closing 2,250 contracts per year. In fiscal year 2016, CMS surpassed this goal and closed 2,831 contracts. As of June 2017, CMS had already met this goal for fiscal year 2017, closing 2,653 contracts, and reported that it had 2,244 firm-fixed-priced and 2,867 flexibly-priced over-age contracts that still needed to be closed. In addition, CMS issues monthly reports on contracts due for closeout that are shared among the various CMS offices. CMS acquisition officials stated that having management-level attention has had a positive effect in identifying and addressing issues affecting contract closeout.

Our review found that DHS management made a commitment to address contract closeout challenges, gained insight into the extent of its contract closeout backlog, and has initiatives underway to address at least a portion of that backlog.
It has not, however, established goals and performance measures to assess its progress in reducing its contract closeout backlog. To respond to a material weakness identified in the agency's 2014 annual Financial Report, senior DHS management initiated an effort in March 2016 to identify the extent of the department's contract closeout backlog. This effort, jointly led by the Chief Procurement Officer and the Chief Financial Officer, used DHS's contract reporting system to pull data from FPDS-NG and identify contracts with a period of performance end date that had elapsed by more than 6 months for firm-fixed-priced contracts and by more than 36 months for flexibly-priced contracts. According to a March 2016 memorandum, DHS estimated that it had approximately 382,000 over-age contracts—those that were beyond the FAR-set time frames for closeout. DHS determined that 352,000 (about 92 percent) of the over-age contracts were considered "low-risk" contracts—contracts awarded using simplified acquisition procedures or firm-fixed-priced contracts—with the remaining 30,000 being flexibly-priced contracts. DHS has ongoing efforts to address over a quarter of its low-risk, firm-fixed-priced contracts. From the list of 352,000 low-risk contracts, DHS financial management officials identified 5,695 over-age firm-fixed-priced contracts with unliquidated obligations of $50,000 or less. DHS officials then worked with their components' acquisition and financial management staff to verify the unliquidated obligation amounts and confirm that the contracts were ready for closeout. DHS published the list of verified contracts in an October 2016 Federal Register notice, requested that contractors submit any outstanding invoices associated with these contracts within 60 days of publication, and indicated that, if it did not receive any outstanding invoices, it planned to close the contracts.
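The over-age screen DHS applied in its March 2016 data pull can be sketched as a small classification rule: firm-fixed-priced contracts more than 6 months past their period of performance end date, and flexibly-priced contracts more than 36 months past, are flagged as over-age. The field names below are illustrative assumptions, not DHS's actual data schema.

```python
# Flag over-age contracts using the thresholds from the DHS effort:
# 6 months for firm-fixed-priced, 36 months for flexibly-priced.
from datetime import date


def months_elapsed(end: date, today: date) -> int:
    """Whole months between the period of performance end and today."""
    return (today.year - end.year) * 12 + (today.month - end.month)


def is_over_age(contract_type: str, pop_end: date, today: date) -> bool:
    limit = 6 if contract_type == "firm-fixed-priced" else 36
    return months_elapsed(pop_end, today) > limit


today = date(2016, 3, 1)  # month of the DHS memorandum
print(is_over_age("firm-fixed-priced", date(2015, 6, 30), today))  # True: 9 months
print(is_over_age("flexibly-priced", date(2014, 6, 30), today))    # False: 21 months
```

The longer window for flexibly-priced contracts reflects the additional steps, such as incurred cost audits, that the FAR allows before those contracts can be closed.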
DHS acquisition officials estimated that by August 2017 about 100,000 (about 28 percent) of its low-risk, over-age contracts would be closed through this effort. According to the DHS officials, the initiative was focused on closing older contracts for which the funds had already expired, and it did not collect information on the amount of funds deobligated from these over-age contracts. Further, DHS acquisition officials stated that they received feedback from DHS components that the effort was helpful in reducing some of the administrative paperwork, which allowed them to close these low-risk contracts. DHS officials also stated that in fiscal year 2015, they implemented a separate initiative to address unliquidated obligations that targeted the contracts in each DHS component with the highest amount of unliquidated obligations, which resulted in the deobligation of $164 million from those contracts. According to the DHS officials, they also have several efforts related to flexibly-priced contracts, including coordinating with DCAA on the status of audits, developing a tool to track the audits, and providing additional guidance and training to close the contracts. DHS has not, however, established goals and performance measures to assess the department's overall progress in reducing the total number of firm-fixed-priced or flexibly-priced contracts that need to be closed. Further, DHS does not have insight—either at the agency level or the component level—into where these contracts are in the closeout process, insight that would help identify where there are challenges in the process. This hinders the agency's ability to target its approaches to address the causes of why contracts remain open and could make it difficult to identify areas that may need improvement. Additionally, without goals and performance measures, DHS officials will not be able to track progress agency-wide on closing contracts over time.
According to the federal standards for internal control, management should use quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks—such as recovering improper payments or identifying unspent funds for use elsewhere. DOJ management does not have agency-wide information on its contracts that are eligible or overdue for closeout. Having such information could help the agency in its oversight of contract closeout by identifying and addressing the causes of why contracts remain open. The Deputy Assistant Attorney General for Policy, Management, and Planning, who also serves as DOJ's Senior Procurement Executive, is responsible for implementing agency-wide procurement policy and other management initiatives. DOJ acquisition officials told us, however, that DOJ is decentralized and that it is up to each bureau to manage contract closeout—including implementing policies, monitoring closeout efforts, and establishing any goals and performance measures. While there is value in components tracking and managing their own progress, without information on the number and type of contracts that need to be closed, as well as goals and performance measures, DOJ will not be able to track the department's overall progress on closing contracts or determine whether the issues affecting contract closeout are similar across the components and address them at an agency-wide level. According to federal standards for internal control, management should use quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks—such as recovering improper payments or identifying unspent funds for use elsewhere. The three DOJ components that we reviewed had varying levels of information on their contract closeout backlogs.
For example, senior FBI acquisition officials told us that while the FBI has procedures in place for contract closeout and for removing unliquidated obligations, the FBI does not have management-level oversight of contracts that need to be closed. According to the FBI acquisition officials, the FBI does not centrally track contract closeout information and would have to go to individual contracting offices for that information. The FBI accounts for about 22 percent of DOJ's contract dollars. BOP, which accounted for about 36 percent of DOJ's contract dollars in fiscal year 2015, generally lacked centralized information on the status of contracts needing to be closed out. ATF, which accounted for about 3 percent of DOJ's contract dollars in fiscal year 2015, identified 58 firm-fixed-priced contracts with a total of approximately $4.3 million in unliquidated obligations that needed to be closed out. ATF was one of the first bureaus in DOJ to implement DOJ's Unified Financial Management System (UFMS). According to ATF acquisition officials, they use UFMS to identify contracts that need to be closed based on whether the period of performance has elapsed and whether the contracts have unliquidated obligations. In addition, ATF has two staff dedicated to closing contracts, and senior ATF acquisition officials meet quarterly to discuss the progress of contracts due for closeout. ATF does not have the ability, however, to use UFMS to identify contracts that do not have unliquidated obligations. Further, FBI, BOP, and ATF do not have specific goals and performance measures in place. Having agency-wide information on contracts due for closeout could help DOJ in its oversight of contract closeout by identifying and addressing challenges that could be similar across its components.

State management does not have information on the extent of contracts that are eligible or overdue for closeout across the agency or where the contracts are in the closeout process.
Having such information could help the agency in its oversight of contract closeout and in addressing challenges at an agency-wide level. Further, State has not established goals and performance measures to assess its progress in reducing its over-age contracts. While State does not have information on the total number of contracts due for closeout, in 2009 it established a contract closeout team that tracks contracts for which it provides closeout support at the request of contracting officers. As of November 2016, the contract closeout team was working on 128 contracts due for closeout. State acquisition officials stated that they are working to improve their ability to track when newly awarded contracts become eligible for closeout. In October 2016, State implemented a pilot that uses the period of performance end date to identify the number of contracts ready for closeout on a quarterly basis. The pilot added new data fields into its system for contracts awarded or modified since October 2016. This is intended to help contracting officers monitor their contracts and move forward in the closeout process. The pilot ended in April 2017, and State acquisition officials expect full implementation within 18 months. While this initiative can have positive outcomes if implemented as planned, it does not pertain to contracts awarded prior to October 2016. For those older contracts, the new fields will not be applicable, limiting State's insight into those contracts. The lack of information on the full scope of contracts that need to be closed and where they are in the contract closeout process, coupled with the absence of goals and performance measures, means that State will not be able to track its progress across the agency on closing contracts over time.
Further, without this information, the agency may be hindered in its ability to target its approaches to address the causes of why contracts remain open, and it could be difficult to identify areas that may need improvement. According to the federal standards for internal control, management should use quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks—such as recovering improper payments or identifying unspent funds for use elsewhere. While DOD is generally required to use DCAA for contract audit support services, DCAA also provided these services to civilian agencies—including DHS, State, and HHS—on a reimbursable basis. As noted previously, the NDAA for Fiscal Year 2016 included a provision prohibiting DCAA from performing audit support services on behalf of other federal agencies until DOD certified that DCAA had reduced its incurred cost audit inventory to below 18 months. Starting in January 2016, DCAA notified the civilian agencies for which it had planned to do reimbursable work in fiscal year 2016 that it would no longer be able to perform audits for them until it met the statutory requirements. This affected approximately 500 audits that DCAA had planned to perform. DCAA had to coordinate with the agencies for which it does work to determine whether DOD had audit responsibility over certain contractors. For contractors that did not fall under DOD cognizance, the other agencies had to identify alternate means to meet their contract audit needs. Several of the agencies we reviewed took actions to obtain audit services. For example, DHS established blanket purchase agreements with private auditing firms to conduct incurred cost audits for its contractors, while State issued orders under an existing one.
DHS acquisition officials stated that since the vast majority of DHS's contractors are also DOD contractors, the agency intends to continue to rely on DCAA for audits where DCAA is already performing efforts for DOD and can provide a timely response. In instances where DCAA services are not available or DCAA cannot provide a timely response, DHS plans to use the private firms for contractors over which it has cognizance. As of July 2017, DHS had not awarded any orders off of its blanket purchase agreements. State awarded four orders off of an existing blanket purchase agreement for incurred cost audit support from a private accounting firm. State acquisition officials stated that they had received two incurred cost audit reports and will begin the process of negotiating rates with the contractor. They also stated that they will continue to use DCAA because DOD has cognizance over many of State's contractors. HHS officials stated that some of their components use DCAA for incurred cost audits, but others are using alternate options such as conducting the audit work internally or contracting out to private firms for audit support services. Further, in April 2016, the Office of Management and Budget sent out a survey to federal agencies to gauge the effect of the DCAA prohibition and determine the agencies' audit needs. The survey determined that there was enough of a need for contract audit support services that, in August 2016, the Federal Aviation Administration—in coordination with the Office of Management and Budget—led a civilian agency working group to address this gap. Since then, the working group has conducted market research to identify the extent to which civilian agencies relied on DCAA or private sector providers to perform financial audits on their behalf, the extent to which the private sector can address the need for contract audit support services, and potential contract approaches to meet the needs of civilian agencies.
For example, the working group determined that federal agencies spent about $100 million annually on contract audit support services, either through reimbursable work performed by DCAA or through contracts awarded to private accounting firms. The working group is collaborating with the Office of Management and Budget and the General Services Administration on a contract solution that maximizes existing contracts already available across multiple agencies using the General Services Administration's Federal Supply Schedules program. Further, the working group is preparing an ordering guide to assist agencies with placing contracts for contract audit-related services. The guide, expected to be completed by August 2017, will also identify best practices to address concerns regarding the quality of audits.

DCAA has made progress in reducing its inventory of incurred cost proposals awaiting audit, cutting it by about half since fiscal year 2011, and has closed more than three-quarters of its oldest proposals—those submitted for years prior to fiscal year 2014. This reduction was due to several initiatives that DCAA implemented in recent years, such as risk-based sampling, conducting multi-year audits, and dedicating more staff resources to conduct incurred cost audits. DCAA did not, however, meet its goal of having 2 years of incurred cost proposals in its inventory by fiscal year 2016 and may not be able to meet its revised goal to do so by the end of fiscal year 2018. Further, our work identified two areas in which DCAA may be missing opportunities or currently lacks information to help identify additional ways to reduce its inventory: (1) assessing actions for reducing the amount of time it takes for DCAA to begin an incurred cost audit and establishing related performance measures to assess its progress, and (2) evaluating the use of multi-year auditing and establishing related performance measures.
DCAA has reduced its overall inventory of incurred cost proposals awaiting audit from about 31,000 in fiscal year 2011 to about 14,000 as of the end of fiscal year 2016. Over that same time period, DCAA reduced what it characterizes as its backlog of old incurred cost proposals—those submitted for fiscal year 2013 and prior—from 21,000 to below 5,000. DCAA did not, however, meet its original goal of having a 2-year inventory of audit proposals—eliminating its backlog of proposals older than 2 years—by fiscal year 2016 and acknowledged that meeting its revised goal to do so by the end of fiscal year 2018 will be challenging. DCAA policy officials stated that they were unable to meet the goal of eliminating the backlog due to resource constraints, including workforce challenges such as hiring freezes. Overall, as of the end of fiscal year 2016, DCAA's total inventory included 14,208 incurred cost proposals, representing approximately $825 billion in auditable dollar value (ADV) (see figure 3). DCAA attributes its progress in reducing its total inventory, as well as its backlog of incurred cost proposals awaiting audits, to various efforts, such as management attention in prioritizing incurred cost audits, as well as two specific initiatives—the implementation of a risk-based approach to identifying proposals for audit and multi-year audits, in which multiple proposals are covered under a single audit. DCAA has reduced its inventory primarily through the use of a risk-based approach to conducting audits. Under this approach, DCAA focused its resources on conducting audits of proposals that it deemed high-risk or that exceeded $250 million in ADV. According to DCAA policy officials, DCAA auditors are supposed to make the risk assessment concurrently with determining that a proposal is adequate.
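The reductions cited above can be verified arithmetically. The short sketch below, a simple check rather than any DCAA methodology, confirms that the total inventory fell by roughly half and the old backlog by more than three-quarters.

```python
# Check the reported DCAA inventory reductions:
#   total inventory: ~31,000 (FY2011) down to ~14,000 (end of FY2016)
#   old backlog (FY2013 and prior): 21,000 down to below 5,000

def reduction(before: float, after: float) -> float:
    """Fractional reduction from a starting level."""
    return (before - after) / before

total = reduction(31_000, 14_000)      # about 0.55, i.e., roughly half
backlog = reduction(21_000, 5_000)     # about 0.76, over three-quarters
print(f"total inventory down {total:.0%}, old backlog down {backlog:.0%}")
```

Both figures support the report's characterizations: the total inventory fell by about 55 percent, and the old backlog by about 76 percent.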
Factors that DCAA considers when conducting a risk assessment include whether a specific risk was identified by an external source—such as a contracting officer—or by the audit team as having a material impact on the proposal being assessed, business system deficiencies, and prior audit experience with the contractor, among others. DCAA officials stated that DCAA audits all proposals that are deemed high risk, regardless of ADV. As of the end of fiscal year 2016, DCAA's data indicate that contractors had submitted 9,309 incurred cost proposals that were either deemed adequate by DCAA or were awaiting an adequacy review. DCAA reported it had made a risk assessment on 8,426, or about 91 percent, of those proposals. DCAA policy officials stated that several factors contributed to the gaps in risk assessments, such as instances where audits of a contractor's proposals for earlier years are ongoing and DCAA would need to consider the results of those audits when assessing risk for the contractor's proposals for later years. For incurred cost proposals that were deemed low-risk and were $250 million or below in ADV, DCAA would audit a certain percentage of those proposals, with the percentages varying by stratum. As a result, DCAA conducts far fewer audits on low-risk, lower-dollar-value proposals than it did prior to initiating the risk-based approach in 2012. For low-risk and lower-dollar-value proposals that are not sampled, DCAA issues a low-risk memorandum that recommends that the contracting officer use his or her authority to determine the contractor's final indirect cost rates and proceed with closing the contract. Unless they are assessed to be high-risk, DCAA will close the majority of these proposals with a low-risk memorandum. Since the risk-based initiative was implemented in 2012, DCAA has issued a total of 18,292 low-risk memorandums to close out proposals, compared to a total of 9,641 incurred cost audit reports.
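The risk-based disposition described above can be sketched as a small decision rule: every high-risk proposal is audited, as is any proposal over $250 million in ADV; remaining low-risk proposals are sampled at stratum-specific rates, and unsampled ones receive a low-risk memorandum. The sampling rates and strata below are invented for illustration, since the report does not publish DCAA's actual percentages.

```python
# Hedged sketch of the DCAA risk-based audit disposition. The SAMPLE_RATES
# values and stratum boundaries are hypothetical placeholders.
import random

SAMPLE_RATES = {"under_1m": 0.02, "1m_to_250m": 0.10}  # hypothetical


def disposition(high_risk: bool, adv: float, rng: random.Random) -> str:
    # All high-risk proposals and all proposals over $250M ADV are audited.
    if high_risk or adv > 250_000_000:
        return "audit"
    # Low-risk proposals are sampled by stratum; the rest get a memo.
    stratum = "under_1m" if adv <= 1_000_000 else "1m_to_250m"
    return "audit" if rng.random() < SAMPLE_RATES[stratum] else "low-risk memo"


rng = random.Random(0)
print(disposition(True, 500_000, rng))       # audit (high risk, any ADV)
print(disposition(False, 300_000_000, rng))  # audit (ADV over $250M)
```

Under this structure, most low-risk, lower-dollar proposals close with a memorandum rather than an audit, which is consistent with the 18,292 memorandums versus 9,641 audit reports cited above.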
In developing the risk-based approach, DCAA assessed the costs associated with performing audits at different ADV levels against the savings associated with identifying unallowable or questioned costs. DCAA determined that it had a higher return on investment for higher-ADV proposals and that the return on investment was negative for audits conducted on lower-dollar proposals. For example, DCAA reported that even under its risk-based approach, it conducted 767 audits on incurred cost proposals with ADVs of $1 million or less from fiscal years 2014 through 2016, but expended approximately $18 million more in staff resources than the government recovered through identifying unallowable or questioned costs. DCAA policy officials stated they regularly assess results and, if appropriate, revise the sampling percentages. DCAA policy officials also noted that the use of multi-year auditing—through which it combines audits of two or more incurred cost proposals into a single audit—has helped reduce the inventory. According to DCAA's data, multi-year auditing reduced the average number of hours to conduct an audit by 40 percent compared with conducting separate single-year audits. DCAA, however, does not actively track at the agency level how many proposals have been closed or are planned to be closed using this process. DCAA policy officials stated that DCAA's management information system does not have a specific field to collect information on open proposals that are planned to be closed using multi-year audits. Instead, DCAA policy officials stated that they can determine the number of proposals closed through multi-year auditing only once the audit reports have been issued. DCAA reported that it used multi-year audits to close 1,232 and 1,536 incurred cost proposals in fiscal years 2015 and 2016, respectively, which constituted about 13 percent and 19 percent, respectively, of the total number of incurred cost proposals closed in those years.
While DCAA has made progress in reducing its inventory of incurred cost proposals awaiting audit, our work identified two areas in which DCAA may be missing opportunities or lacking information to help identify additional ways to reduce its inventory: (1) assessing actions for reducing the amount of time it takes for DCAA to begin audit work and establishing related performance measures to monitor its progress, and (2) evaluating the use of multi-year auditing and establishing related performance measures. Federal standards for internal control call for the establishment of clear, consistent objectives and the identification and analysis of what measures will be used to determine if an agency is achieving those objectives. DCAA's data for fiscal year 2016 indicate that once a contractor submitted an adequate incurred cost proposal, it took DCAA on average 885 days—or nearly 2 and a half years—to complete the incurred cost proposal audit. Further, our analysis found that DCAA's backlog of contractor proposals submitted for 2013 and prior years includes 51 adequate proposals with $1 billion or more in ADV, submitted by at least 15 of DOD's largest contractors, for which audits have not been completed. The number of days since these 51 proposals were determined adequate ranged from 78 to 2,206 at the end of fiscal year 2016, meaning that in at least one case a contractor submitted an adequate cost proposal more than 6 years ago but DCAA has not yet completed the audit. According to DCAA policy officials, staff availability is the primary factor for the delay before starting audit work. For example, proposals closed in fiscal year 2016 waited in DCAA's queue an average of 747 days before the start of audit work. From the time that DCAA initiated the audit—which it defines as the date DCAA holds an entrance conference with the contractor—it took DCAA about 138 days on average to complete the audit in fiscal year 2016. 
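As a rough consistency check, the fiscal year 2016 averages cited above decompose into queue time plus active audit time:

```python
# Averages reported for proposals closed in fiscal year 2016.
avg_queue_days = 747  # average wait in DCAA's queue before audit work began
avg_audit_days = 138  # average days from entrance conference to completion
avg_total_days = avg_queue_days + avg_audit_days

print(avg_total_days)                     # 885 days
print(round(avg_total_days / 365.25, 2))  # 2.42 years, i.e. "nearly 2 and a half"
```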
For the average days that DCAA took to complete incurred cost audits from fiscal years 2011 through 2016, see figure 4. DCAA policy officials attributed the delay in initiating an audit once adequacy is determined to several factors, including the lack of staff and the timing of submissions—the majority of adequate proposals are received in June each year, leaving little time to take action before the end of the fiscal year. Further, DCAA officials noted that, historically, DCAA used a "6-24-6" month framework for conducting incurred cost audits. DCAA officials noted that the FAR provides contractors 6 months to submit an incurred cost proposal, and, if DCAA is able to complete its audit of that proposal within 24 months, contracting officers will have 6 months to close out flexibly-priced contracts. Delays in receiving an adequate proposal will affect contracting officers' ability to close out the contracts in a timely manner. Even though the 6-24-6 framework is not being met in practice, DCAA has not established specific goals for initiating audits, nor has it assessed whether the framework under which it currently operates should be revised to take into account the realities of the time frames for contractors to submit adequate proposals or DCAA's own staffing shortages. Assessing and implementing options to reduce the amount of time DCAA takes to begin its incurred cost audit work and establishing performance measures could help DCAA further reduce its inventory. Complicating DCAA's ability to plan and initiate audit work are proposals submitted by contractors that are determined to be inadequate. While DCAA has started, or could start, audit work on almost 90 percent of the backlog of 4,328 incurred cost proposals that were considered adequate, more than 10 percent—452 proposals—were still considered inadequate. 
For example, we identified 10 proposals of $1 billion or more in ADV submitted for fiscal years 2011 through 2013 by major defense contractors that were still considered inadequate as of September 2016. These 10 proposals collectively amounted to about $36 billion in ADV, or about 9 percent of DCAA's total amount of ADV in its backlog. For fiscal years 2014 through 2016, about 45 percent of the incurred cost proposals submitted by contractors were considered inadequate. Figure 5 depicts the extent to which proposals associated with DCAA's incurred cost inventory were considered adequate—which includes proposals pending an adequacy review—or inadequate. DCAA officials acknowledged that they do not currently have insight into the reasons why DCAA determined that a contractor's proposal was inadequate, the number of times that a contractor submits revised proposals until one is deemed adequate, or the length of time it takes to receive an adequate proposal, but they noted they recently began an initiative to do so. Additionally, DCAA recently began to study the feasibility of developing a web-based submission portal for incurred cost proposals that could allow contractors the option to submit their proposals with real-time visibility and guidance on common issues. This could lessen the number of times proposals are returned by DCAA as inadequate, since contractors could identify potential issues before initially submitting proposals. Additionally, DCAA does not actively track how many proposals are planned to be closed using multi-year audits. These audits accounted for 19 percent of the total number of incurred cost proposals closed in fiscal year 2016, and, according to DCAA policy officials, DCAA would like to continue the use of multi-year audits to gain work efficiencies by combining proposals under one audit. 
DCAA has not, however, fully evaluated how the process could be improved nor established related performance measures, such as the number of proposals closed, ADV examined, the timeliness of the audits, or the impact on contractors. Federal standards for internal control call for the establishment of clear, consistent objectives and the identification and analysis of what measures will be used to determine if an agency is achieving those objectives. Industry representatives noted that multi-year audits take more effort on their part to support—especially for older proposals—and do not enable them to correct deficiencies in a timely fashion. DCAA is aware of these concerns and has sought contractor input about the efficacy and usefulness of multi-year audits, but it has not done a comprehensive assessment, including of how, if at all, the use of multi-year audits affects industry, or determined how the process could be improved. As a result, it would be difficult for DCAA to assess whether there are areas of multi-year auditing where additional efficiencies could be gained. We found that closing out contracts is not the highest priority for contracting officers, who are charged with awarding and administering contracts for products and services to meet mission needs. Yet this is a critical step to ensure the government receives the goods and services it purchases at the agreed-upon price and, if done in a timely manner, provides opportunities to utilize unspent funds for other needs. Most of the agencies we reviewed delegated responsibility for closing out contracts to their components; however, none of the five agencies and only one of the components we reviewed had the critical elements that would assist them in overseeing their efforts to more effectively manage their respective contract closeout backlogs. 
Having centralized information on the number and type of contracts that need to be closed out and where the contracts are in the closeout process could help management address why contracts remain open and thereby reduce the contract closeout backlog. While agencies may tailor their approaches to their specific needs and organizational structures, federal internal control standards require that agency management use quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks—especially if the risks and challenges are similar across the agency. DCAA's investment of significant management attention and resources, as well as its use of risk-based approaches to conducting audits, has enabled DCAA to significantly reduce both its overall inventory and its backlog of older incurred cost proposals. Doing so should help contracting officers close more of their outstanding flexibly-priced contracts, enable DCAA to focus more of its resources on other audit responsibilities, and reduce some of the burden on industry. Despite this progress, DCAA has 14,208 incurred cost proposals in its inventory, representing approximately $825 billion in ADV as of fiscal year 2016; hence, DCAA cannot afford to miss opportunities to further improve its incurred cost audit processes and timeliness. In this regard, DCAA has not assessed options and has not established performance measures for reducing the length of time to begin audit work on incurred cost proposals; the primary reason for the delay is the limited availability of DCAA staff to begin the audit work. Further, DCAA has not fully assessed or established performance measures on the use of multi-year audits, which it hopes to expand to help further reduce its inventory. Without such information, DCAA will be missing opportunities to assess options for further reducing its inventory of incurred cost proposals. 
To enhance management attention to closing out contracts, we are making the following seven recommendations, one to each of the five agencies in our review and two to DCAA to manage its incurred cost inventory.

The Secretary of Defense should develop a means for department-wide oversight into components' progress in meeting their goals on closing contracts and the status of contracts eligible for closeout. (Recommendation 1)

The Secretary of Health and Human Services should develop a means for department-wide oversight into components' progress in meeting their goals on closing contracts and the status of contracts eligible for closeout. (Recommendation 2)

The Secretary of Homeland Security should develop a means, either at the agency or the component level, to track where the contracts are in the closeout process, and establish goals and performance measures for closing contracts. (Recommendation 3)

The Attorney General should direct the Senior Procurement Executive to ensure the development of a means to track data on the number and type of contracts eligible for closeout and where the contracts are in the closeout process, as well as a means to assess—at the agency or component level—progress by establishing goals and performance measures for closing contracts. (Recommendation 4)

The Secretary of State should develop a means at the agency level to track data on the entirety of the number and type of contracts eligible for closeout and where the contracts are in the closeout process, and establish goals and performance measures for closing contracts. (Recommendation 5)

The Director, DCAA, should assess and implement options for reducing the length of time to begin incurred cost audit work and establish related performance measures. (Recommendation 6)

The Director, DCAA, should comprehensively assess the use and effect of multi-year audits on both DCAA and contractors and establish related performance measures. 
(Recommendation 7) We provided a draft of this report to DOD, DHS, State, HHS, and DOJ for review and comment. With the exception of DHS, the agencies concurred with our recommendations. DOD, DHS, and State provided written comments, which are reprinted in appendixes I-III, respectively, and summarized below. In comments provided in emails from the respective audit liaisons, DOJ and HHS concurred with the recommendations. DOD, DHS, and DOJ also provided technical comments, which we incorporated as appropriate. DOD concurred with our recommendation to develop a means for department-wide oversight into components' progress in meeting their goals on closing contracts and the status of contracts eligible for closeout. Additionally, DOD concurred with our recommendations that the Director, DCAA, assess and implement options for reducing the length of time to begin incurred cost audit work and comprehensively assess the use and effect of multi-year audits. DOD also agreed that the length of time to start an incurred cost audit should be minimized to the maximum extent practicable. DOD noted that in addition to the initiatives identified in our report, DCAA will continue to assess options to improve timeliness and implement actions to do so. Further, DOD agreed to conduct a more comprehensive analysis regarding the use and effect of multi-year audits on the contractors being audited as well as customers relying on the audit reports. DOD plans to complete these actions by March 31, 2018. Our draft report also included a recommendation that DCAA develop a means to centrally track risk assessment determinations. In responding to our draft report, DCAA provided additional information on how risk determinations were tracked and provided supporting data. We verified that information and, as a result, removed the recommendation and incorporated the data into the report as appropriate. 
DHS did not concur with our recommendation to develop a means to track where contracts are in the closeout process and establish related performance goals and measures. DHS agreed that contract closeout is a critical step in the procurement process, but noted that contract closeout activities are limited by available resources. DHS stated that the agency does not have a tool to track where contracts are in the closeout process and that obtaining such a tool would be resource-intensive. Further, DHS noted that having such a tool would not provide DHS with an effective way to remove bottlenecks from the closeout process. DHS noted, however, that it is committed to improving the closeout process and intends to establish a working group to assess current closeout metrics and related performance measures. Additionally, the working group will assess the list of contracts eligible for closeout, monitor the progress of reducing the backlog, and determine whether existing tools are available for obtaining information on the closeout process. The working group will also recommend improvements based on the availability of resources for closeout actions by November 30, 2018. While we did not call on DHS to obtain a new tracking tool, we believe that the planned efforts of the working group could meet the intent of our recommendation. State agreed with our recommendation to develop a means to track where contracts are in the closeout process and establish related performance goals and measures. State noted that it anticipates developing goals and performance measures by December 2019. 
We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Director, Defense Contract Audit Agency; the Director, Defense Contract Management Agency; the Director, Defense Logistics Agency; the Secretaries of Health and Human Services, Homeland Security, and State; the Attorney General; appropriate congressional committees; and other interested parties. This report will also be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Congress enacted a provision in the National Defense Authorization Act (NDAA) for Fiscal Year 2016 that prohibited the Defense Contract Audit Agency (DCAA) from conducting audits for non-defense agencies unless the Secretary of Defense certified that DCAA's backlog of incurred cost audits was less than 18 months of incurred cost inventory. The law defined DCAA's incurred cost inventory as the level of contractor incurred cost proposals from prior fiscal years that were currently being audited by DCAA. In September 2016, the Under Secretary of Defense (Comptroller) certified to Congress that DCAA had reduced its incurred cost proposal inventory below the required 18-month threshold, to an average of 17.6 months. To support this determination, DCAA used its total inventory of incurred cost proposals as of August 2016. DCAA made several adjustments to this total inventory to remove those incurred cost proposals for which it could not conduct an audit (i.e.,
were determined to be inadequate), as well as reimbursable proposals (since these proposals are primarily for non-defense agencies, which DCAA was prohibited from auditing under the NDAA for Fiscal Year 2016) and direct-cost-only work. Using the revised number, DCAA then took the number of elapsed days since the contractor submitted an adequate incurred cost proposal and divided it by 30 days to approximate the number of elapsed months. The number of elapsed months was then averaged across the inventory to arrive at the total inventory calculation. In addition to the contact named above, Bruce Thomas (Assistant Director), Anh Nguyen (Analyst-in-Charge), Peter Anderson, Giny Cheong, William Cordrey, Lorraine Ettaro, Kurt Gurka, Julia Kennon, Elisha Matvay, and Roxanna Sun made major contributions to this report.
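The certification arithmetic described above—elapsed days per adequate proposal, divided by 30 and averaged across the inventory—can be sketched as follows. The elapsed-day values are hypothetical, chosen only to illustrate how an average near the certified 17.6 months might arise.

```python
def average_inventory_age_months(elapsed_days):
    """Average age of the incurred cost inventory in months: each proposal's
    elapsed days since adequate submission divided by 30, then averaged."""
    months = [days / 30 for days in elapsed_days]
    return sum(months) / len(months)

# Hypothetical elapsed-day values for four proposals (illustration only).
ages = [120, 450, 900, 640]
print(round(average_inventory_age_months(ages), 1))  # 17.6, below the 18-month threshold
```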
Closing contracts is a key step in the contracting process. GAO and others have previously reported that large numbers of contracts were not closed within time frames set by federal regulations, which can expose the government to financial risk. DCAA's backlog of audits of contractors' incurred cost proposals contributes to the delays in closing out flexibly-priced contracts. GAO was asked to review the extent of the contract closeout backlog at federal agencies. In addition, a House Armed Services Committee report included a provision for GAO to assess DCAA's incurred cost audit backlog. This report addresses the extent to which (1) selected federal agencies effectively manage contract closeout, and (2) DCAA effectively manages its incurred cost audit backlog. GAO selected five agencies based on the number of contracts awarded and dollars obligated in fiscal year 2015. GAO analyzed documents and interviewed acquisition officials to assess how contract closeout is managed. GAO also analyzed data on DCAA's incurred cost audit backlog. The effectiveness of management efforts to reduce the number of contracts overdue for closeout varied across the five agencies GAO reviewed—the Departments of Defense, Health and Human Services, Homeland Security (DHS), Justice, and State. None of the agencies had critical elements agency-wide that would help track and oversee contract closeout processes—the number and type of contracts to be closed, where the contracts were in the process, and goals and performance measures. Having such information could help management address why contracts remain open and reduce the contract closeout backlog. Since 2011, the Defense Contract Audit Agency (DCAA) has reduced its inventory of contractors' incurred cost proposals awaiting audit by about half, to 14,208, and DCAA has significantly reduced its backlog of older proposals—those for 2013 and prior—as of September 2016. 
To do so, DCAA used a risk-based approach to reduce the number of audits and began conducting multi-year audits, in which two or more incurred cost proposals are closed under a single audit. Nevertheless, DCAA did not meet its initial goal of eliminating its backlog by fiscal year 2016, and DCAA officials stated that the agency is unlikely to meet its revised goal by the end of fiscal year 2018. Further, GAO found that in fiscal year 2016, DCAA averaged 885 days from when a contractor submitted an adequate incurred cost proposal to when the audit was completed. The lag was due to limited availability of DCAA staff to begin audit work, as it took DCAA an average of 138 days to complete the actual audit work (see figure). DCAA may be missing opportunities to help identify additional ways to reduce its inventory. For example, DCAA has not assessed options to reduce the time to initiate audit work or comprehensively assessed how the use of multi-year audits could be improved, and it has not established related performance measures for either effort. GAO is making seven recommendations, including recommendations to each of the five agencies to develop means to track critical elements of contract closeout efforts and to DCAA to assess its efforts to reduce its backlog and establish related performance measures. Four agencies concurred, and DHS identified planned actions that could address the intent of the recommendation.
DOD is one of the largest and most complex organizations in the world to manage effectively. While DOD maintains military forces with unparalleled capabilities, it continues to confront pervasive, decades-old management problems related to its business operations—which include outdated systems and processes—that support these forces. These management weaknesses cut across all of DOD’s major business areas, such as human capital management, including the department’s national security personnel system initiative; the personnel security clearance program; support infrastructure management; business systems modernization; financial management; weapon systems acquisition; contract management; and last, but not least, supply chain management. All of these areas are on our high-risk list for DOD. Supply chain management consists of processes and activities to purchase, produce, and deliver materiel—including ammunition, spare parts, and fuel—to military forces that are highly dispersed and mobile. DOD relies on defense and service logistics agencies to purchase needed items from suppliers using working capital funds. Military units then order items from the logistics agencies and pay for them with annually-appropriated operations and maintenance funds when the requested items—either from inventory or manufacturers—are delivered to the units. Since 1990, DOD supply chain management (previously, inventory management) has been on our list of high-risk areas needing urgent attention because of long-standing systemic weaknesses that we have identified in our reports. Our high-risk series reports on federal government programs and operations that we have identified, through audits and investigations, as being at high risk due to their greater vulnerabilities to fraud, waste, abuse, and mismanagement. In recent years, we also have identified high-risk areas to focus on the need for broad-based transformations to address major economy, efficiency, or effectiveness challenges. 
The high-risk series serves to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. DOD has taken a number of steps to improve supply chain management in the past several years, including preparing strategic planning documents and experimenting with a new way to manage its logistics portfolio. In 2005, the Under Secretary of Defense (Acquisition, Technology, and Logistics) released the Focused Logistics Roadmap, which presented an “as-is” compendium of logistics programs and initiatives and provided a baseline for future focused logistics capability analysis and investment within DOD. With the release of the “as-is” roadmap, DOD also identified a need for a future-oriented “to-be” roadmap. DOD released the “to-be” roadmap, now known as the Logistics Roadmap, in July 2008. In a separate effort, the Deputy Secretary of Defense began, in September 2006, testing a new approach for managing the development of joint capabilities and included joint logistics as a test case. This concept, capability portfolio management, is an effort to manage groups of similar capabilities across the DOD enterprise to improve interoperability, minimize capability redundancies and gaps, and maximize capability effectiveness. In February 2008, the Deputy Secretary of Defense issued a memo formalizing the first four test cases, including joint logistics, and setting out plans for further experimentation with five additional test cases. In that memo, the Under Secretary of Defense (Acquisition, Technology, and Logistics) was designated the capability portfolio management civilian lead for logistics, with U.S. Transportation Command serving as the military lead. According to the memo, the capability portfolio managers will make recommendations to the Deputy Secretary of Defense and the Deputy’s Advisory Working Group on capability development issues within their respective portfolio. 
In addition, the memo states that the capability portfolio managers have no independent decision-making authority and will not infringe on existing statutory authorities. A DOD directive, issued in September 2008, established the policy for using capability portfolio management to advise the Deputy Secretary of Defense and the Heads of the DOD Components on how to optimize capability investments across the defense enterprise. DOD has identified total asset visibility as a key focus area for improving supply chain management. DOD has defined total asset visibility as the ability to provide timely and accurate information on the location, movement, status and identity of units, personnel, equipment and supplies; and the capability to act on that information to improve the overall performance of DOD logistics practices. We have previously reported on issues associated with DOD's lack of asset visibility. DOD's latest roadmap includes a number of initiatives and programs that involve the implementation of IUID and RFID, two technologies that enable electronic identification and tracking of equipment and supplies and that DOD expects will improve its asset visibility. DOD's 2007 Enterprise Transition Plan lists IUID and RFID as enablers to achieve the goal of end-to-end materiel visibility in the DOD supply chain. Specifically, the plan states that IUID enables the accurate and timely recording of information on the location, condition, status and identity of appropriate tangible personal property to ensure efficient and effective acquisition, repair, and deployment of items, and states that IUID will contribute to improvements in the responsiveness and reliability of the DOD supply chain. 
The plan also states that RFID will improve process efficiencies in shipping, receiving, and inventory management, contribute to reductions in cycle time, and increase confidence in the reliability of the DOD supply chain through increased visibility of the location of an item or shipment. IUID includes the application of a data matrix through direct inscription or placement of a permanent machine-readable label or data plate onto an item. The data matrix contains a set of data elements that form a unique item identifier. This data matrix identifies an individual item distinctly from all other items that DOD buys and owns, similar to the vehicle identification number on a car. Items can be marked either by the vendor before entering into DOD’s inventory, or by a DOD component after DOD takes possession of an item. In both cases, information about the item and the mark are uploaded to the IUID Registry, which is located in Battle Creek, Michigan, and managed by the Defense Logistics Agency. The registry serves as the central repository for data about all of the items in the DOD inventory that have been marked with a UID data matrix. Although the registry is intended to contain information about all of the marked items, DOD has issued policy indicating that the registry is not to be used as a property accountability system or to maintain detailed transaction data. As part of its IUID initiative, DOD plans to use this data to more closely track items and more effectively manage its inventory. In July 2003, DOD directed that all new solicitations and contracts issued on or after January 1, 2004, require the use of IUID for items meeting established criteria. Additionally, in December 2004, the IUID policy was updated to require the application of UID to legacy items (that is, existing personal property items in inventory and operational use). In this memo, DOD requested all program and item managers plan to complete this marking by the end of 2010. 
The number of items this requirement covers is unknown. DOD officials estimate it is probably around 100 million; however, they stated the actual number of items could be much higher. RFID is a data input system that consists of (1) a transponder, generally referred to as a tag; (2) a tag reader, also known as an interrogator, that reads the tag using a radio signal; (3) centralized data processing equipment; and (4) a method of communication between the reader and the computer. The reader sends a signal to the tag, which prompts the tag to respond with information about the item to which it is attached. The information is forwarded to central data processing equipment, which can then be used to get detailed information about the container or item, such as the shipping date or the date received. The information contained in the central data processing equipment can provide visibility over inventory items throughout the supply chain. DOD's RFID policy, issued on July 30, 2004, finalizes business rules for implementing two types of RFID tags—active and passive. This report focuses on DOD's implementation of passive RFID, which is a newer technology than active RFID and less well-established in DOD's supply chain. We previously examined DOD's implementation of passive RFID in September 2005. A passive RFID tag is an electronic identification device consisting of a chip and an antenna, usually embedded within a "smart" packaging label. Passive RFID tags have no battery; they draw power from the reader, which sends out electromagnetic waves that induce a current in the tag's antenna. Passive RFID readers transmit significant power to activate the passive tags and are not currently approved for use on ammunition, missiles, or other potentially explosive hazards. Primary responsibility for determining how and where to implement IUID and RFID, as well as funding the implementation and operations of these technologies, resides with DOD components. 
These costs include the purchase of necessary equipment, costs associated with marking and tagging items, and changes to automated supply systems. In an effort to coordinate the components’ efforts to implement various automatic identification technologies, DOD designated U.S. Transportation Command as the lead functional proponent for RFID and related AIT implementation within the DOD supply chain in September 2006. U.S. Transportation Command subsequently published an AIT concept of operations in June 2007 and an implementation plan for this concept of operations in March 2008. Additionally, the Unique Item Identification Policy Office was established in 2002 in the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics) to develop and implement unified IUID policy across DOD. Although DOD intended that its Logistics Roadmap would provide a comprehensive and integrated strategy to address logistics problems department-wide, we found that the roadmap falls short of this goal. The roadmap documents numerous initiatives and programs that are under way and organizes these around goals, joint capabilities, and objectives. However, the roadmap lacks three elements necessary in a comprehensive, integrated strategy which would make it a more useful tool for DOD’s senior logistics leaders in guiding, measuring, and tracking progress toward achieving DOD logistics goals and objectives—key stated purposes of the roadmap. First, the roadmap does not identify the scope of logistics problems or gaps in logistics capabilities, information that could allow the roadmap to serve as a basis for establishing priorities to improve logistics and address any gaps. Second, the roadmap lacks outcome-based performance measures that would enable DOD to assess and track progress toward meeting stated goals and objectives. 
Finally, DOD has not clearly stated how it intends to integrate the roadmap into its decision-making processes and who will be responsible for this integration. Without a strategy that provides a basis for determining priorities and identifying gaps, that includes key strategic planning elements, and that is integrated into decision-making processes, DOD will have difficulty guiding, measuring, and tracking progress toward meeting its logistics goals and objectives and providing the visibility needed to fully inform senior decision makers of logistics needs and priorities across the department. DOD's Logistics Roadmap, released in July 2008, documents numerous initiatives and programs that are under way within the department. The roadmap includes a total of 56 initiatives and 62 programs, based on information submitted by DOD components. According to the data in the roadmap, the total cost of implementing the initiatives and programs from fiscal year 2008 to 2013 is estimated at more than $77 billion. Table 1 summarizes the initiatives and programs by DOD component. DOD initially began to develop the Logistics Roadmap in response to direction from the Under Secretary of Defense (Acquisition, Technology, and Logistics) in 2005. In the memorandum accompanying the 2005 Focused Logistics Roadmap, the Under Secretary directed the creation of a follow-on "to be" roadmap. While the Under Secretary recognized that the Focused Logistics Roadmap provided a baseline of programs and initiatives for future focused logistics capability analysis and investment and documented significant resource investment in logistics programs and initiatives, he also recognized that the roadmap indicated that key focused logistics capabilities would not be achieved by 2015. As a result, he expected the "to be" roadmap to present credible options for achieving focused logistics capabilities for consideration by the Defense Logistics Board.
The “to be” roadmap eventually became the Logistics Roadmap, released in July 2008 by the Deputy Under Secretary of Defense (Logistics and Materiel Readiness). Officials in the Office of the Secretary of Defense (OSD) characterized the “to be” roadmap as an effort to portray where the department was headed in the logistics area, how it would get there, and what progress was being made toward achieving its objectives. Further, they said the roadmap would institutionalize a continuous assessment process linking ongoing capability development, program reviews, and budgeting. DOD officials also testified that the roadmap would include a detailed depiction, over time, of existing, planned, and desired capabilities to effectively project and sustain the joint force. Moreover, they said the roadmap would establish a coherent framework for achieving the best and most cost-effective joint logistics outcomes to support the warfighter. We have emphasized the importance of DOD developing an overarching logistics strategy that will guide the department’s logistics planning efforts and have stated that without an overarching logistics strategy, the department will be unable to most economically and efficiently support the needs of the warfighter. Although DOD originally intended for the roadmap to be issued in February 2007, the department suspended its development while it tested its new capability portfolio management concept. Joint logistics was one of the capability areas included in this test. In November 2007, the Office of Supply Chain Integration, under the direction of the Deputy Under Secretary of Defense (Logistics and Materiel Readiness), began the formal development of the roadmap by coordinating with the military services, combatant commands, the Defense Logistics Agency, and other OSD offices to gather information on their logistics initiatives and programs. 
The initial data call from the Deputy Under Secretary requested that DOD components identify logistics-related initiatives (e.g., RFID and the Single Army Logistics Enterprise) and acquisition programs of record (e.g., C-130J Hercules and Fuel System Supply Point) that are critical to successfully meeting logistics capability needs. The Deputy Under Secretary requested additional information about the initiatives and programs, such as a description, expected benefits and impact, implementation milestones, and resources. OSD, in presenting information on the department's logistics initiatives and programs, structured the roadmap around three goals, three joint capabilities, and 22 objectives. The objectives in the roadmap are aligned to three logistics goals that were enumerated in DOD's Guidance for Development of the Force, a department-wide strategic planning document that followed the 2006 Quadrennial Defense Review. The three goals are as follows:

- unity of effort – the synchronization and integration of joint, multinational, interagency, and non-governmental logistics capabilities focused on the joint force commander's intent;
- visibility – having assured access to information about logistics processes, resources, and requirements in order to gain the knowledge necessary to make effective decisions; and
- rapid and precise response – the ability to meet the constantly changing logistics needs of the joint force.

The objectives are aligned further with three joint capability areas that DOD has identified for joint logistics.
These joint capabilities are as follows:

- supply – the ability to identify and select supply sources, schedule deliveries, receive, verify and transfer product, and authorize supplier payments; and the ability to see and manage inventory levels, capital assets, business rules, supplier networks and agreements, as well as assessment of supplier performance;
- maintain – the ability to manufacture and retain or restore materiel in a serviceable condition; and
- deployment and distribution – the ability to plan, coordinate, synchronize, and execute force movement and sustainment tasks in support of military operations, including the ability to strategically and operationally move forces and sustainment to the point of need and operate the Joint Deployment and Distribution Enterprise.

The 22 objectives were developed by OSD, and each is generally aligned to both a goal and a joint capability, although some objectives are aligned with multiple joint capabilities. OSD provided guidance to the participating DOD components on how to align their initiatives and programs with the objectives. Table 2 summarizes the organization of the roadmap, including the number of initiatives and programs linked to each objective. OSD intends for the Logistics Roadmap to serve as a starting point for improvement efforts across the department. In the message from the Deputy Under Secretary of Defense (Logistics and Materiel Readiness), included at the beginning of the roadmap, the Deputy Under Secretary explained that the roadmap initiates the process of defining the department's logistics capability portfolio in terms of initiatives and programs, and documents specific actions under way to achieve logistics goals and supporting objectives, examining them from the perspective of experts who must advise senior leaders. In addition, he stated that the roadmap begins an evolutionary process of linking logistics initiatives and program performance assessments to identifiable and measurable outcomes.
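The roadmap's three-level organization, with initiatives and programs linked to objectives, and objectives aligned to a goal and one or more joint capabilities, can be sketched as a simple data model. The objective names and initiative links below are invented purely for illustration; the actual roadmap contains 22 objectives, 56 initiatives, and 62 programs.

```python
# Hypothetical sketch of the roadmap's structure: each objective maps to
# one goal and one or more joint capabilities; initiatives and programs
# map to objectives. Entries are illustrative, not from the real roadmap.

from collections import defaultdict

GOALS = {"unity of effort", "visibility", "rapid and precise response"}
CAPABILITIES = {"supply", "maintain", "deployment and distribution"}

# objective -> (goal, set of joint capabilities)
objectives = {
    "improve asset visibility": ("visibility", {"supply"}),
    "synchronize distribution": ("rapid and precise response",
                                 {"supply", "deployment and distribution"}),
}

# initiative or program -> objective it is linked to (hypothetical)
initiatives = {
    "RFID rollout": "improve asset visibility",
    "IUID legacy marking": "improve asset visibility",
    "Joint distribution pilot": "synchronize distribution",
}

# Summarize, as Table 2 does, how many initiatives link to each objective.
counts = defaultdict(int)
for objective in initiatives.values():
    counts[objective] += 1

for name, (goal, caps) in objectives.items():
    assert goal in GOALS and caps <= CAPABILITIES  # alignment check
    print(f"{name}: goal={goal}, capabilities={sorted(caps)}, "
          f"linked initiatives={counts[name]}")
```

The model makes the report's structural criticism concrete: nothing in such a mapping records capability gaps or outcome targets, only which efforts exist and where they are filed.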
Finally, he explained that the roadmap is intended to be part of an ongoing process of assessment and feedback linked to the Quadrennial Defense Review and to the department's Planning, Programming, Budgeting, and Execution cycles, and to be a tool for the DOD logistics community to use in guiding, measuring, and tracking progress of the ongoing transformation of logistics capabilities. OSD also expects to update and improve the roadmap periodically. The Office of Supply Chain Integration, under the Deputy Under Secretary of Defense (Logistics and Materiel Readiness), stated that an updated roadmap may be completed in the summer of 2009. According to the Deputy Under Secretary's message in the roadmap, future updates to the roadmap will incorporate new initiatives and programs, as well as results from capability-based assessments, joint experiments, and joint technology demonstrations; report progress toward achieving logistics capability performance targets; and help connect capability performance targets to current and planned logistics investment for an overarching view of DOD's progress toward transforming logistics. In its current form, the Logistics Roadmap lacks three elements that are needed in order for it to serve as a more useful tool for DOD's senior logistics leaders in guiding, measuring, and tracking progress toward achieving DOD logistics goals and objectives—one of the key stated purposes of the roadmap. Specifically, the roadmap does not identify the scope of DOD logistics problems and capability gaps and lacks outcome-oriented performance measures. Additionally, DOD has not clearly stated how the roadmap will be integrated into its decision-making processes and who will be responsible for this integration. DOD officials stated that they plan to remedy some of these weaknesses in their future efforts to update and expand the roadmap. The Logistics Roadmap does not identify the scope of DOD's logistics problems or gaps in logistics capabilities.
In interviews prior to developing the roadmap, DOD officials responsible for the roadmap said that it would identify the scope of DOD’s logistics problems and gaps in logistics capabilities. This information, if included, could allow the roadmap to serve as a basis for logistics decision makers to establish priorities for formulating, funding, and implementing corrective actions. However, the current roadmap does not include a discussion about department-wide or DOD component-specific logistics problems. For example, the roadmap does not discuss logistics problems encountered during the ongoing operations in Iraq and Afghanistan. Similarly, while the roadmap links initiatives and programs to three joint capabilities, it does not indicate where there are gaps in either current or desired capabilities. Without addressing the scope of logistics problems and gaps in capabilities, the roadmap’s utility is limited and it does not fully inform senior decision makers of the warfighters’ logistics needs or provide them with a basis for determining priorities to meet those needs by filling capability gaps. Addressing logistics capabilities is a core function of the roadmap. For example, according to the roadmap, it initiates the process of defining the department’s logistics capability portfolio in terms of initiatives and programs, and provides a foundation for future logistics capability assessments and investment analyses. In addition, the roadmap states that the Guidance for the Development of the Force, from which the roadmap’s three goals are drawn, directs DOD to focus on better integrating its logistics capabilities and processes to meet the demands of an emerging operational environment. The roadmap also states that it will allow the department’s senior leaders to more effectively advocate for the logistics initiatives and programs most critical for providing globally responsive, operationally precise, and cost-effective logistics support for the warfighter. 
In addition, DOD officials stated that the roadmap should be of use in helping decision makers as they determine whether current programs and initiatives are sufficient to close any capability gaps that may be identified. DOD officials have begun a series of assessments for 3 of the 22 objectives in the roadmap and directed DOD components to develop these assessments to identify capability gaps, shortfalls, and redundancies and to recommend solutions. DOD views such assessments as essential for providing a strategic view of the department’s progress toward achieving the goals and objectives of the roadmap. DOD officials said that the results of all 22 of these assessments will be included in the next version of the roadmap, tentatively scheduled for release in the summer of 2009. Until the assessments for each of the 22 objectives are completed, the roadmap will not begin to provide senior decision makers with a basis for determining priorities for developing and maintaining logistics capabilities to support the warfighter. The roadmap lacks outcome-based performance measures that would enable DOD to assess and track progress toward meeting stated goals and objectives. Prior to its development, OSD officials said the roadmap would allow the department to monitor progress toward achieving its logistics objectives, and include specific performance goals, programs, milestones, resources, and metrics to guide improvements in supply chain management and other areas of DOD logistics. Based on interviews with OSD officials prior to the completion of the roadmap, we previously reported that the roadmap would include performance measures and link objective, quantifiable, and measurable performance targets to outcomes and logistics capabilities. However, we found that the roadmap does not include outcome-based performance measures of the objectives, which would allow DOD to measure progress toward meeting these stated objectives. 
While many of the individual initiatives include performance goals or implementation milestones, the objectives lack such measures. We also found that although the objectives were categorized by DOD-wide logistics goals, they were not linked to those goals with performance or cost metrics. The lack of outcome-based performance measures makes it difficult to measure progress on how the objectives are meeting the stated goals. An official from the Office of Supply Chain Integration, responsible for leading the development of the roadmap, stated that performance measures or assessments of the objectives to measure progress were not included in this version of the roadmap because of a tight schedule for its completion and release. As noted previously, DOD decided to delay development of the roadmap until the capability portfolio management test cases had been completed; however, they had committed to Members of Congress that the roadmap would be released by the summer of 2008. Within this time frame, officials said they were unable to address performance measures or assessments. They stated that future versions of the roadmap will include these elements, and assessments to measure progress toward achieving 3 of the 22 objectives were ongoing at the time we conducted our audit work. In October 2008, we requested descriptions of the assessment approach and methodology; however, the DOD official coordinating the assessments indicated that the assessments were a work in progress and the approach had not been finalized. We have emphasized the importance of performance measures as management tools for all levels of an agency, including the program or project level, to track an agency’s progress toward achieving goals, and to provide information on which to base organizational and management decisions. 
In a previous review of the Supply Chain Management Improvement plan, we found that many of the initiatives in the plan, as well as the three focus areas these initiatives were to address, lacked outcome-focused performance measures, limiting DOD’s ability to fully demonstrate the results achieved through its plan. We also found that the plan lacked cost metrics that might show efficiencies gained through these supply chain improvement efforts, either at the initiative level or overall. Without outcome-focused performance measures and cost metrics, DOD is unable to fully track progress toward meeting its goals for improving logistics from the component to the department level, limiting the department’s ability to fully demonstrate results achieved through the roadmap. Increasing DOD’s focus on measurable outcomes will enable the department’s internal and external stakeholders, including OMB and Congress, to track the interim and long-term success of its initiatives and help DOD determine if it is meeting its goals of achieving more effective and efficient supply chain management. Performance metrics are critical for demonstrating progress toward achieving results and providing information on which to base organizational and management decisions. Inadequate information on performance may be an impediment to improving program efficiency and effectiveness. DOD has not clearly stated how it intends to integrate the roadmap into its decision-making processes and who will be responsible for this integration. For example, DOD has not shown how the roadmap could shape logistics budgets developed by individual DOD components or address joint logistics needs through the new capability portfolio management process. 
According to the Deputy Under Secretary's message at the beginning of the roadmap, the document will be part of an ongoing assessment and feedback process linked to the Quadrennial Defense Review and the Planning, Programming, Budgeting, and Execution cycles and will support senior leader decision making in a constrained resource environment. However, on the basis of our review, we found that DOD has not clearly stated the manner in which the roadmap will be formally or informally used within these processes, how it will be used to inform senior decision makers, and who will be responsible for its implementation. In our prior work on DOD's transformation efforts, we have emphasized the importance of establishing clear leadership and accountability for achieving transformation results, as well as having a formal mechanism to coordinate and integrate transformation efforts. In the absence of clear leadership, accountability, and a formal implementation mechanism, DOD may have difficulty in resolving differences among competing priorities, directing resources to the highest priorities, and ensuring progress if changes in senior personnel occur. DOD officials explained that procedures for how DOD officials use the roadmap within these existing processes have not been formalized, but provided various scenarios in which the assessments associated with the roadmap's objectives could possibly be used. They stated that upon completion of the assessments for the individual objectives, the assessments could be inserted into program and budget reviews, and could be used to inform the development of future versions of the Quadrennial Defense Review and the Guidance for the Development of the Force.
Additionally, an official with the Office of Supply Chain Integration responsible for leading the development of the roadmap stated the assessments could be incorporated into DOD’s budget process to document the current status of initiatives and programs, and could aid in identifying redundancies across DOD. DOD officials have stated various ways in which the roadmap and its associated assessments could be useful to senior decision makers, but they have not clearly defined how the products will be used to inform the Quadrennial Defense Review, Guidance for the Development of the Force, and the budget process. Some DOD component officials who participated in the development of the roadmap said it could be useful in the capability portfolio management process. However, DOD officials stated that because capability portfolio management was still new and had not been formalized at the time the roadmap was under development, they were not sure how it would be implemented and how or if the roadmap could be useful in this process. As mentioned previously, the roadmap defines the logistics portfolio and in light of the recent formalization of the joint logistics capability portfolio, the roadmap could serve as the starting point to assist the capability portfolio managers with their responsibilities. The capability portfolio managers for joint logistics, the Under Secretary of Defense (Acquisition, Technology and Logistics) and the Commander, U.S. Transportation Command, are responsible for providing recommendations or advice to appropriate DOD decision makers and forums regarding integration, coordination, and synchronization of capability requirements for capability investments, and for evaluating capability demand against resource constraints, identifying and assessing risks, and suggesting capability trade-offs within their portfolio to the heads of the DOD components. 
Given that capability portfolio management has been recently formalized, it remains to be seen how the capability portfolio managers will implement the process and what types of information they will need to fulfill their responsibilities. A comprehensive integrated strategy to address logistics problems department-wide is critical, in part, because of the diffuse organization of DOD logistics. Responsibility for logistics within DOD is spread across multiple components with separate funding and management of logistics resources and systems. For example, the Under Secretary of Defense (Acquisition, Technology and Logistics), as part of OSD, serves as the principal staff element of the Secretary of Defense in the exercise of policy development, planning, resource management, fiscal, and program evaluation responsibilities. The Secretary of Defense designated the Under Secretary of Defense as the department’s Defense Logistics Executive with authority to address logistics and supply chain issues. However, each of the military services is separately organized under its own secretary and functions under the authority, direction, and control of the Secretary of Defense. The secretaries of the military departments are responsible for organizing, training, and equipping their forces under Title 10 of the United States Code. DOD policy states that each of the secretaries is directed to prepare and submit budgets for their respective departments, justifying before the Congress budget requests, as approved by the President; and to administer the funds made available for maintaining, equipping, and training their forces. As we have previously reported, the diffuse organization of DOD’s logistics operations complicates DOD’s ability to adopt a coordinated and comprehensive approach to joint logistics. 
Until the roadmap provides a basis for determining priorities and identifying gaps, incorporates performance measures, and is integrated into decision-making processes, it is likely to be of limited use, beyond the current processes and information available, to senior DOD decision makers as they seek to improve supply chain management. DOD has taken several steps toward implementing IUID and passive RFID but may face challenges achieving widespread implementation because it is unable to fully demonstrate the return on investment associated with these efforts to the military components that have primary responsibility for determining how and where these technologies are implemented. DOD and its military components have made some progress adopting these two technologies. These efforts include developing policy and guidance, establishing working groups and integrated process teams to share information and lessons learned both within and across the military components, providing funding to support implementation, and establishing pilot projects and initial implementation efforts at several locations. Despite these signs of progress, full implementation of IUID and passive RFID is still several years away under current time frames. At present, DOD is not able to fully quantify the return on investment associated with these technologies because it does not uniformly collect complete information on both the costs and benefits associated with implementing IUID and passive RFID. Additionally, effective integration of these technologies with supply chain processes and information systems is challenging and will require the military components to make significant commitments of funding and staff resources. Without the ability to fully demonstrate that the benefits of IUID and passive RFID justify the costs and efforts involved, DOD is likely to face difficulty gaining the support needed from the military components to overcome challenges associated with implementation.
DOD and its military components have taken several steps to facilitate, support, and undertake the implementation of IUID and passive RFID. Use of IUID and passive RFID was required by memoranda issued by the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics) in July 2003 and July 2004, respectively, and DOD and its military components have periodically issued policy and guidance to manage and inform users regarding the implementation of both technologies. For example, U.S. Transportation Command, the lead functional proponent for the implementation of AIT, including IUID and passive RFID, released an AIT Concept of Operations (CONOPS) in June 2007 and an AIT Implementation Plan in March 2008. The CONOPS and Implementation Plan provide information on DOD’s future vision for AIT use across the supply chain and are intended to establish a baseline standard for AIT use and implementation throughout DOD. Guidance on these technologies has also been published by DOD. For example, DOD has provided guidance concerning the use of IUID to support improved maintenance and materiel management processes, as well as detailed information on the technology and the mechanics of its implementation. DOD has taken other actions to support and facilitate the implementation of IUID and passive RFID. DOD established a UID Policy Office and designated staff resources toward RFID implementation in the Office of Supply Chain Integration. In addition to helping disseminate policy and guidance, the two offices play a role in promoting the technologies and educating the military components regarding implementation. For example, the offices have established Web sites for suppliers, program managers, and others involved in implementation efforts to access information on the technologies, including specifications and requirements, tutorials and trainings, guidance for implementation, and updates to existing policy and guidance. 
Additionally, the UID Policy Office holds biannual UID Forums to provide practical guidance to help educate military program managers and DOD contractors regarding IUID implementation, and the Supply Chain Integration Office holds annual RFID summits to highlight best practices across the department and provide a forum for discussion of RFID technologies and their potential applications to supply chain management. In addition to guidance developed at the department level, the military components are developing service-specific implementation plans for IUID and passive RFID. As of October 2008, the Army had issued a service-wide strategy for IUID implementation, and the Marine Corps and Air Force had both completed draft IUID implementation plans. While the Navy does not have a formal service-wide IUID implementation plan, a Navy official responsible for managing IUID implementation stated its draft serialized item management implementation plan contains information pertaining to DOD IUID guidance and requirements. For passive RFID, the Navy and Air Force had completed plans for implementation of the technology, the Army had completed a draft implementation plan, and the Marine Corps was in the process of updating its existing RFID implementation plan to incorporate information from the DOD AIT CONOPS. Efforts to implement the technologies also include information sharing across DOD and within its military components. DOD and its military components have established integrated process teams and working groups to define objectives and establish implementation timelines, identify common implementation challenges and potential solutions, and facilitate stakeholder communications. These teams focus on several areas related to implementation and operate both within and across the military components. For example, U.S. 
Transportation Command formed multiple integrated process teams dedicated to different segments of supply and distribution operations during the development of its AIT Implementation Plan, which encompasses both IUID and passive RFID. Additionally, the UID Policy Office has established and participated in a number of working groups to support the development and implementation of IUID policy. Integrated process teams and working groups also operate within the military components. For example, in September 2007, the Navy formed an IUID integrated process team whose four working groups meet monthly to discuss metrics for measuring implementation progress, technical solutions for implementation challenges, process mapping of implementation efforts, and internal and external communications regarding implementation. In December 2007, the Army also formed an IUID integrated process team, which developed the Army-wide implementation strategy for IUID and continues to meet to share lessons learned and discuss challenges related to implementation. The military components, DLA, and U.S. Transportation Command have funded implementation of both IUID and passive RFID through various mechanisms and to varying degrees. For instance, the Army funds AIT, which includes both IUID and passive RFID, through its regular budget process. Army officials estimated that, in fiscal years 2007 and 2008, the Army has spent $22.5 million on the implementation of IUID and has requested an additional $15 million per year for fiscal years 2009 through 2013. For passive RFID, Army officials estimated that the Army spent $2.2 million between the third quarters of fiscal year 2006 and 2008. Other services, however, do not uniformly provide designated funding for implementation. For example, Navy officials stated that implementation of IUID within the Navy is an unfunded mandate and funding for implementation must be taken out of operational budgets. 
Air Force officials also stated that funding for implementation is taken out of operational budgets by program managers. Additionally, DLA and U.S. Transportation Command funded a project that spanned multiple military components. Pilot projects and initial implementation efforts for both IUID and passive RFID are under way at multiple locations throughout the military components. Table 3 lists examples of pilot projects and initial implementations that DOD officials identified as important ongoing efforts. The implementation efforts listed in the table vary in scope, in terms of both the number of components and installations involved and the amount of resources required for full implementation. For example, the Alaska RFID Implementation project, which aimed to test and evaluate passive RFID within the DOD supply chain in order to streamline supply chain operations, spanned multiple military components and cost more than $27 million to implement. As a part of this pilot, passive RFID infrastructure was installed at DLA, Army, and Air Force locations in Alaska and California. Other implementation efforts, however, have been smaller and less resource intensive. For instance, the Robotic Systems Joint Project Office, which works to procure, field, sustain, and support ground robotics for the Army and the Marine Corps, implemented IUID at its Joint Robot Repair Fielding division at a cost of approximately $400,000 during fiscal years 2007 and 2008. The project office established a process for marking new acquisitions to its inventory with item unique identifiers and, to maximize the benefits of implementation, integrated IUID into its existing supply chain management data system. Full implementation of IUID and passive RFID remains several years away under current time frames. 
Although DOD initially projected that all items currently in its inventory required to be marked under IUID guidance would be marked with unique item identifiers by fiscal year 2010, officials stated that this target will not be met. According to DOD officials, as of October 2008 approximately 4 percent of the estimated 100 million items currently in DOD inventory have been marked with item unique identifiers. DOD officials stated that, at the current pace of implementation, full marking of legacy items will take many additional years. For example, the Air Force estimates that it will take until fiscal year 2021 to complete marking parts already in inventory with item unique identifiers. Air Force officials estimated that, since 2005, the Air Force has marked 10,000 items in its inventory, while the total number of Air Force items required to be marked exceeds 12.5 million. The DOD AIT Implementation Plan estimates that the implementation of technologies, including passive RFID, will be completed in 2015; however, current time frames indicate that it may take longer to fully implement the technology. Initial pilots of passive RFID called for in the DOD AIT Implementation Plan are under way at selected locations in each military service, but a DOD official responsible for coordinating passive RFID implementation across the department stated that the services are still in the process of gathering baseline information and the technology will not be fully functional at these locations until the end of fiscal year 2009. Additionally, according to the DOD AIT Implementation Plan, updated automatic information systems needed to support passive RFID and IUID may not be functional until after 2015. Updates to these systems are necessary in order for the components to derive benefit from these initiatives.
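The arithmetic behind the "many additional years" finding can be made explicit with the figures cited above. The annualized marking rate below is a rough assumption derived from the report's numbers (10,000 items marked over roughly three years), not an official DOD projection; it simply shows that the fiscal year 2021 target would require a dramatic acceleration over the observed pace.

```python
# Back-of-envelope calculation using figures cited in the report.
# The per-year rate is a rough illustrative assumption, not DOD data.

dod_total = 100_000_000           # estimated items requiring IUID marks
dod_marked = 0.04 * dod_total     # ~4 percent marked as of October 2008
remaining = dod_total - dod_marked
print(f"DOD-wide items still unmarked: {remaining:,.0f}")

af_marked = 10_000                # Air Force items marked since 2005
af_total = 12_500_000             # Air Force items required to be marked
elapsed_years = 2008 - 2005       # rough elapsed marking period
rate = af_marked / elapsed_years  # items per year at the observed pace
years_left = (af_total - af_marked) / rate
print(f"Air Force pace: ~{rate:,.0f} items/yr; "
      f"~{years_left:,.0f} more years at that pace")
```

At the observed pace the Air Force backlog would take thousands of years, which underscores how far the fiscal year 2021 completion estimate departs from demonstrated throughput.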
Furthermore, while infrastructure for reading passive RFID tags is in place in multiple locations throughout the military components, additional work is required to reach full implementation. According to a September 2008 report by the DOD Inspector General on DLA’s implementation of passive RFID, 10 percent of supply contracts examined did not contain the required RFID clause, and suppliers for 43 percent of contracts containing the required clause did not apply passive RFID tags to shipments they sent to depots. The Inspector General also found that installation-level understanding of the use and application of passive RFID was limited and additional training was needed to increase awareness of the technology and its application. Although implementation of IUID and passive RFID will require significant funding commitments and staff resources from the military components, DOD does not gather the cost and performance information needed to fully demonstrate return on investment for the technologies to the military components that have primary responsibility for determining how and where these technologies are implemented. While DOD gathers information on some of the costs associated with implementation, cost estimates do not include all of the funding or staff resources provided by the services to support implementation because funding for implementation at the component level is frequently taken out of operational accounts, rather than being directly allocated. The March 2008 DOD AIT Implementation Plan identified $744 million in programmed AIT-related funding for fiscal years 2008 through 2013, but does not include in its estimate funding that the military components take from operational accounts to support implementation efforts. A 2005 memo from the Under Secretary of Defense (Acquisition, Technology, and Logistics) requires acquisition programs to specifically identify funding for IUID in budget submissions. 
However, several officials from the military services stated that they divert resources from other efforts in order to facilitate implementation of IUID and passive RFID. Navy officials stated that implementation of IUID within the Navy is treated as an unfunded mandate and program managers at the installation level must take funding out of operational budgets in order to support implementation efforts. Army officials have faced similar challenges. For example, program managers involved in the Army’s implementation of IUID for small arms have had to release staff from other tasks to assist in the marking of weapons with item unique identifiers. Since funding and staff resources are often provided in this indirect manner, the total resources expended on the implementation of IUID and passive RFID may not be visible to decision makers, both at the component level and across DOD. Additionally, DOD does not require the military components to gather or report on outcome-based performance measures to demonstrate the extent to which benefits are being accrued through the implementation of IUID and passive RFID. While DOD does gather some information to assess implementation efforts across the military components, the information collected focuses on measures of implementation progress and does not include outcome-based performance measures. For example, while OSD and the military components are required to provide updates to DOD at quarterly IUID Scorecard Reviews, reporting requirements focus on the execution of implementation plans rather than benefits accrued from implementation. At the July 2008 scorecard review, military components provided installation-level implementation plan status updates and reported on implementation efforts, such as issuance of new policies and outreach activities. Furthermore, while U.S. 
Transportation Command’s AIT Implementation Plan identifies potential performance measures for automatic identification technologies and establishes a schedule to begin collecting some data in 2009, the military components have not yet been required to collect or report information pertaining to these metrics. Senior DOD officials involved in the implementation of passive RFID stated that they plan to collect this information in the future. During our site visits, officials at some locations were able to describe qualitative benefits derived from the implementation of IUID or passive RFID. However, the officials had not quantified the benefits they had observed. For instance, Army officials cited a number of benefits from the implementation of IUID by the Robotic Systems Joint Project Office. These included reductions in inventory size, shipping and receiving time, and data entry errors and increases in data quality, robustness, and processing speed. However, officials stated that they had not attempted to quantify these benefits. Other officials cited installation-level qualitative benefits for implementing passive RFID. For example, officials from DLA’s Defense Distribution Center in San Joaquin, California, said the implementation of passive RFID reduced the amount of time needed to prepare shipments. However, they lacked key data to quantify the extent of the time savings. Additionally, only limited efforts have been made to gather the baseline information needed to quantify change in performance outcomes over time. For instance, DLA gathered baseline information on shipping and receiving operations at the Defense Distribution Center in San Joaquin in September 2008, despite beginning its implementation of passive RFID in November 2004. 
Without data on the costs and benefits associated with the technologies, it is difficult for DOD to create a business case or other analysis that would fully demonstrate return on investment from implementing IUID and passive RFID to the military components. Both OMB and DOD have established guidance for conducting such analyses. The stated goal of OMB Circular A-94 is to promote efficient resource allocation through well-informed decision making by the federal government, and the circular provides general guidance on comparing the costs of alternative means of achieving the same objective or stream of benefits. Additionally, according to DOD Instruction 7041.3, economic analyses are an integral part of the planning, programming, and budgeting system of the department, and economic analysis calculations should include information on the costs and benefits associated with alternatives under consideration. While OSD and the military components have conducted some studies to assess the business case for the use of IUID and passive RFID, these studies have had mixed results. For example, a June 2008 analysis of alternatives for AIT in base-level Air Force supply and distribution processes found that implementation of the RFID vision presented in the DOD AIT CONOPS was not optimal, based on the costs and benefits associated with implementation. Instead, the Air Force determined that its current state of operations, with limited incorporation of passive RFID, functioned both effectively and efficiently. Broader analyses of return on investment, however, have arrived at different results. 
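The kind of comparison that Circular A-94 and DOD Instruction 7041.3 describe reduces to discounting each alternative's net cash flows (benefits minus costs) to present value. The sketch below is purely illustrative: the two alternatives, their cash flows, and the 7 percent discount rate are assumptions made up for the example, not figures from any DOD or OMB analysis.

```python
def npv(cash_flows, rate):
    """Net present value of yearly net cash flows (benefits minus costs).

    Year 0 is undiscounted; each later year t is discounted by (1 + rate)**t.
    """
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Hypothetical alternatives for achieving the same asset-visibility objective
# (amounts in millions of dollars):
# A: a $10M up-front implementation cost, then $3M/year in net benefits for 6 years.
# B: no up-front cost, but only $0.8M/year in net benefits for 6 years.
alt_a = [-10.0] + [3.0] * 6
alt_b = [0.0] + [0.8] * 6

rate = 0.07  # illustrative real discount rate
print(f"Alternative A NPV: {npv(alt_a, rate):.2f}M")
print(f"Alternative B NPV: {npv(alt_b, rate):.2f}M")
```

Under these assumed figures, the higher up-front investment in alternative A yields the larger discounted net benefit (about $4.30 million versus about $3.81 million for B), which is the kind of result a business case analysis would use to justify implementation spending.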
DOD released a business case analysis of passive RFID in April 2005 that projected overall cost savings from implementation of passive RFID would range from $70 million to $1.781 billion over a 6-year period. The analysis also found a reasonable-to-good expectation that implementation of passive RFID across DOD would provide an economic return on investment in the near term and an excellent expectation of economic returns in the long term. Additionally, a March 2005 cost-benefit analysis of IUID performed by OSD found that implementation of the technology would deliver benefits in both the short and long terms. However, these department-wide business case analyses for both technologies have been characterized by DOD officials involved in the coordination and management of IUID and passive RFID as overly broad and unconvincing because the analyses were largely based on data from private industry implementation efforts. DOD officials stated that the April 2005 DOD business case analysis for passive RFID and the March 2005 DOD IUID business case analysis were both high-level efforts that were discounted by the military components for overstating the potential benefits of the technologies, as well as the time frame in which those benefits would be achieved. In 2005, we identified unclear return on investment as an impediment to the implementation of passive RFID. This impediment remains today. Since return on investment for both IUID and passive RFID is not always clear to the military components charged with their implementation, it is difficult for DOD to convince program managers at the installation level to invest time and resources in overcoming challenges associated with implementing the technologies at the expense of other competing priorities. 
For example, officials from both the Army and the Navy who have responsibility for coordinating and managing implementation of these technologies in their respective components stated that implementation of IUID is given low priority by program managers, who do not see the benefits associated with implementation. DOD officials agreed that program managers resist implementation of the technologies when the value of implementation is unclear. In our previous work on supply chain management, we have stated that it is important for the Office of the Secretary of Defense to obtain the necessary resource commitments from the military services, DLA, and other organizations, such as U.S. Transportation Command, to ensure that initiatives are properly supported. At present, DOD’s inability to fully quantify return on investment has impeded implementation progress, as the military components charged with carrying out implementation are unable to clearly discern the benefits of the technologies and are reluctant to devote time and resources to implementation rather than to competing priorities. Effective integration of these technologies with supply chain processes and information systems is challenging and requires the military components to make significant commitments of funding and staff resources, often without promise of short-term benefit. As noted previously, DOD identified $744 million in programmed funding that will be necessary in fiscal years 2008 through 2013 to achieve the vision laid out in the AIT Implementation Plan. Military service officials stated that tasks required to achieve full implementation include installation of infrastructure and training of personnel to understand and use the technologies. 
Additionally, costly and complex business process changes are necessary for the military components to enable interoperability between automatic information systems used to gather data from IUID marks and passive RFID tags and service-specific supply data systems. Without these changes, data gathered through IUID and passive RFID cannot be accessed to derive benefit from the technologies. In some cases, data are not being gathered at all. Officials at three of the four locations participating in the Alaska RFID Implementation Project stated that they have yet to derive any benefit from passive RFID because of the lack of integration between RFID data collection platforms and supply chain information systems. Deriving benefit from IUID implementation has also been difficult. Officials from multiple military components stated that while IUID marking efforts are time consuming and resource intensive, lack of data system integration prevents implementation benefits from being realized. Without a clear return on investment, achieving the integration necessary to derive benefit from the technologies may be resource intensive to a degree that discourages the military components from investing in technology solutions. For instance, faced with a lack of information system interoperability, the Army decided against investing in technologies that would allow its legacy supply systems to use IUID and passive RFID data. Instead, the Army decided to delay obtaining benefit from the technologies for multiple years until Army-wide information systems that can directly communicate with one another are operational. Army officials stated that the costs associated with implementing an interim solution were prohibitive, given the uncertain return on investment for the technologies in the near term. The importance of supply chain management to the operational capability of U.S. 
forces, as well as the considerable resources being spent in this area, highlights the need to address long-standing problems that have resulted in our designation of this DOD function as a high-risk area. Given the diffuse organization of DOD’s logistics operations, senior DOD decision makers need a comprehensive, integrated strategy to guide the department’s efforts to make significant improvements. Although DOD’s Logistics Roadmap represents the latest attempt to establish such a strategy for the department, the lack of key elements we identified in our review calls into question the utility of this roadmap in addressing supply chain problems. Further, without the inclusion of these key elements, it will be difficult for DOD to demonstrate progress in addressing these problems and provide Congress with assurance that the DOD supply chain achieves DOD’s goal of providing cost-effective joint logistics support for the war fighter. Therefore, it will be important that DOD officials follow through on their intent to remedy weaknesses in the roadmap. Although incorporating IUID and passive RFID into the DOD supply chain offers the promise of technologies that may be able to help address long-standing problems of inadequate asset visibility, the department is unable to fully quantify the return on investment associated with the technologies to those in the military components responsible for implementation. Cost and benefit information collected from actual implementation efforts could form the basis for quantifying return on investment and help to encourage the military components to allocate resources that will be needed for widespread implementation of these technologies. Until the military components place higher priority on integration of IUID and passive RFID into their business processes, DOD will not realize the benefits it expects to achieve from these initiatives. 
To improve DOD’s ability to guide logistics initiatives and programs across the department and to demonstrate the effectiveness, efficiency, and impact of its efforts to resolve supply chain management problems, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to take the following three actions necessary to have a comprehensive, integrated strategy for improving logistics: Identify the scope of logistics problems and capability gaps to be addressed through the Logistics Roadmap and associated efforts. Develop, implement, and monitor outcome-focused performance measures to assess progress toward achieving the roadmap’s objectives and goals. Document specifically how the roadmap will be used within the department’s decision-making processes used to govern and fund logistics and who will be responsible for its implementation. To improve the likelihood that DOD will achieve the potential benefits it expects from the implementation of IUID and passive RFID, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics), in conjunction with the military components, to take the following two actions: Collect detailed information on the costs, including costs currently being funded from operational accounts, and performance outcomes for ongoing and future implementation of these two technologies. On the basis of these data, develop an analysis or analyses of the return on investment to justify expanded investment of resources in the implementation of the technologies. 
We also recommend that the Secretary of Defense direct the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director of the Defense Logistics Agency to determine, on the basis of the above analysis or analyses, whether sufficient funding priority has been given to the integration of these technologies into their respective business processes and, if not, to take appropriate corrective action. In its written comments on a draft of this report, DOD concurred with our recommendations and identified a number of corrective actions it has taken or plans to take. While we believe DOD’s actions, for the most part, respond to the issues raised in this report, several questions remain, including questions about both the methodology and the time frame for DOD’s assessments of the objectives in the roadmap. On the basis of DOD’s comments, we have modified our fourth recommendation to specify that DOD collect information on all costs, including costs currently being funded from operational accounts, associated with implementing these two technologies. The department’s written comments are reprinted in appendix II. DOD concurred with our three recommendations focused on improving its Logistics Roadmap and cited actions to address the recommendations. DOD stated that the roadmap is a living document and that the department continues progressing toward developing a more coherent and authoritative framework for guiding its logistics improvement efforts. Specifically, DOD stated that it has completed an initial review of three of the roadmap’s objectives as the framework for finalizing an assessment methodology. This initial review is intended to identify gaps, shortfalls, timing issues, and challenges throughout DOD’s supply chain. 
DOD also stated that, in addition to monitoring existing performance metrics, such as customer wait time, the department will determine which specific outcome-based performance measures can be linked to each of the objectives and goals within the roadmap. Finally, DOD stated that it has established an executive advisory committee to ensure that the roadmap is a useful tool in decision making. Our report describes the ongoing assessment effort that DOD cites in its comments. Although DOD did not provide a time frame for completing these assessments, DOD officials have previously stated that they tentatively expect to have all 22 assessments completed for the next iteration of the roadmap in July 2009. Because DOD was not able to provide information on its assessment methodology, we could not determine whether these assessments are likely to address the information gaps we identified in the current roadmap regarding the scope of DOD’s logistics problems and capability gaps; nor could we determine the extent to which these assessments might result in outcome-oriented performance measures that would enable DOD to assess progress toward achieving the roadmap’s goals and objectives. DOD’s decision to form an executive advisory committee appears to be a positive step. However, it remains unclear at this time how the roadmap will be integrated within the department’s existing decision-making processes used to govern and fund logistics; therefore, DOD will need to take additional steps to clarify how it intends to use the roadmap. DOD also concurred with our three recommendations aimed at improving the likelihood that the department will achieve the potential benefits it expects from implementing IUID and passive RFID. DOD cited a number of efforts to identify and collect performance metrics for IUID and passive RFID and to analyze this information to justify the expanded investment of resources in their implementation. 
DOD further stated that it will review the services’ Program Objective Memorandum (POM) inputs to ensure that, based on the department’s AIT investment plan, sufficient funding priority is given to integrating these technologies into their respective business processes. Our review indicated that much work remains for DOD to collect complete and useful performance data. Additionally, DOD did not indicate plans to gather additional cost information pertaining to the implementation of IUID and passive RFID. We continue to believe that cost information associated with the implementation of these technologies is important to any analysis of return on investment. As we noted in the report, some funding for the implementation of IUID and passive RFID is being taken out of operational accounts. Current POM information may not provide a complete picture of the costs associated with the implementation of IUID and passive RFID. Therefore, DOD should gather detailed information on the full costs associated with the implementation of both IUID and passive RFID, including those funded from operational accounts. We have modified our recommendation accordingly. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the U.S. Marine Corps; the Commander of U.S. Transportation Command; the Director of the Defense Logistics Agency; and the Director, Office of Management and Budget. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have questions concerning this report, please contact me at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
To determine the extent to which the Department of Defense’s (DOD) July 2008 Logistics Roadmap serves as a comprehensive, integrated strategy to improve DOD logistics, we reviewed its content and organization, as well as documents relating to its development, including DOD guidance to the components regarding submitting information and reviewing draft copies of the roadmap. We also reviewed memoranda directing components to conduct assessments for specific objectives included in the roadmap. We reviewed prior DOD logistics strategies and plans, including the 2005 Focused Logistics Roadmap and the DOD Plan for Improvement in the GAO High Risk Area of Supply Chain Management with a Focus on Inventory Management and Distribution, as well as other DOD strategic plans such as the Enterprise Transition Plan and the Quadrennial Defense Review. We reviewed DOD statements about the intended purposes of the roadmap that were made in congressional hearings, in discussions with our office conducted during prior GAO work in this area, and in the roadmap itself. We identified sound management principles based on prior work evaluating strategic planning efforts and performance assessments. We obtained information on DOD’s logistics capabilities portfolio management test case by reviewing DOD guidance and interviewing officials within the Office of the Joint Chiefs of Staff, who were responsible for managing the test case for joint logistics. We interviewed officials from DOD components submitting information for the roadmap, including the Army, Navy, Air Force, Marine Corps, the Defense Logistics Agency, the U.S. Transportation Command, the U.S. Joint Forces Command, and the Offices of the Assistant Deputy Under Secretaries of Defense for Supply Chain Integration, Transportation Policy, and Maintenance Policy and Programs. 
Over the course of these interviews, we obtained pertinent information and perspectives on the roadmap, efforts to compile and review the information included in the roadmap, and potential uses of the roadmap for logistics decision making. To obtain information on the progress DOD has made implementing item unique identification (IUID) and passive radio frequency identification (RFID), we reviewed DOD’s overall concept of operations and implementation plan for automatic identification technology, which includes IUID and passive RFID. We obtained briefing documents describing the status of IUID and passive RFID implementation. We obtained and reviewed various service-level implementation plans for IUID and RFID; however, because the majority of these plans were only recently released or in draft form, we did not evaluate the adequacy of these service-level plans. We also reviewed Office of Management and Budget (OMB) and DOD guidance on benefit-cost analysis and economic analysis for decision making. We visited and conducted interviews with officials involved in the coordination and management of these technologies within the Office of the Secretary of Defense (OSD), Defense Logistics Agency (DLA), the U.S. Transportation Command, and the military services. Additionally, we visited and observed the use of passive RFID technology at DLA’s Defense Distribution Center in San Joaquin, California; Travis Air Force Base, California; and the Naval Base Kitsap in Bangor, Washington. We also visited and observed the use of IUID at the Robotic Systems Joint Project Office and the Army Aviation and Missile Command, Alabama. 
We also interviewed officials at the following locations involved in implementing either IUID or passive RFID: Anniston Army Depot, Alabama; Army Project Manager Soldier Weapons, New Jersey; Navy Extremely High Frequency Satellite Communications Branch, California; Naval Air Systems Command, Maryland; Elmendorf Air Force Base, Alaska; Fort Richardson, Alaska; and Air Mobility Command, Illinois. We also interviewed officials responsible for managing the IUID registry in Battle Creek, Michigan. We also interviewed officials in the DOD Inspector General’s Office to review concurrent work that office is conducting on passive RFID. We conducted this performance audit from January 2008 through January 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Tom Gosling (Assistant Director), Grace Coleman, Nicole Harms, Brooke Leary, Andrew McGuire, Paulina Reaves, and Ben Thompson made significant contributions to this report.
Military operations in Iraq and Afghanistan have focused attention on the performance of the Department of Defense's (DOD) supply chain management. According to DOD, it spent approximately $178 billion on its supply chain in fiscal year 2007. As a result of weaknesses in DOD's management of its supply chain, this area has been on GAO's list of high-risk federal government programs since 1990. DOD released its Logistics Roadmap in July 2008 to guide, measure, and track logistics improvements. DOD has identified two technologies included in this roadmap, item unique identification (IUID) and passive radio frequency identification (RFID), as having promise to address weaknesses in asset visibility. GAO reviewed (1) the extent to which the roadmap serves as a comprehensive, integrated strategy to improve logistics; and (2) the progress DOD has made implementing IUID and passive RFID. GAO reviewed the roadmap based on DOD statements about its intended purposes and visited sites where IUID and passive RFID were implemented. The Logistics Roadmap falls short of meeting DOD's goal to provide a comprehensive and integrated strategy to address logistics problems department-wide. The roadmap documents numerous initiatives and programs that are under way and aligns these with goals and objectives. However, the roadmap lacks key information in three areas necessary for it to be a more useful tool that DOD's senior leaders can use to guide and track logistics improvement efforts toward achieving stated goals and objectives. First, the roadmap does not identify the scope of logistics problems or gaps in logistics capabilities, information that could allow the roadmap to serve as a basis for establishing priorities to improve logistics and address any gaps. Second, the roadmap lacks outcome-based performance measures that would enable DOD to assess and track progress toward meeting stated goals and objectives. 
Third, DOD has not clearly stated how it intends to integrate the roadmap into DOD's logistics decision-making processes or who within the department is responsible for this integration. DOD officials stated they plan to remedy some of these weaknesses in their follow-on efforts. For instance, DOD has begun to conduct gap assessments for individual objectives in the roadmap and hopes to complete these by July 2009. They stated that they recognized the need for these assessments; however, they had committed to Members of Congress to release the roadmap by the summer of 2008 and were unable to conduct the assessments prior to the release of the roadmap. A comprehensive, integrated strategy that includes these three elements is critical, in part, because of the diffuse organization of DOD logistics, which is spread across multiple DOD components with separate funding and management of logistics resources and systems. Until the roadmap provides a basis for determining priorities and identifying gaps, incorporates performance measures, and is integrated into decision-making processes, it is likely to be of limited use to senior DOD decision makers as they seek to improve supply chain management. DOD has taken initial steps to implement two technologies included in the Logistics Roadmap, IUID and passive RFID, that enable electronic identification and tracking of equipment and supplies, but it has experienced difficulty fully demonstrating return on investment for these technologies to the military components that have primary responsibility for determining how and where these technologies are implemented. Although DOD has undertaken initial implementation efforts of these technologies at several locations, at present, it does not collect data on implementation costs or performance-based outcome measures that would enable the department to quantify the return on investment associated with these two technologies. 
Without this information, it may be difficult for DOD to gain the support needed from the military components to make significant commitments in funding and staff resources necessary to overcome challenges to widespread implementation of these technologies. As a result, full implementation of these technologies is impeded and the realization of potential benefits to asset visibility DOD expects may be delayed.
The mailing industry includes businesses, organizations, and other parties that send and rely on mail to maintain contact with their customers. The industry also encompasses mail preparers, including printers and businesses that send or receive mail on behalf of a third party. Collectively, we refer to these two groups as commercial mailers, who in 2008 accounted for 86 percent of all mail processed by USPS. Although commercial mailers number in the millions, approximately 200 of the largest mailers account for around 30 percent of the total mail volume. Since the 1970s, the use of barcodes and automation has increased efficiency in USPS mail processing operations. Commercial mailers have been encouraged to use barcodes through pricing incentives, allowing USPS to cut costs and increase efficiency in its mail processing operations. In particular, automated mail processing machines can sort mail with barcodes containing delivery information faster than mail sorted manually. Over the past three decades, the number and type of barcodes have increased along with technology changes, and in 2003 USPS estimated that there were more than 30 different barcodes in use. These barcodes include the following: POSTNET, which contains delivery information that enables automated sorting of the mail to the carrier’s route level. Mailers receive a postage discount when they print POSTNET barcodes on their mail. PLANET, which contains identification numbers to enable tracking of mail in USPS’s mail processing system but contains less information than the Intelligent Mail barcode. Certified mail service, which provides mailers with notification when mail arrives at its destination. The use of numerous barcodes has led to some drawbacks, such as a cluttered mail piece (see fig. 1). Additionally, whenever USPS adds or upgrades its mail processing equipment, it has to ensure that the equipment remains compatible with each of the relevant barcodes. 
Through Intelligent Mail, USPS plans to use standardized barcodes to track mail and thus provide USPS and mailers with more information. This information is important to USPS's efforts to improve efficiency and reduce costs. In addition, it could provide mailers with the status of mail as it moves through USPS's mail processing system, improving the predictability of delivery, as well as providing information on whether some mail—such as bill payments and movie returns—has been sent. Although both USPS and mailers could benefit from the program, implementing it will require both parties to make considerable changes to their systems and processes. USPS is completing its development of the Intelligent Mail program and plans to implement it in phases, with the first phase starting in May 2009 and an additional phase planned for November 2009. The overall concept of Intelligent Mail is to provide better information and improve efficiency by using standardized barcodes to continuously track the mail as it passes through USPS's mail processing system. The program has been led by USPS with input and involvement from the mailing industry. The Intelligent Mail concept was articulated in a 2001 report by the Mailing Industry Task Force, which was led by the chief executives of 11 mailing industry companies and USPS's Deputy Postmaster General. The task force noted that Intelligent Mail would help ensure that mail processing is responsive to customer needs. Since then, USPS has been developing and planning the Intelligent Mail program. For a timeline of the significant events in the development of the program, see appendix II. Beginning in May 2009, mailers who choose to participate in Intelligent Mail have two options that offer different incentives based on the level of effort required to comply:

- Basic Service. Requires mailers to apply an Intelligent Mail barcode and populate the relevant fields, but not to include unique numbers in the barcode. Mailers who implement Basic Service will receive a postage discount for using a barcode (as they would using a POSTNET barcode) but will not receive the postage discount or other benefits associated with Full Service.
- Full Service. Requires mailers to populate and apply a barcode that, unlike Basic Service, must contain a number unique to the particular mail piece. Full Service mailers must also uniquely barcode any trays or containers they use to package mail and submit electronic documentation of their mailings. USPS provides pricing discounts and other incentives for mailers implementing Full Service because it requires mailers to make more changes and yields the greatest benefit for USPS.

The Intelligent Mail program is centered on the Intelligent Mail barcode, a standardized, information-rich barcode that expands the ability to track individual mail pieces (see fig. 2). This barcode can contain the same information as the current POSTNET and PLANET barcodes combined, in addition to other data, eliminating the need for multiple barcodes on the same mail piece. The new barcode contains a mailer identification number, assigned by USPS, which enables USPS to identify the sender of the mail piece, and a unique number, generated by the mailer, which enables USPS to track the mail piece as it travels through its processing system. Since 2006, USPS has permitted mailers to use the Intelligent Mail barcode, enabling them to test their ability to print it to USPS standards. In fiscal year 2008, USPS estimated that over 580 mailers had begun using the barcode. USPS has identified several ways it expects the implementation of Intelligent Mail to benefit USPS and mailers: Improve efficiency, reduce costs, and improve timeliness of delivery.
USPS says it will be able to use information from Intelligent Mail to improve its processing system. For example, USPS expects it will be able to better identify and diagnose problems, such as systemic bottlenecks that result in costly manual sorting and delivery delays. Also, USPS plans to use Intelligent Mail to create efficiencies by streamlining and automating the process it uses to accept mail from commercial mailers, which is currently time- and labor-intensive. Reduce the amount of mail that must be forwarded, which can involve extra handling by USPS and delays in delivery. As an incentive to adopt Full Service Intelligent Mail, USPS will provide free notification when intended recipients have moved and filed a change-of-address with USPS, a service mailers previously paid for. This feature, known as Address Correction Service, could help USPS meet its goal of reducing the amount of mail that cannot be delivered as addressed. In exchange for this free service, USPS requires mailers to update their mailing lists in order to avoid paying additional fees. Provide better service to mailers. Through Intelligent Mail, USPS plans to provide better service to mailers through real-time feedback. For example, as another incentive to adopt Full Service Intelligent Mail, USPS will provide mailers with information on when their mail entered USPS's system, known as Start the Clock. This information, which was not previously offered by USPS, is helpful because it enables USPS to respond to mailer inquiries on missing or delayed mail. Also, since each Intelligent Mail piece will be uniquely identified, USPS will have the ability to isolate and give special handling to a specific mail piece, which creates an opportunity for USPS to offer mailers new products and services. Service performance measurement capability.
Intelligent Mail will allow USPS to gather more comprehensive and detailed service performance information and measure it against established performance standards, which will help keep USPS accountable to its stakeholders. The 2006 Postal Accountability and Enhancement Act required USPS to develop a system to measure service performance and report to PRC. The service performance measurement system proposed by USPS to meet this requirement relies on data from Intelligent Mail. Financial incentives. USPS is also offering a financial incentive to mailers. Specifically, those who adopt Full Service Intelligent Mail will receive a postage discount, in addition to other worksharing discounts. Mailers who use Full Service will receive a discount of three-tenths of 1 cent for each First-Class Mail piece they send, while Standard Mail and Periodicals pieces will receive a discount of one-tenth of 1 cent each. According to USPS, Intelligent Mail is the most complex project it has undertaken. USPS also indicated that preparing for and implementing Intelligent Mail will involve considerable changes for both mailers and USPS, including significant changes to the information and software systems both use. The overall commercial mail process using Intelligent Mail, including how it affects mailer and USPS operations, is shown in figure 3. Intelligent Mail requires significant changes to the way mailers prepare and submit their mail. Mailers using Intelligent Mail will need to redesign their mail pieces by populating and applying the new barcode. Full Service mailers will also need to ensure that their barcodes contain a unique tracking number. This means that each mail piece a mailer sends within a 45-day period must have a number embedded in its barcode that is different from that of every other piece of mail the mailer sends within that time frame.
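The 45-day uniqueness requirement can be understood from the published layout of the barcode's 20-digit tracking portion: a 2-digit barcode identifier, a 3-digit service type identifier, the USPS-assigned mailer ID (6 or 9 digits), and a mailer-generated serial number filling the remaining 9 or 6 digits. The sketch below shows one way a mailer might compose such codes; the specific field values and the helper name are illustrative, not a USPS API:

```python
from itertools import count

def make_tracking_code(barcode_id: str, service_type: str,
                       mailer_id: str, serial: int) -> str:
    """Compose the 20-digit tracking portion of an Intelligent Mail
    barcode: a 2-digit barcode ID, a 3-digit service type ID, the
    USPS-assigned mailer ID (6 or 9 digits), and a mailer-generated
    serial number filling the remaining 9 or 6 digits."""
    serial_width = 20 - 2 - 3 - len(mailer_id)
    if not 0 <= serial < 10 ** serial_width:
        raise ValueError("serial outside the space left by this mailer ID")
    return f"{barcode_id}{service_type}{mailer_id}{serial:0{serial_width}d}"

# A mailer can keep a monotonically increasing counter so that no two
# pieces produced in the same 45-day window share a serial number.
serials = count(1)
code = make_tracking_code("00", "700", "123456", next(serials))
print(code, len(code))  # a 6-digit mailer ID leaves 9 serial digits
```

Note that a 6-digit mailer ID leaves 9 serial digits, i.e., up to a billion distinct pieces per 45-day window, which is why, as we understand the scheme, the shorter mailer IDs go to higher-volume mailers.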
Full Service mailers must also apply unique barcodes to mail trays and containers, and document which mail pieces are contained in which tray and container. These changes may result in significant process changes for mailers and may require new software or staff training. Mailers participating in Full Service Intelligent Mail must also make changes to their information systems in order to submit documentation electronically to USPS. The electronic documentation must contain information on all of the Intelligent Mail barcodes used on the mail pieces, trays, and containers; how the mail pieces, trays, and containers fit together; and the identity of the mailer. While some mailers currently submit electronic documentation, many currently submit this information in hard copy format. Mailers must also provide advance notification of their mail drop-off to a postal facility by sending an electronic appointment and will need to ensure that their software systems are able to communicate effectively with USPS’s systems. This may involve purchasing or upgrading software or hardware. Mailers will also need to train their staff on how to use the new software and how to communicate with USPS electronically. Intelligent Mail involves changes to USPS’s operations. For instance, USPS’s current process for mail acceptance and verification is costly, time-consuming, and labor-intensive. Mail acceptance and verification involves mailers bringing mail to a postal facility, USPS accepting it and verifying that it has been prepared according to postal standards, and USPS verifying that the postage has been accurately calculated. Currently, this process involves a postal official physically sampling a portion of the mail to make sure it meets standards and is eligible for the prices claimed in the mailer’s documentation. 
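The nesting information that the electronic documentation must convey, which pieces sit in which tray and which trays sit in which container, can be pictured as a simple hierarchical data structure. The sketch below only illustrates the relationships; the class and field names are ours, and the actual USPS electronic documentation formats define their own record layouts:

```python
from dataclasses import dataclass, field

@dataclass
class Tray:
    barcode: str                            # unique tray barcode
    piece_barcodes: list = field(default_factory=list)

@dataclass
class Container:
    barcode: str                            # unique container barcode
    trays: list = field(default_factory=list)

@dataclass
class MailingDocumentation:
    mailer_id: str                          # identifies the mailer to USPS
    containers: list = field(default_factory=list)

    def piece_count(self) -> int:
        # Total pieces claimed, derivable from the nesting information.
        return sum(len(t.piece_barcodes)
                   for c in self.containers for t in c.trays)

doc = MailingDocumentation(
    mailer_id="123456",
    containers=[Container("C001", [Tray("T001", ["P001", "P002", "P003"])])],
)
print(doc.piece_count())  # 3
```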
With Intelligent Mail, USPS plans to be able to scan mail pieces, trays, and containers, and reconcile the information to the documentation that the mailer has sent electronically. USPS envisions that, by using Intelligent Mail, it can eventually automate the verification process and reduce its reliance on manual tests of the mail, making it easier for mailers to hand mail off, thus saving both USPS and mailers time and the related costs. Furthermore, USPS has completed changes to software for its mail processing equipment so it is capable of scanning the new barcodes. USPS officials said they completed these upgrades as they were performing other, routine software upgrades for this equipment. In addition, USPS is changing its information technology systems. As mailers start using Intelligent Mail, USPS expects to process scans from millions of mail pieces containing new barcodes on a daily basis. USPS is developing the information technology infrastructure to scan and track individual mail pieces as they travel through its processing system. First, USPS is developing a new Intelligent Mail information technology system to process, manage, and store data from scanned barcodes. USPS has acquired hardware and is using contractors to develop software for this system. Second, USPS is integrating this new system with existing USPS systems to share data, which requires changes to almost 30 different systems and 59 different connections between these systems. An example of one of the existing USPS systems that must be integrated with the new Intelligent Mail system is PostalOne!, a main communications interface between USPS and mailers. Upgrades to PostalOne! include a better user interface design for mailers, electronic documentation acceptance capabilities, and more options for mailers to access Intelligent Mail tracking information. USPS plans to use barcode scan information to measure its service performance and report the results to PRC. 
However, USPS will need to establish report content and format standards that comply with PRC reporting requirements. In order to report service performance to PRC, USPS will need to develop a strategy to aggregate information from mail pieces that will be representative of all mail. USPS will also need to develop standards for the information it will provide to mailers regarding their own mail. USPS has said it will be ready to implement Intelligent Mail as planned in May 2009. To meet this date, USPS has been engaged in an aggressive program development schedule since June 2008 that involved defining program requirements and designing, building, and testing systems and interfaces. According to USPS, it began defining the requirements of the project in July 2008, began designing the systems in October 2008, and started building the systems in November 2008. USPS tested the systems both internally and with mailers from February through April 2009. A limited number of mailers have been involved in testing and integrating their systems with USPS's systems. Despite this aggressive schedule, a senior postal official told us that USPS discovered it could not implement all of the desired functions of the system by May 2009, as originally planned. Further, he said that additional functions may be added in future phases that will be determined at a later date. The general timeline for Intelligent Mail implementation is as follows:

- May 11, 2009. USPS plans to internally implement the first phase of Intelligent Mail and expects to have the systems in place to provide Full Service functions, including Address Correction Service and electronic documentation.
- May 18, 2009, and beyond. Mailers will begin testing their systems' ability to access and electronically transmit documentation to USPS's system and will be ready to fully implement upon completing the tests, which should take about 6 to 8 weeks according to a senior USPS official.
- November 29, 2009. USPS plans to implement the second phase of Intelligent Mail and expects to begin offering price incentives for Full Service. USPS also plans to add other program functions, although it had not finalized these plans when we met with USPS officials in early April.
- May 2011. The use of POSTNET and PLANET barcodes will be phased out, and mailers seeking reduced automation postage rates will be required to use Intelligent Mail barcodes.

Mailers will have approximately 6 months, from May through November 2009, to test their systems and begin implementation before the price incentives go into effect. USPS has estimated that by November 2009, enough mailers will be participating in Full Service Intelligent Mail that it will account for 54 percent of First-Class automation letters and 63 percent of Standard Mail Commercial and Nonprofit automation letters. In fiscal year 2008, these types of automation letters accounted for approximately 100 billion mail pieces. As mailers complete testing and begin generating mail with Intelligent Mail barcodes, USPS has said it is taking steps to ensure that the mail acceptance process goes smoothly for mailers presenting mail at USPS facilities. For example, USPS plans to conduct customized training for both mailers and USPS employees at facilities where mailers will present mail barcoded with Intelligent Mail barcodes. This training will be conducted in the weeks before mailers plan to implement Intelligent Mail. USPS officials also noted that they will offer training at a national-level postal forum in May 2009 and will offer materials that mailers can use to train their staffs. Implementation of the Intelligent Mail program faces two key risks. First, USPS's approach to developing and managing the program has not followed certain key program management practices to reduce risks, and mailers have raised questions about whether USPS and mailers will be able to meet schedule and program objectives.
Second, USPS has said that Intelligent Mail success is dependent on mailer participation in the Full Service option, but it is uncertain whether pricing and other incentives will encourage mailers to participate to the extent anticipated. If these risks are not addressed, they could limit USPS’s ability to fully achieve the program’s benefits. USPS’s management approach to developing the Intelligent Mail program has lacked critical program management elements that are considered best practices. The lack of these elements may increase the program’s risk and raise questions about whether USPS will be able to meet deadlines or program objectives. Specific elements of an effective management approach that USPS lacks include a comprehensive strategic plan; information about program costs, including its anticipated savings or cost reductions; and a risk mitigation plan. In developing a large and complex effort such as the Intelligent Mail program, these key elements are particularly important, and USPS could benefit from best practices used by leading organizations. Best practices are tried and proven methods, processes, techniques, and activities that organizations define and use to minimize risks and maximize chances for success. Experience has shown that organizations that adopt and effectively implement best practices can reduce the risks associated with implementing projects, including information technology projects, to acceptable levels. For example, we have previously reported that using best practices related to information technology acquisitions can result in better outcomes—including cost savings, improved service and product quality, and ultimately, a better return on investment. Such practices have been identified by leading organizations such as the Software Engineering Institute, the Chief Information Officers Council, and in our prior work analyzing best practices in industry and government. 
Effective program management involves establishing and maintaining plans defining project scope and activities, including a budget and schedules, key deliverables, and milestones for key deliverables. An effective risk management process identifies potential problems before they occur, so that risk-handling activities may be planned and invoked as needed across the life of the product and project to mitigate adverse impacts on achieving objectives. Key activities include identifying and analyzing risks, assigning resources, developing risk mitigation plans and milestones for key mitigation deliverables, briefing senior-level managers on high-priority risks, and tracking risks to closure. In a separate review begun in March 2009, we are assessing the cost, schedule, and performance status of the Intelligent Mail program and whether the Postal Service has the capabilities to successfully acquire and manage this program. To effectively manage major information technology programs, organizations should use sound acquisition and management processes to minimize risks and thereby maximize chances for success. Such processes include project and acquisition planning, requirements development and management, risk management, and project monitoring and control. Our work has shown that such processes are significant factors in successful systems acquisitions and development programs, and they improve the likelihood of meeting cost and schedule estimates as well as performance expectations. USPS lacks an up-to-date comprehensive Intelligent Mail strategic plan to facilitate program management and accountability. A comprehensive plan or strategy can provide a program's overall vision and goals, including detailed milestones and measures of success that provide meaningful guidance for planning and measuring progress. Such plans can also establish deadlines for achieving objectives and assign responsibility for program implementation.
USPS published an Intelligent Mail Corporate Plan in 2003, which described its overall vision for Intelligent Mail and three specific strategies for achieving this vision. USPS said that it would periodically update this plan; however, USPS has not provided periodic updates, despite making major changes to the Intelligent Mail program. For example, USPS has announced two implementation phases for Intelligent Mail—May 2009 and November 2009—but USPS is still defining key requirements for the November phase and possible future phases. Also, it is not clear when certain functions and the associated systems, such as automated mail verification, will be implemented. In other areas, USPS has developed comprehensive strategic plans that were periodically updated and that provided an overview of the major phases and activities that would be completed in each phase. For years, dating back to the 1990s, USPS developed and periodically updated its Corporate Automation Plan that identified its vision, goals, expected savings, and actions planned for each phase to achieve a completely barcoded and fully automated mail processing system. Similarly, USPS developed and updated its Corporate Flats Strategy that detailed the decision points and activities planned for the three major phases related to improving flat mail processing. USPS also lacks program cost information associated with Intelligent Mail, including a baseline and mechanism to track and measure actual savings. Having reliable cost estimates is critical to support management decisions about budget development, resource requirements, and allocation, as well as to measure performance. According to USPS, one of the key benefits of the Intelligent Mail program is to reduce operating costs, which are primarily workhour costs, by increasing the use of mail information to improve the efficiency of its automated mail processing operations. 
A senior USPS official told us that attributing efficiencies and cost savings directly to the Intelligent Mail program would be difficult because USPS is initiating numerous programs to reduce costs and would be unable to isolate and attribute cost savings only to the Intelligent Mail program. We recognize the difficulty of directly attributing costs, but USPS could measure how Intelligent Mail implementation affects two processes—mail acceptance and verification. As we mentioned earlier, USPS envisions that, by using Intelligent Mail, it can eventually automate its acceptance and verification processes and reduce its reliance on manual tests of the mail, making it easier for mailers to hand mail off, thus saving both USPS and mailers time and the related costs. Since these processes are directly affected by the implementation of Intelligent Mail, their associated costs and savings could be identified and attributed to the Intelligent Mail program. By tracking these costs, USPS could measure how the Intelligent Mail program actually reduces operating costs in these areas. Finally, USPS lacks a program-level risk mitigation plan—a plan that identifies and addresses potential weaknesses before they adversely affect the Intelligent Mail program. According to USPS officials, the Intelligent Mail program is the most complex effort initiated in USPS history and its successful implementation is important to the future of USPS. However, the program is vulnerable to several areas of risk that USPS has not addressed. For example, USPS has said that Intelligent Mail success is dependent on mailer participation in the Full Service option, but it has not stated how it would address the impact of lower-than-anticipated mailer participation.
USPS has developed a process to identify and address technical risks related to, for example, integrating the Intelligent Mail system with existing USPS systems, but it has not developed a more strategic-level risk mitigation plan that discusses how it will address key risk areas that could affect the program as a whole, such as lower-than-anticipated mailer participation, resource limitations, or schedule delays. During the program's development over the past 2 years, many mailers expressed their concerns regarding these risks in comments to the Federal Register, to PRC, and in industry newsletters. In January 2008, USPS published the Intelligent Mail Advance Notice of Proposed Rulemaking in the Federal Register, which proposed implementing the Intelligent Mail program in January 2009. In April 2008, USPS published a revised Intelligent Mail Federal Register notice, which pushed back the implementation date to May 2009 and proposed incentives for Full Service participants. Based on our review of the more than 460 comments submitted to USPS in response to these notices, the concerns cited by mailers included the following:

- USPS communication efforts were insufficient, and mailers had difficulty obtaining program information, including—until recently—the expected Full Service discount, which prevented mailers from determining their return on investment;
- mailer participation in Intelligent Mail will likely be affected by mailers who may not be able to use Intelligent Mail barcodes due to the technological challenge of printing the barcodes and storing all of the electronic information;
- USPS had not provided finalized information technology requirements, which impeded some companies' efforts to budget for or develop the necessary software; and
- USPS and mailers may not be ready for implementation given the short time period in which USPS must simultaneously design, develop, test, and implement the Intelligent Mail program.
In August 2008, USPS announced its Intelligent Mail Final Rule, which finalized the May 2009 implementation date and allowed mailers to use POSTNET barcodes until May 2011. According to a major industry newsletter published in August 2008, mailers remained concerned with USPS's approach. They said they could not realize a return on investment or justify the expense without a substantial price differential or, for some, the benefit of free Address Correction Service with less restrictive time limits. USPS has taken some steps to address readiness and mailer concerns regarding its management and its preparedness to implement the program. It delayed implementation of the program from January 2009 to May 2009. USPS has said it will also attempt to address mailer concerns about management of the program by reaching out to them. In this regard, USPS officials said they plan to mitigate implementation risks by working with each mailer to customize its transition from POSTNET barcodes to Intelligent Mail barcodes. In addition, USPS has undertaken a variety of communication efforts to provide mailers with updated program and technical information. For example, it established four different workgroups as part of its Mailers' Technical Advisory Committee. Each of these workgroups, composed of mailing industry representatives and USPS officials, seeks to resolve a specific issue and offer recommendations to USPS. USPS also has developed educational and training programs, such as the Intelligent Mail University, a 1-day comprehensive course. In addition, USPS has provided information through its traditional channels of communication, such as its Postal Customer Council organization, conferences, and its Web site.
Taken together, some of these USPS efforts could be considered best practices associated with effective program management; however, the lack of other critical program management elements may expose the project to unnecessary risk that it will not achieve its schedule and performance objectives. In addition to these risks, other factors also add to the program's risk:

- The Intelligent Mail program is highly complex and involves multiple system integrations implemented concurrently in a short time frame, with limited testing before implementation, which could make the program susceptible to errors or unanticipated problems.
- The program continues to evolve as USPS defines requirements for Phase 2.
- Program implementation may require considerable time, with USPS working directly with individual mailers to integrate their respective systems. For example, it might require up to 8 weeks for a mailer to gain approval from USPS to submit electronic documentation, during which the mailer and USPS work together to resolve any technical issues, according to a senior USPS official.

Last fall we interviewed representatives from nine companies involved in the mailing industry that participated in the development of the Intelligent Mail program. We also talked to representatives from six commercial mailer trade associations who collectively represent most of the mail sent. Some of these mailers expressed frustration with USPS's approach because, they said, it appeared to lack planning and consistency, making it difficult for them to make the changes they needed. Some mailers cited the lack of an overall plan with dates, which caused difficulty with their internal planning and resource assignment. Without such a plan, it appeared to mailers that USPS was making decisions and implementing the program simultaneously.
For example, USPS announced it would provide free Address Correction Service to Full Service participants, and then subsequently announced a time limit for mailers to use information and implement address changes. Mailers said that the time limitation made it practically impossible to utilize the service. Other mailers said that, even though Intelligent Mail required substantial software changes and development, USPS continued to make changes to program technical requirements and specifications, making it difficult for them to respond in time for the planned implementation date. Another risk to the success of the Intelligent Mail program is that mailers may not choose to participate in Intelligent Mail. Based on our interviews and review of other industry and USPS documentation, the Intelligent Mail Full Service option’s pricing and other incentives may not be sufficient to convince some mailers to participate. USPS officials have told us that the success of Intelligent Mail is dependent upon mailers participating in the Full Service option. Thus, if mailers decline to participate, the program has a reduced chance of succeeding. Some mailers have said that the program’s pricing and benefits are not enough to provide sufficient incentives or even to recover their investments. For example, a mailer association said its members had hoped for a larger discount than USPS announced, considering the large investments some companies had made in preparing for the Intelligent Mail program. A mailer told us that in order for the company to recover its costs, the discount would have to be one-half of 1 cent per mail piece, or much higher than the announced price incentives. Costs for mailers preparing for Full Service vary largely depending on the size of the mailer. For example, some large mailers said they invested millions of dollars to update and purchase hardware and software, while some smaller mailers expected to invest tens of thousands of dollars. 
Some mailers also expressed their concern about USPS’s delay in offering the price incentives. USPS delayed the effective date of the price incentives from May 2009 to November 2009. Although USPS said it did so because it does not want to punish mailers who do not adopt Full Service immediately, some mailers who were planning on implementing Intelligent Mail in May viewed this 6-month delay as problematic because of the increased time to recover their implementation costs. Finally, some mailers expressed concerns about the duration of the discount after USPS announced it intends to offer the price incentives on a temporary basis. According to USPS, these price incentives are not expected to become a permanent part of its pricing schedule, meaning the incentives would likely be phased out. USPS and mailers view the financial incentives differently. According to USPS, the price incentives are one of several benefits to encourage mailers to participate in Full Service, while many mailers view the financial incentives as the main benefit of Intelligent Mail. In addition to concerns about financial incentives, mailers find Intelligent Mail complex. USPS requires mailers to greatly change the way they prepare and submit their mail in order to participate in Full Service and mailers say these changes may discourage them from adopting it. A mailing industry consultant wrote in a mailing association’s newsletter in March 2009 that mailers should just sit back and wait until the “dust and dollars” settle before participating in Full Service because the benefits provided by Intelligent Mail are not worth the required effort or investment. Other Intelligent Mail benefits offered by USPS may not appeal to some mailers based on their various business needs and, thus, may not motivate them to participate. For example, magazine mailers told us they may benefit from receiving free Address Correction Service—a service which provides information to mailers when recipients move. 
Periodical mailers, including magazines, are currently required by USPS to use Address Correction Service and must pay $0.25 each time they are notified of an address change. By adopting Full Service, these mailers would receive, at no additional cost, a service they are currently paying for. However, a newspaper association representative said that none of the Intelligent Mail incentives will benefit small newspaper publishers who enter newspapers at local postal delivery facilities, thus bypassing the USPS mail processing operations where the Intelligent Mail program information is generated. Additionally, some mailers viewed the time frame for incorporating updated address information into subsequent mailings as too short. For example, under Address Correction Service, mailers have 30 days to update new address information without risk of financial penalty, but one mailer told us more time is needed because its mailings are sometimes prepared weeks in advance. If mailers do not update the information within the required time frame, they might incur additional mailing costs from penalties assessed by USPS.

Mailers are facing other pressures that could affect their decisions to participate in Full Service, including a recession that has affected their businesses and additional postal requirements. Some mailers are reducing the number of mail pieces they send because of the worsening economy. For example, advertising mail in fiscal year 2008 was adversely affected by the economy—particularly credit card, mortgage, and home equity solicitations—as well as by the continued shift from mail to electronic communication. Mailers also face additional USPS requirements that are unrelated to Intelligent Mail but coincide with its implementation. These unrelated requirements may affect mailer participation in Full Service.
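The Address Correction Service trade-off described above also lends itself to simple arithmetic. A short sketch, using the $0.25 per-notice fee from the report; the yearly notice volume is a hypothetical figure chosen for illustration.

```python
# Annual Address Correction Service cost a periodical mailer would
# avoid under Full Service. The $0.25 fee is the current per-notice
# charge; the yearly notice count is a hypothetical figure.

ACS_FEE = 0.25  # dollars per address-change notification

def annual_acs_cost(notices_per_year, fee=ACS_FEE):
    """Dollars spent on address-correction notices in a year."""
    return notices_per_year * fee

# A magazine receiving 40,000 change-of-address notices a year:
avoided = annual_acs_cost(40_000)  # $10,000 per year under the current fee
```

For a periodical with substantial subscriber churn, the waived fee is a real benefit; for a small newspaper entered at a local delivery facility, as the association representative noted, it is worth nothing.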
Within the last year, USPS required mailers to simultaneously implement several programs, including changes to the standards for preparing some mail and to how frequently mailers must update their address information. A mailer, referring to Intelligent Mail and other new requirements, wrote in an association newsletter that USPS will permanently lose customers if it continues to make mailing more difficult and complex by creating new requirements and increasing prices. Thus, because of the costs incurred to implement Intelligent Mail, the reduction in mail volume, and additional postal requirements, some mailers have questioned the value the Intelligent Mail program will add to their businesses.

USPS says that the incentives it is offering to encourage mailers to participate in the Full Service option are appropriate to get Full Service adoption started and that they recognize the investments mailers must make to implement Intelligent Mail. According to USPS, the value of Intelligent Mail lies in the enhanced value of the information it provides, not only in any discount that may accompany its introduction. Furthermore, USPS said it could increase the incentive later if the adoption rate is too low. USPS also points out that, in addition to the financial discount, Full Service mailers will have access to Address Correction Service and Start the Clock information. A senior USPS official told us that the value of these other services should provide enough benefit for most mailers to justify the expense of implementing Intelligent Mail. However, according to a mailer association, many mailers placed a higher value on the discount, calling it the main incentive to participate in Full Service. Without sufficient numbers of customers participating in Intelligent Mail, USPS may not realize several benefits of the program.
Specifically, USPS may not realize the intended long-term benefit of discontinuing its existing manual acceptance and verification process for mailers who use the automated Intelligent Mail process. Similarly, USPS’s ability to improve customer service by providing tracking information on individual mail pieces will be limited. With this improved tracking capability, USPS could identify problem areas in its processing and delivery of a mail piece, while mailers could determine the reasons for late delivery, such as an incorrect address or a plant delay. These services, however, are available only through the Full Service option because mail pieces in Basic Service are not given unique tracking numbers. Further, it is not clear what Intelligent Mail information will be provided to Full Service customers and what price USPS may charge for this information. Finally, USPS’s ability to meet its statutory requirement to measure and report on how well it is meeting its overall delivery performance standards could be hindered by low mailer adoption rates. The 2006 Postal Accountability and Enhancement Act required USPS to report on the speed and reliability of delivery for each market-dominant product. According to PRC, the mail data USPS will use to measure its performance must be representative in order to produce meaningful results. However, USPS’s measurement system measures only the performance of Full Service mail (and not Basic Service mail), which may result in a nonrepresentative sample. PRC views mailer adoption of Full Service Intelligent Mail as critical to producing accurate performance measures from data representative of a cross section of mail and notes that mailer uncertainty about Intelligent Mail requirements, implementation dates, and discount rates may delay adoption.
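The representativeness concern can be illustrated with a small numeric sketch. All volumes and on-time rates below are invented for illustration: the point is only that if Full Service mail performs differently from the rest of the mailstream, a measurement system that samples only Full Service pieces misstates overall performance.

```python
# Measuring only Full Service mail vs. the whole mailstream.
# Volumes and on-time rates are invented for illustration.

def measured_vs_actual(fs_volume, fs_ontime, other_volume, other_ontime):
    measured = fs_ontime  # what a Full-Service-only measurement reports
    actual = (fs_volume * fs_ontime + other_volume * other_ontime) / (
        fs_volume + other_volume)  # volume-weighted true rate
    return measured, actual

# Suppose Full Service is 20% of volume at 97% on time,
# while the remaining mail runs 93% on time:
measured, actual = measured_vs_actual(20, 0.97, 80, 0.93)
# measured reports 0.97 while the true mailstream rate is about 0.938
```

The smaller the share of mail in Full Service, the less the measured figure says about the mailstream as a whole, which is why PRC ties accurate performance reporting to broad mailer adoption.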
PRC has stated that it will monitor Intelligent Mail implementation to ensure that accurate and representative data are obtained by requiring USPS to report quarterly on its Intelligent Mail implementation progress.

USPS is implementing a program intended to give it much greater insight into the mail, but preparations for Intelligent Mail require considerable work by both USPS and many of its commercial customers. The management approach USPS is taking has several key risks that have raised concerns about whether USPS will be able to implement the program on schedule and with all program functions in place. USPS does not have a comprehensive strategy that includes information about all the phases planned, the numerous functions and systems upgrades included, when they will be implemented, the program’s goals, the baseline costs, and expected cost savings. Consequently, it will be difficult for USPS to measure Intelligent Mail’s performance or to account for its results. Overall, at the program level, key risks include the uncertainty about whether mailers will find the incentives offered by USPS appealing enough to participate in the program, resource limitations, and schedule delays. Although USPS is aware of these risks, it has no plan for dealing with them should these potential problems materialize. As a result, the implementation effort is at risk of taking longer and costing more than planned, and achievement of the program’s intended benefits may be delayed.
To help ensure that USPS addresses these risks to the successful implementation of Intelligent Mail, we recommend that the Postmaster General take the following three actions: (1) develop a comprehensive Intelligent Mail strategic plan that defines all planned phases and their associated functions and systems and includes program goals and measures of success; (2) develop cost and savings information for the activities that can be attributed to the Intelligent Mail program, including the baseline and metrics to be used to track cost savings achieved; and (3) develop a plan that addresses how USPS will mitigate program-level risks, including the implications of lower-than-anticipated customer adoption of the Full Service Intelligent Mail option, resource limitations, and schedule delays.

The U.S. Postal Service provided written comments on a draft of this report in a letter from the Senior Vice President, Intelligent Mail and Address Quality, dated April 27, 2009. These comments are reproduced in appendix III, and our evaluation of them is summarized below. Based on the comments provided, we made minor modifications to some portions of this report. USPS agreed with our findings that it lacked an up-to-date comprehensive Intelligent Mail strategic plan and a program-level risk mitigation plan. It agreed to implement our first recommendation—to develop a comprehensive Intelligent Mail strategy, including all planned phases and the associated functions and systems, program goals, and measures of success—and our third recommendation—to develop a plan that addresses how it will mitigate risks, including the implications of lower-than-anticipated customer adoption of the Full Service Intelligent Mail option.
USPS did not agree with our finding that it lacks program cost information, including an estimate of overall Intelligent Mail program costs and a capability to measure savings, and did not agree to fully implement our second recommendation, that it develop cost information. USPS said it will implement our first recommendation by completing an update to its 2003 Intelligent Mail Corporate Plan within weeks. It said the update will detail (1) efforts completed, (2) implementation plans for its two planned phases, (3) items to be included for a possible third phase, (4) a vision of future upgrades, and (5) enabling features and capabilities on Intelligent Mail. This will facilitate improved program management and accountability. However, USPS did not commit to defining program goals and measures of success, which we believe are critical components of a comprehensive strategic or Corporate Plan. USPS did not agree to fully implement our second recommendation because it said that it has detailed program cost information and that costs are being closely managed and monitored. In addition, USPS generally disagreed with our finding that it should develop metrics to measure cost savings associated with its Intelligent Mail effort. USPS said that although the Intelligent Mail program would provide benefits that should reduce costs, as well as improve efficiency and service, it did not anticipate a specific cost or other benefit from its Intelligent Mail investment, in part because these costs or benefits could not be measured. USPS explained that there was no sound financial method to specifically attribute cost reductions to Intelligent Mail when it is also implementing other efforts to reduce costs. We recognize the difficulty of directly attributing costs to the Intelligent Mail program and agree with USPS that the activities associated with the acceptance and verification of commercial mail are most directly related to the Intelligent Mail program. 
USPS said that it already has a baseline and mechanism in place to track the cost and work hours associated with these activities. Although USPS did not provide us with this baseline, we agree that measuring and tracking the costs and savings associated with the acceptance and verification activities would be the most directly attributable performance indicator. We have modified our recommendation accordingly, as we continue to believe that cost and savings information is critical to provide USPS management with a means for measuring the outcome of its Intelligent Mail efforts. USPS could address our recommendation by including a discussion of its baseline and cost tracking mechanism in its Corporate Plan. USPS agreed to implement our third recommendation by exploring the risk analysis and potential mitigation required if mailer adoption rates fall significantly below expectation, either as a separate document or as a part of the Intelligent Mail Corporate Plan. In response to our finding that it did not have a risk mitigation plan, USPS said that, at a technical level, the Intelligent Mail program has an extremely detailed risk mitigation plan that outlines both the process to identify risks and approaches to mitigate these risks. However, USPS also acknowledges that the risk associated with low mailer adoption is a valid program-level concern. We believe that including a discussion of how USPS plans to address key risks in an updated Corporate Plan is appropriate. Our recommendation was not limited only to risks associated with mailer adoption, and we continue to believe that USPS should identify and address other risks at the program level, such as resource limitations or schedule delays, and also include them in its Corporate Plan. 
We are sending copies of this report to the Chairman and Ranking Member of the House Committee on Oversight and Government Reform; the Ranking Member of its Subcommittee on Federal Workforce, Postal Service, and the District of Columbia; the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs; the Chairman and Ranking Member of its Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security; the Postmaster General; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at [email protected] or by telephone at (202) 512-2834. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

This report addresses (1) what the Intelligent Mail program is and the status of the U.S. Postal Service’s (USPS) implementation efforts and (2) the key risks to implementing Intelligent Mail and how USPS has addressed them. To address the first objective, we obtained documentation from and interviewed USPS officials involved in developing and managing the Intelligent Mail program. Documented information from USPS included annual reports, comprehensive operating statements, relevant decision analysis reports, quarterly investment highlights, and Intelligent Mail Federal Register notices, technical guides, readiness plans, schedules, presentations, and training material. Documented information from the Postal Regulatory Commission (PRC) included USPS proposals and responses, PRC questions and rulings, and public comments.
We also coordinated with the USPS Office of Inspector General (OIG), which was conducting at least two audits of the Intelligent Mail program at the same time as our review, and reviewed the reports resulting from these audits, released in March 2009, in addition to other OIG reports. To determine key changes required for USPS and mailers to prepare for the Intelligent Mail program, we compared USPS’s current and proposed changes to mail acceptance, verification, and processing and toured USPS and mailer facilities, including a presort and printing facility. To address the second objective—risks to implementing the Intelligent Mail program and how USPS addressed them—and to obtain the perspective of mailers, we interviewed representatives from nine companies involved in the mailing industry who participated in the development of the Intelligent Mail program. We contacted these companies directly or through trade associations. We also talked to representatives from six commercial mailer trade associations whose members collectively account for most of the mail sent. For example, one association said its members, which consist of over 50 for-profit and nonprofit organizations and major mailing associations, generate 70 percent of all mail. Documented information included association newsletters, written minutes of meetings held with USPS, and letters written by the associations to USPS or PRC officials. We attempted to interview mailers from small companies to obtain their perspective on the Intelligent Mail program, but we were told by the largest mailer association—representing both large and small companies and the greatest cross section of commercial mailers—that smaller companies were not as familiar with the Intelligent Mail program and, thus, were reluctant to talk to us.
USPS officials told us that to comply with Intelligent Mail standards, many smaller companies would either rely on vendors, such as software vendors, to provide software updates that included Intelligent Mail capabilities or pay larger mail preparation companies to prepare their mail. We also obtained and reviewed information from the Mailers’ Technical Advisory Committee, a venue for USPS to share information with mailers and receive advice and recommendations from workgroups established by the group’s leadership, which is composed of USPS and mailing industry officials. We reviewed documentation from the Mailers’ Technical Advisory Committee, including written meeting minutes and documentation of issues that workgroups have identified and are working on or have resolved, and we monitored weekly teleconferences held by individual workgroups. We attended a local meeting in Dallas, Texas, of a Postal Customer Council, a national program comprising over 200 local-level councils that provide a forum for mailers to exchange ideas for improved mail service and discuss new and existing USPS products, programs, regulations, and procedures. At the Postal Customer Council meeting, we participated in Intelligent Mail presentations given by USPS and observed presentations given by top USPS leadership. We also attended a presentation of USPS’s Intelligent Mail University, a 1-day USPS training course in Washington, D.C., for mailers. To identify risks associated with USPS’s Intelligent Mail program management approach, we worked with GAO’s Information Technology team, which provided analytical guidance in identifying relevant criteria. We used criteria based on the practices of leading organizations, such as the Software Engineering Institute and the Chief Information Officers Council, for effectively managing major programs and minimizing risks.
In addition, we used criteria identified in the GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs (GAO-09-3SP). We compared selected criteria to information and documentation obtained from USPS to identify areas where USPS’s management approach for Intelligent Mail did not match criteria for sound risk management. In order to provide as broad a perspective as possible on mailer concerns with USPS’s implementation of the Intelligent Mail program, we analyzed comments submitted in response to the January and April 2008 USPS Federal Register notices regarding Intelligent Mail, primarily by commercial mailers and mailer associations. Specifically, we reviewed two sets of mailer comments to two Federal Register notices: (1) the Implementation of New Standards for Intelligent Mail Barcodes, Advance Notice of Proposed Rulemaking (Advance Notice) in January 2008, 73 Fed. Reg. 1158 (Jan. 7, 2008) and (2) the Implementation of New Standards for Intelligent Mail Barcodes, Notice of Proposed Rulemaking (Proposed Rule) in April 2008, 73 Fed. Reg. 23393 (Apr. 30, 2008). We took a different analytical approach with each set of comments. Due to the large number of comments USPS said it received in response to its Advance Notice (nearly 400 written comments) and the intended use of the results of the analysis, we determined the most appropriate method to analyze these comments was to verify and validate USPS’s summary of these comments as published in the Proposed Rule. In the Proposed Rule, there were 16 statements made by USPS in summarizing the comments. We reviewed all of the original comments and found evidence of comments supporting each of the 16 statements. For the second Federal Register notice, the Proposed Rule, USPS said it received 67 sets of comments. We determined the most appropriate method to analyze these comments was to conduct a complete content analysis by reviewing and categorizing all 67 comments. 
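The categorization step of a content analysis like the one described above can be sketched simply. The comments and category labels below are invented for illustration and are not GAO's actual coding scheme: each comment is coded into one or more categories, and the categories are then tallied.

```python
# Tallying coded public comments by category — a sketch of the
# content-analysis step described above. The comments and category
# labels are invented, not GAO's actual coding scheme.

from collections import Counter

coded_comments = [
    {"commenter": "mailer A", "categories": ["costs", "schedule"]},
    {"commenter": "mailer B", "categories": ["costs"]},
    {"commenter": "association C", "categories": ["technical specs"]},
    {"commenter": "mailer D", "categories": ["costs", "technical specs"]},
]

# Count how many comments raised each issue (a comment can count
# toward several categories).
tally = Counter(cat for c in coded_comments for cat in c["categories"])
# Counter({'costs': 3, 'technical specs': 2, 'schedule': 1})
```

With 67 sets of comments, a tally like this shows which concerns were raised most often; verifying an agency's published summary, as was done for the Advance Notice comments, amounts to confirming that each summary statement is supported by at least one coded comment.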
We conducted this performance audit from September 2008 to April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Key events in the development of the Intelligent Mail program:
- companies and the Deputy Postmaster General of USPS, recommended Intelligent Mail as a way to respond to customers’ needs
- USPS established Intelligent Mail and Address Quality organization to identify and shepherd efforts to develop
- USPS published Intelligent Mail Corporate Plan, which established the vision of the program
- USPS finalized the format for the Intelligent Mail barcode
- USPS permitted mailers to begin using the Intelligent Mail barcode on letter mail
- Postal Accountability and Enhancement Act signed into law, requiring USPS to report on its service
- USPS announced that the Intelligent Mail program will be fully operational for all commercial mailers by 2009
- USPS published new specifications for the Intelligent Mail barcode
- In Federal Register Advance Notice of Proposed Rulemaking, USPS proposed to require mailers that get automation prices to use Intelligent Mail starting January 2009
- In Federal Register Proposed Rule, USPS revised standards and proposed that mailers will be eligible to use Intelligent Mail and receive incentives for using Full Service starting May 2009
- The USPS Board of Governors approved funding to create an infrastructure that will facilitate Intelligent Mail implementation
- In Federal Register Final Rule, USPS announced that it will allow POSTNET barcodes until May 2011 and that it will start offering Intelligent Mail Basic and Full Service in May 2009
- USPS announced price incentives for Intelligent Mail Full Service
- USPS will make Intelligent Mail available for mailers to start implementing and testing their systems
- Price incentives for Full Service mailers will go into effect and USPS plans to add other program functions
- All mailers who get price incentives for using barcodes will be required to use Intelligent Mail barcodes

Pub. L. No. 109-435, 120 Stat. 3198 (2006).

In addition to the contact named above, Teresa Anderson, Lauren Calhoun, Wendy Dye, Brandon Haller, Emily Larson, Amy Rosewarne, and Travis Thomson made key contributions to this report.
Over 80 percent of the approximately 200 billion mail pieces processed and delivered by the U.S. Postal Service (USPS) last year were sent by commercial mailers that barcode, sort, or transport mail to get lower postage rates. Starting in May 2009, USPS will encourage these mailers to use new barcodes with increased capabilities as part of a new program, Intelligent Mail. According to USPS, Intelligent Mail is the most complex change it has ever undertaken. GAO was asked to describe (1) the Intelligent Mail program and the status of implementation efforts and (2) the key risks to implementing Intelligent Mail and how USPS is addressing these risks. GAO reviewed USPS and regulatory documents and public comments and interviewed USPS officials, mailers, and mailer representatives involved in developing this program.

The Intelligent Mail program is a USPS effort to encourage commercial mailers to use standardized barcodes that will improve the ability to track mail. The program is centered on a new barcode that can uniquely identify a mail piece. While Intelligent Mail could provide benefits to both mailers and USPS, it will also require both to make significant changes to their processes and information systems. USPS expects to be prepared to begin implementation in May 2009. After that, USPS will phase in price incentives and other functions in November 2009 and will require mailers to use the new barcode by May 2011 to qualify for lower postage rates. Successful implementation of Intelligent Mail faces two key risks that, if not addressed, could limit the program's benefits: (1) USPS's management approach and (2) the possibility that mailers will not choose to participate in the program. USPS has taken some steps to address these risks, such as adopting a phased approach. However, USPS has not followed some key program management practices to reduce risks, raising questions about whether USPS and mailers will be able to meet schedule and program objectives.
For example, USPS (1) lacks a comprehensive strategy, including all planned phases and the specific functions and systems to be implemented in each phase; goals and measures of success; and a risk mitigation plan to address the risks that could affect the Intelligent Mail program as a whole; and (2) lacks information on costs and savings attributable to the Intelligent Mail program, including a baseline and mechanism to track and measure actual savings, which are needed to measure program performance. The second risk is that program success depends on mailer participation, and it is uncertain whether pricing and other incentives will encourage mailers to participate to the extent anticipated. Some mailers have said they find the pricing incentives insufficient to recover their investment in the program. The Postal Regulatory Commission has also noted that uncertainty may lead mailers to delay adoption. Low mailer adoption could affect USPS's ability to report representative delivery service results, as required by service performance reporting requirements, but USPS has not said how it would address this risk.
Today, federal employees are issued a wide variety of identification (ID) cards, which are used to access federal buildings and facilities, sometimes solely on the basis of visual inspection by security personnel. These cards often cannot be used for other important identification purposes—such as gaining access to an agency’s computer systems—and many can be easily forged or stolen and altered to permit access by unauthorized individuals. In general, the ease with which traditional ID cards—including credit cards—can be forged has contributed to increases in identity theft and related security and financial problems for both individuals and organizations. One means to address such problems is offered by the use of smart cards. Smart cards are plastic devices about the size of a credit card that contain an embedded integrated circuit chip capable of both storing and processing data. Figure 1 shows a typical example of a smart card. The unique advantage of smart cards—as opposed to cards with simpler technology, such as magnetic stripes or bar codes—is that smart cards can exchange data with other systems and process information rather than simply serving as static data repositories. By securely exchanging information, a smart card can help authenticate the identity of the individual possessing the card in a far more rigorous way than is possible with simpler traditional ID cards. A smart card’s processing power also allows it to exchange and update many other kinds of information with a variety of external systems, which can facilitate applications such as financial transactions or other services that involve electronic record-keeping. Smart cards can also be used to significantly enhance the security of an organization’s computer systems by tightening controls over user access. A user wishing to log on to a computer system or network with controlled access must “prove” his or her identity to the system—a process called authentication. 
Many systems authenticate users by merely requiring them to enter secret passwords, which provide only modest security because they can be easily compromised. Substantially better user authentication can be achieved by supplementing passwords with smart cards. To gain access under this scenario, a user is prompted to insert a smart card into a reader attached to the computer as well as type in a password. This authentication process is significantly harder to circumvent because an intruder would need not only to guess a user’s password but also to possess the same user’s smart card. Even stronger authentication can be achieved by using smart cards in conjunction with biometrics. Smart cards can be configured to store biometric information (such as fingerprint templates or iris scans) in electronic records that can be retrieved and compared with an individual’s live biometric scan as a means of verifying that person’s identity in a way that is difficult to circumvent. A system requiring users to present a smart card, enter a password, and verify a biometric scan provides what security experts call “three-factor” authentication, the three factors being “something you possess” (the smart card), “something you know” (the password), and “something you are” (the biometric). Systems employing three-factor authentication are considered to provide a relatively high level of security. The combination of smart cards and biometrics can provide equally strong authentication for controlling access to physical facilities. Smart cards can also be used in conjunction with public key infrastructure (PKI) technology to better secure electronic messages and transactions. 
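The three-factor combination described above can be sketched schematically. Real systems use cryptographic challenge-response with the card and fuzzy matching for biometric scans; here each factor is reduced to a simple comparison purely to show how the factors combine, and all identifiers and values are illustrative.

```python
# Schematic three-factor authentication check: card (possess),
# password (know), biometric (are). Real deployments replace each
# comparison with a cryptographic or fuzzy-matching protocol;
# all names and values here are illustrative.

import hashlib
import hmac

def verify_three_factors(card_id, password, biometric_template,
                         enrolled_card_id, password_hash, enrolled_template):
    has_card = hmac.compare_digest(card_id, enrolled_card_id)           # something you possess
    knows_pw = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), password_hash)   # something you know
    is_user = biometric_template == enrolled_template                   # something you are
    return has_card and knows_pw and is_user

enrolled_hash = hashlib.sha256(b"s3cret").hexdigest()
ok = verify_three_factors("card-42", "s3cret", "template-A",
                          "card-42", enrolled_hash, "template-A")  # True
```

The security benefit comes from requiring all three checks to pass: an intruder who guesses the password still fails without the physical card and a matching biometric.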
A properly implemented and maintained PKI can offer several important security services, including assurance that (1) the parties to an electronic transaction are really who they claim to be, (2) the information has not been altered or shared with any unauthorized entity, and (3) neither party will be able to wrongfully deny taking part in the transaction. Security experts generally agree that PKI technology is most effective when deployed in conjunction with smart cards. In addition to enhancing security, smart cards have the flexibility to support a wide variety of uses not related to security. A typical smart card in use today can store and process 16 to 32 kilobytes of data, while newer cards can accommodate 64 kilobytes. The larger the card’s electronic memory, the more functions can be supported, such as tracking itineraries for travelers, linking to immunization or other medical records, or storing cash value for electronic purchases. Smart cards are grouped into two major classes: contact cards and “contactless” cards. Contact cards have gold-plated contacts that connect directly with the read/write heads of a smart card reader when the card is inserted into the device. Contactless cards contain an embedded antenna and work when the card is waved within the magnetic field of a card reader or terminal. Contactless cards are better suited for environments where quick interaction between the card and the reader is required, such as high-volume physical access. For example, the Washington Metropolitan Area Transit Authority has deployed an automated fare collection system using contactless smart cards as a way of speeding patrons’ access to the Washington, D.C., subway system. Smart cards can be configured to include both contact and contactless capabilities, but two separate interfaces are needed because standards for the technologies are very different.
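The signing properties behind services (1) through (3) above can be illustrated with textbook RSA using tiny primes. This is a toy, not a secure implementation (real PKIs use large keys, padding schemes, and certificates, with the private key held on the smart card): the private exponent signs a message digest, and anyone holding the public exponent can verify the signature.

```python
# Toy illustration of PKI signing with textbook RSA (insecure tiny
# primes, no padding — for illustration only). The private exponent
# signs a message digest; the public exponent verifies it.

import hashlib

N, E, D = 3233, 17, 2753  # n = 61 * 53; e public, d private (e*d = 1 mod 3120)

def digest(message: bytes) -> int:
    # Reduce a SHA-256 digest into the toy key's range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    return pow(digest(message), D, N)       # requires the private key

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, E, N) == digest(message)  # needs only the public key

sig = sign(b"pay vendor $100")
assert verify(b"pay vendor $100", sig)                # authentic and unaltered
assert not verify(b"pay vendor $100", (sig + 1) % N)  # a garbled signature fails
```

Because only the cardholder's private key can produce a signature that the public key verifies, a valid signature supports both authentication and non-repudiation, and any alteration of the signed data causes verification to fail.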
Since the 1990s, the federal government has considered the use of smart card technology as one option for electronically improving security over buildings and computer systems. In 1996, GSA was tasked with taking the lead in facilitating a coordinated interagency management approach for the adoption of multiapplication smart cards across government. The tasking came from OMB, which has statutory responsibility to develop and oversee policies, principles, standards, and guidelines used by agencies for ensuring the security of federal information and systems. To make it easier for federal agencies to acquire commercial smart card products, GSA developed the governmentwide Smart Card Access Common ID contracting vehicle, which also specified adherence to the government smart card interoperability specification that NIST developed in collaboration with smart card vendors. In 2003, OMB, in accordance with the President’s vision of creating a more responsive and cost-effective government, issued a memorandum to federal chief information officers outlining details of the E-Authentication E-Government initiative on authentication and identity management. OMB also created the Federal Identity Credentialing Committee (FICC) to make policy recommendations and develop the Federal Identity Credentialing component of the Federal Enterprise Architecture, to include services such as identity proofing and credential management for the federal government. In February 2004, FICC issued policy guidance on the use of smart card–based systems in badge, identification, and credentialing systems with the objective of helping agencies plan, budget, establish, and implement credentialing and identification systems for government employees and their agents. In our January 2003 report on smart cards, we made recommendations to OMB, NIST, and GSA. 
Specifically, we recommended that the Director, OMB, issue governmentwide policy guidance regarding adoption of smart cards for secure access to physical and logical assets; the Director, NIST, continue to improve and update the government smart card interoperability specification by addressing governmentwide standards for additional technologies—such as contactless cards, biometrics, and optical stripe media—as well as integration with PKI; and the Administrator, GSA, improve the effectiveness of its promotion of smart card technologies within the federal government by (1) developing an internal implementation strategy with specific goals and milestones to ensure that GSA’s internal organizations support and implement smart card systems consistently; (2) updating its governmentwide implementation strategy and administrative guidance on implementing smart card systems to address current security priorities; (3) establishing guidelines for federal building security that address the role of smart card technology; and (4) developing a process for conducting ongoing evaluations of the implementation of smart card–based systems by federal agencies to ensure that lessons learned and best practices are shared across government. To date, all three agencies have taken actions to address the recommendations made to them. In response to our recommendation, OMB issued a July 3, 2003, memorandum to major departments and agencies directing them to coordinate and consolidate investments related to authentication and identity management, including the implementation of smart card technology. 
NIST has responded by improving and updating the government smart card interoperability specification to address additional technologies, including contactless cards and biometrics. GSA responded to our recommendations by updating its “Smart Card Policy and Administrative Guidance” to better address security priorities, including minimum security standards for federal facilities, computer systems, and data across the government. However, three of our four recommendations to GSA are still outstanding. GSA officials stated that they are working to address the recommendations to develop an internal GSA smart card implementation strategy, develop a process for conducting evaluations of smart card implementations, and share lessons learned and best practices across government. The responsibility for one recommendation—establishing guidelines for federal building security that address the role of smart card technology—was transferred to DHS.

In January 2003, we reported that 18 federal agencies were planning, testing, operating, or completing 62 smart card projects. These projects varied widely in size and technical complexity, ranging from small-scale, limited-duration pilot projects to large-scale, agencywide initiatives providing multiple services. The projects were reported in varying stages of deployment. Specifically, 17 projects were listed as operational, 13 projects were in the planning stage, and 7 were being piloted. In addition, 10 were reported at that time as having been completed or discontinued for various reasons. No information was provided about the project phase of the remaining 15 initiatives. In responding to our survey regarding the 52 projects listed as ongoing in our previous report, agencies reported that as of June 2004, 28 had been terminated. Of the remaining projects, 11 were operational, 5 were in the planning or pilot phase, and agencies did not provide current information on 8.
The operational and planned projects consist mostly of large-scale projects intended to provide identity credentials to an entire agency’s employees or other large groups of individuals. Figure 2 shows the current status of the 52 federal smart card projects that were previously reported as continuing. Table 1 provides summary information on the status of individual projects, providing reasons for any terminations. Agencies reported that the majority (28) of the above projects had been terminated since our last review was conducted. According to agency officials, reasons for termination were primarily that the projects were absorbed into other smart card projects or were deemed no longer feasible. For example, DOD terminated 14 of 26 previously reported projects by substituting functionality provided by two large-scale smart card projects, the Common Access Card (CAC) and EZPay (a project that was not previously reported). DOD’s CAC is to be used to authenticate the identity of nearly 3.5 million military and civilian personnel and to improve security over online systems and transactions. The EZPay program is a stored-value card given to recruits at training installations to accelerate the processing time and thus maximize training time. Table 2 provides further details on the remaining 16 ongoing projects. As the table shows, 12 of these are large-scale projects. Agencywide smart card projects are ongoing at NASA and the Departments of Defense, the Interior, State, and the Treasury. These and other large projects will serve populations ranging up to 6 million. The cards will be used for identity credentials, physical access to buildings, logical access to computer systems, and stored value. The remaining 4 projects are used for similar purposes. However, they are smaller in scale, serving populations ranging from 612 to 3,100 individuals.
For example, the Interior’s Minerals Management Service is planning a smart card program for use as identity credentials, and physical and logical access for about 2,100 employees. In response to our survey, agency officials reported 8 additional smart card projects that were ongoing at the time of our last review but not previously reported. Four of the 8 projects were planned for multiple applications such as identity credentials and physical and logical access. The remaining 4 projects were planned for single applications such as stored value, logical access to computer systems and networks, or processing travel documents. Figure 3 shows the number of these projects by the type of applications planned and the stage of reported deployment. Table 3 provides more detailed status information on these projects. In response to our survey, agencies reported 10 smart card projects that were initiated since our last review was completed. Based on these reported projects, more agencies are using GSA’s Smart Card Access Common ID contracting vehicle to acquire smart card technology. The 10 new projects identified in response to our survey vary in size, scope, and stage of deployment: planning, pilot, and operational. All of the projects are planned for multiple applications such as identity credentials and physical and logical access. Figure 4 shows the number of these projects by the type of application planned and the stage of reported deployment. These 10 projects vary widely in size, including small-scale projects— involving smart cards issued to as few as 126 cardholders—as well as much larger scale initiatives. For example, Department of Labor officials reported that the Employment and Training Administration physical access control smart card was issued to 126 federal employees and contractors as of December 2003. 
This card is operational and will be issued to 175 cardholders when fully deployed; it is used for identity credentials and physical access to buildings and other facilities. In contrast, VA plans to issue an estimated 500,000 smart cards to employees and contractors under its Authentication and Authorization Infrastructure Project. Through this initiative, smart cards will be used for identity credentials, accessing buildings or other facilities, and accessing computer systems. Production began in July 2004. Another example of a large-scale project is GSA’s Nationwide ID card. GSA plans to issue cards to 61,000 federal employees, contractors, and tenant agencies. Using this card, GSA plans to implement nationwide uniform credentials based on smart card technology by providing a single standard credential card for identification, building access, property management, and other applications. Table 4 provides status information on the 10 recently initiated smart card projects. GSA developed the Smart Card Access Common ID contracting vehicle to help make it easier for federal agencies to acquire commercial smart card products and services. According to the director of GSA’s Center for Smart Card Solutions, further guidance is planned that will require agencies to use the contracting vehicle or provide justification for not using it. The Director also stated that using GSA’s contract should help reduce the cost of smart cards and ensure that vendors incorporate interoperability specifications. Between December 2004 and December 2008, five agencies—including NASA and the Departments of Defense, Homeland Security, the Interior, and Veterans Affairs—are planning to make an aggregated purchase of up to 40 million cards through the GSA contract. As a part of this purchase, these agencies are scheduled to begin making quarterly procurements beginning in December 2004 of approximately 1.2 million cards. 
In response to our survey, the majority of the agencies (4 of 7) that reported new initiatives told us that they purchased smart cards under the GSA contract. The remaining agencies cited reasons for not acquiring smart cards under the GSA contract, such as purchase arrangements with another agency or purchases under other types of contracts. Agencies continue to move towards integrated agencywide initiatives that use smart cards as identity credentials that agency employees can use to gain both physical access to facilities, such as buildings, and logical access to computer systems and networks. In some cases, additional functions, such as asset management and stored value, are also being included. Nine agencies reported such projects: 4 of these were reported in our prior report, and 5 are recently initiated efforts. These projects are in various stages of deployment. One of the largest agencywide efforts is DHS’s identification and credentialing project. The agency plans to issue 250,000 cards to employees and contractors. This is a comprehensive identification and credentialing effort that will use PKI technology for logical access and proximity chips for physical access. Authentication will rely on biometrics with a personal identification number as a backup. Other recently initiated agencywide smart card projects include GSA’s Nationwide Identification, VA’s Authentication and Authorization Infrastructure Project, and the Department of Labor’s E-Authentication project. Table 5 summarizes both previously reported and recently initiated agencywide smart card efforts. We received oral comments on a draft of this report from GSA’s Associate Administrator, Office of Governmentwide Policy, and from officials of OMB’s Office of Information and Regulatory Affairs and its Office of General Counsel. Both GSA and OMB generally agreed with the content in the draft report. 
In addition, each agency provided technical comments, which have been addressed where appropriate in the final report. We will provide copies of this report to the Director of OMB and the Administrator of GSA, and the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you have any questions on matters contained in this report, please contact me at (202) 512-6240 or John de Ferrari, Assistant Director, at (202) 512-6335. We can also be reached by e-mail at [email protected] and [email protected], respectively. Other key contributors to this report were Tonia Brown, Barbara Collier, Felipe Colón, Pamlutricia Greenleaf, and Joel Grossman. Our objectives were to (1) determine the current status of smart card projects under way at the time of our last review, (2) identify and determine the status of projects initiated since our last review was completed, and (3) identify integrated agencywide smart card projects that are currently under way. To address these objectives, we developed a questionnaire and surveyed 24 federal agencies. These included agencies that are subject to the provisions of the Chief Financial Officers Act as well as the Department of Homeland Security. The survey included the 18 agencies pursuing smart card projects that were identified in our previous report. The practical difficulties of conducting any survey may introduce errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for the purpose of minimizing such errors. We analyzed information obtained through the survey to develop summary results and identify trends. 
To ensure the reliability of the information reported through the survey, we obtained available supporting documentation—such as project plans and descriptions—to verify (1) reported planning and implementation dates and (2) the numbers of smart cards issued as of December 31, 2003, or planned for issuance. As needed, we conducted follow-up interviews with agency officials responding to the survey to further ensure that the information provided was current and accurate. In addition, we contacted GSA officials to discuss agencies’ use of the Smart Card Access Common ID contract and other governmentwide implementation issues. We performed our work in Washington, D.C., and Atlanta, Georgia, between November 2003 and July 2004, in accordance with generally accepted government auditing standards.
Smart cards--plastic devices about the size of a credit card--use integrated circuit chips to store and process data, much like a computer. Among other uses, these devices can provide security for physical assets and information by helping to verify the identity of people accessing buildings and computer systems. They can also support functions such as tracking immunization records or storing cash value for electronic purchases. Government adoption of smart card technology is being facilitated by the General Services Administration (GSA), which has implemented a governmentwide Smart Card Access Common ID contract, which federal agencies can use to procure smart card products and services. GAO was asked to update information that it reported in January 2003 on the progress made by the federal government in promoting smart card technology. Specific objectives were to (1) determine the current status of smart card projects identified in GAO's last review, (2) identify and determine the status of projects initiated since the last review, and (3) identify integrated agencywide smart card projects currently under way. To accomplish these objectives, GAO surveyed the 24 major federal agencies. In commenting on a draft of this report, officials from GSA and the Office of Management and Budget generally agreed with its content. According to GAO's survey results, as of June 2004, more than half of the smart card projects previously reported as ongoing (28 out of 52) had been discontinued because they were absorbed into other smart card projects or were deemed no longer feasible. Of the remaining 24 projects, 16 are in planning, pilot, or operational phases and are intended to support a variety of uses (agencies did not provide current information for 8 projects). Twelve of the 16 projects are large-scale projects intended to provide identity credentials to an entire agency's employees or other large group of individuals. 
For example, the Department of Defense's (DOD) Common Access Card is to be issued to an estimated 3.5 million DOD-related personnel, and the Transportation Security Administration's Transportation Worker Identification Credential is to be used by an estimated 6 million transportation industry workers. The other 4 projects are smaller in scale, and are intended to provide access or other services to limited groups of people. For example, the Department of Commerce's Geophysical Fluid Dynamics Laboratory Access Card is to be issued to about 612 employees, contractors, and research collaborators. Further, in response to the survey, agencies reported 8 additional smart card projects that were ongoing at the time of the last review. These projects include 4 planned for multiple applications (such as identity credentials and access) and 4 for single applications, including stored value, access to computer systems, and processing travel documents. Based on GAO's survey of federal agencies, 10 additional smart card projects have been initiated since the last review. These projects vary widely in size and scope. Included are small-scale projects, involving cards issued to as few as 126 cardholders (such as a project in the Department of Labor's Employment and Training Administration), and large-scale agencywide initiatives, such as the Department of Veterans Affairs Authentication and Authorization Infrastructure card, which is to be issued to an estimated 500,000 employees and contractors. Four agencies reported purchases under GSA's Smart Card Access Common ID contracting vehicle, and others likewise have plans to use this contract. Specifically, five agencies--the Departments of Defense, Homeland Security, the Interior, and Veterans Affairs, and the National Aeronautics and Space Administration--are planning to make an aggregated purchase of up to 40 million cards over the next 4 years using the GSA contract. 
Finally, nine agencies are developing and implementing integrated agencywide smart card initiatives. These projects are intended to use one card to support multiple functions, such as providing identification credentials, accessing computer systems, and storing monetary values.
As part of VA’s mission, VHA is to serve the needs of America’s veterans and their families (spouses and children) by providing primary care, specialized care, and related medical and social support services. VHA provides health services through more than 1,500 sites of care, including 153 hospitals, 995 outpatient clinics, 135 community living centers, and 232 Vet Centers. It employs more than 15,000 physicians and serves more than 5 million patients at these sites of care each year.

To carry out its daily operations in providing health care to veterans and their families, VHA relies on an outpatient appointment scheduling system that is part of the department’s current electronic health information system, known as VistA. However, according to the department, the current scheduling system has a number of limitations that impede its effectiveness, including:

- Appointment activity resides at multiple medical centers, making it difficult to retrieve all of a patient’s health care history.
- Clinicians must maintain multiple calendars to account for the various services they provide.
- Appointments and ancillary services are not linked, resulting in the inability to associate medical data with appointments.
- Access to multiple sites is required to make appointments, resulting in inefficient coordination of care between facilities.

Accordingly, in 2000, VHA initiated a project to replace the existing scheduling system. In doing so, it envisioned that the new scheduling system would provide benefits for the department, including:

- a single enterprise database that would allow all appointments to be viewed, regardless of the point of care;
- calendars that would include sequential appointment settings;
- long-term appointment lists that would track and remind staff of future appointments; and
- ancillary service links that would allow for automated updates to appointment cancellations.
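The central envisioned benefit, one enterprise store in which any appointment is visible regardless of the point of care, can be sketched in a few lines. The types and field names below are hypothetical, invented purely for illustration, and are not drawn from VA’s actual design:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of a single enterprise appointment store: every
# appointment, regardless of site, lives in one place and a patient's
# full scheduling history is one query away.

@dataclass
class Appointment:
    patient_id: str
    site: str                # point of care (medical center, clinic, etc.)
    clinician: str
    when: datetime
    linked_services: list = field(default_factory=list)  # ancillary services

@dataclass
class EnterpriseSchedule:
    appointments: list = field(default_factory=list)

    def book(self, appt: Appointment) -> None:
        self.appointments.append(appt)

    def history(self, patient_id: str) -> list:
        # One query spans all sites -- no per-facility lookups required.
        return sorted(
            (a for a in self.appointments if a.patient_id == patient_id),
            key=lambda a: a.when,
        )

sched = EnterpriseSchedule()
sched.book(Appointment("P1", "Houston", "Dr. B", datetime(2006, 11, 5, 14)))
sched.book(Appointment("P1", "Muskogee", "Dr. A", datetime(2006, 10, 2, 9)))
# Chronological history across both sites, retrieved from one store:
print([a.site for a in sched.history("P1")])
```

Under the legacy design described above, the same lookup would require separate queries to each medical center’s VistA instance and a manual merge of the results.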
VA originally planned to deploy the new outpatient scheduling system to an initial site by December 2004 and nationally by June 2006. In August 2002, the department had estimated that the total cost to develop and deploy the new system across all VHA facilities would be about $59 million. VHA began the scheduling replacement initiative in October 2000, at which time it began to identify business requirements for the new system. It also issued a request for proposals, seeking interested Veterans Integrated Service Networks (VISN) to partner with its Office of Information to conduct a business process reengineering effort and replace the VistA scheduling system with a commercial off-the-shelf (COTS) application. In January 2001, VHA selected VISNs 16 and 17, representing Texas and the south-central United States, respectively, to perform these tasks. Additionally, the Muskogee, Oklahoma medical center, part of VISN 16, was the planned location for the initial deployment of the new scheduling system. The VISNs used a pre-existing Cooperative Administrative Support Units (CASU) contract to obtain the services of the Southwest Research Institute (SwRI) to support the project. The statement of work included tasks to develop business information flow models and information system technical documents, and to select a COTS product to integrate into the system. However, according to project officials, in April 2002, VHA’s Chief Information Officer (CIO) determined that using a COTS solution would result in excessive costs and make the department dependent on a vendor for a core business function. Thus, the CIO directed the VISNs to redirect their efforts and funding to develop a scheduling application instead of purchasing a COTS application. VA issued a new statement of work for SwRI to design, build, and test the scheduling application. 
The department planned to deliver the new outpatient scheduling system first to the location in Muskogee, referred to as the alpha deployment, by December 2004. Once successfully tested and deployed at this location, the system was to be deployed within VISNs 16 and 17 for testing, then nationally to all VHA facilities. In 2004, issues integrating the application with HealtheVet components and funding reductions led to a delay in the alpha deployment date, pushing it back to October 2006. In an effort to meet the new date, VA decided in April 2005 to descope the alpha version of the scheduling application by removing certain planned capabilities. Simultaneously, the department and SwRI began treating a separate version that was to retain all planned capabilities as a distinct development effort, referred to as the beta version. Nevertheless, delays in correcting defects, conducting tests, and changing the code in response to infrastructure modifications resulted in six more extensions of the target alpha deployment date (over 2½ years beyond the October 2006 planned date). Further, in an attempt to expedite the project, in September 2008, the Principal Deputy Under Secretary for Health directed the project team to focus its efforts on a national deployment of the new scheduling system by the end of 2009, rather than on the single-site alpha deployment. However, in January 2009, the project team determined that the product that had been developed for alpha deployment would not be suitable for national deployment by the end of 2009; thus, in February 2009, the department terminated its contract for the replacement scheduling application. VA subsequently ended the entire Scheduling Replacement Project in September 2009. Figure 1 depicts a timeline of key project events from its initiation through its termination.
Several organizations within VA were responsible for governance of the Scheduling Replacement Project:

- In July 2000, VHA established a project management office to coordinate all efforts and monitor project activities to ensure success of the Scheduling Replacement Project. The project management office was to ensure achievement of milestones, evaluate project success, and report to VHA senior level executives.
- In June 2001, VHA established the Scheduling Replacement Board of Directors to guide the overall direction of the project. According to its charter, the board was to review project activities on a quarterly basis, provide key decisions at major project milestones, confirm the achievement of project milestones, and evaluate project success.
- In February 2003, VA established an Enterprise Information Board as its executive decision-making body for information technology (IT) capital planning and investment control. The board was to provide oversight in the selection, management, and control of IT investments such as the scheduling system.
- In February 2007, the Secretary of Veterans Affairs approved a centralized IT management structure for the department. As part of this realignment, staff from the project management office with responsibility for the Scheduling Replacement Project were transferred from VHA to the Office of Enterprise Development (OED) within VA’s Office of Information and Technology (OI&T).

Also in 2007, VA issued a governance plan to enable the department to better align its IT strategy to its business strategy, manage investments, and reconcile disputes regarding IT. The governance structure established by the plan included three governance boards for IT projects, such as the Scheduling Replacement Project:

- The Budgeting and Near-Term Issues Board is to identify, review, recommend, and advocate projects and programs across the department. The board’s responsibilities include monitoring projects’ achievement of results.
- The Programming and Long-Term Issues Board is to oversee portfolio development and evaluate program execution by conducting milestone reviews and program management reviews of IT investments.
- The Information Technology Leadership Board is responsible for adjudicating all unresolved resource issues forwarded by the Budgeting and Near-Term Issues Board and forwarding recommendations to the department’s Strategic Management Council.

We and VA’s Office of Inspector General have both issued reports concerning the HealtheVet initiative and the Scheduling Replacement Project. Specifically, in a June 2008 report, we raised concerns about VA’s HealtheVet initiative. We noted that the eight major software development projects comprising the initiative (which included the Scheduling Replacement Project) were in various stages of development and that none had yet been completed. We noted that while VA had established interim dates for completing the component projects, it had not developed a detailed schedule or approach for completing the overall HealtheVet initiative. Further, the department had not yet implemented a complete governance structure; several key leadership positions within the development organization had not been filled or were filled with acting personnel; and the departmental governance boards had not scheduled critical reviews of HealtheVet projects. We concluded that, without all elements of governance and oversight in place, the risk to the success of the HealtheVet initiative and, therefore, its component initiatives (such as the Scheduling Replacement Project) was increased. Accordingly, we recommended that VA develop a comprehensive project management plan and schedule, as well as a governance structure, to guide the development and integration of the many projects under this complex initiative.
Subsequent to our 2008 report, VA reported that it had begun to formulate a project management plan, an integrated schedule of projects, and a governance plan for the HealtheVet initiative. Further, in reporting on the development of the replacement scheduling application in August 2009, the Office of Inspector General noted, among other things, that VA did not have staff with the necessary expertise to execute large-scale IT projects. The report also noted that there was minimal oversight of the contracting processes on the project and that the department had made no attempt to find a contracting officer with experience for this multi-year, complex project. The Inspector General suggested that VA develop effective oversight processes, develop in-house staff with the expertise to manage and execute complex integrated IT programs, and expand the number of contracting officers with experience on large projects. In response, the department consolidated IT procurements under the Office of Acquisition, Logistics, and Construction and established the Technology Acquisition Center to administer future OI&T contracts. After spending an estimated $127 million over 9 years (from fiscal years 2001 through 2009) on its outpatient scheduling system project, VA has not yet implemented any of the system’s expected capabilities. According to the department, of the total amount, $62 million was expended for, among other things, project planning, management support, a development environment, and equipment. In addition, the department paid an estimated $65 million to SwRI to develop the replacement scheduling application. However, VA and SwRI were not able to resolve a large number of system defects, and the department terminated the contract in February 2009. Subsequently, the department determined that the application was not viable (i.e., did not meet its needs), and officially ended the Scheduling Replacement Project on September 30, 2009. 
The department began a new initiative on October 1, 2009, which it refers to as HealtheVet Scheduling. However, as of early April 2010, it had completed only limited tasks for the new initiative. Specifically, the department’s efforts consisted of analyzing alternatives and briefing VA’s CIO on the analysis. Officials told us that they had not yet developed a project plan or schedule for the initiative, but intended to do so after determining whether to build or buy the new application. The success of large IT projects is dependent on agencies’ possessing management capabilities to effectively conduct acquisitions, manage system requirements, perform system tests, measure and report project performance, and manage project risks. In addition, effective institutionalized oversight is necessary to ensure that projects are, in fact, demonstrating these management capabilities and achieving expected results. However, the Scheduling Replacement Project had weaknesses in these areas that, if not addressed, could derail the department’s current attempt to deliver a new scheduling system. The Federal Acquisition Regulation (FAR) requires preparation of acquisition plans, and our prior work evaluating major system acquisitions has found that planning is an essential activity to reduce program risk. According to the FAR, an acquisition plan must address, among other things, how competition will be sought, promoted, and sustained throughout the course of the acquisition, or cite the authority and justification for why full and open competition cannot be obtained. Competition can help save taxpayer money, improve contractor performance, and promote accountability for results. Agencies are generally required to obtain full and open competition, except in certain specified situations such as modifications within the scope of the existing contract. 
Orders placed against a federal supply schedule are considered to be issued using full and open competition if the applicable procedures are followed. We have also found that having a capable acquisition workforce is a necessary element of properly conducting acquisitions that will meet agency needs. VA did not develop an acquisition plan until May 2005, about 4 years after the department first contracted for a new scheduling system. Thus, formative decisions with implications for the scheduling project’s success, such as what the contractor was to do, the type of contract to be used, and how competition would be promoted and, if not, why, were made in an ad hoc fashion (i.e., not subject to a deliberative planning process). Further, VA did not promote competition in contracting for its scheduling system. Specifically, rather than performing activities that are intended to promote competition (e.g., announcing the requirement, issuing a solicitation, and evaluating proposals), VA issued task orders against an existing CASU contract that the department had in place for acquiring services such as printing, computer maintenance, and data entry. Later, when the department changed its strategy to acquire a custom-built scheduling application instead of pursuing COTS integration—a fundamental change to the development approach and contract scope—the department again did not seek to obtain the benefits of competition. Instead, the project team directed the change through a letter to the existing contractor and a substantially revised statement of work. In August 2004, VA determined that it would no longer support the CASU agreement, and in response, the project team sought to use a General Services Administration (GSA) schedule contract to retain the services of its existing contractor. However, VA did not follow required ordering procedures when it transitioned to the GSA schedule contract. 
Specifically, VA did not solicit price quotes from at least three schedule vendors, as required by the FAR. Instead, at the direction of the program office, the department provided a statement of work only to the incumbent contractor, which responded with a proposal and price quote. As a result, VA increased the risk that it was not selecting a contractor that would provide the best approach. Further, VA did not assess whether the purchase of commercial services under this schedule was the most suitable means for developing a custom-built scheduling application. These weaknesses in VA's acquisition management for the scheduling system project reflect the inexperience of the department's personnel in administering major IT contracts. In this regard, VA's Inspector General identified the lack of VA personnel who are adequately trained and experienced to plan, award, and administer IT contracts as a major management challenge for the department and specifically cited the scheduling system acquisition as an example. Also, VA's contracting officer told us that the contracting office did not have prior experience in the award or administration of contracts for IT system development. According to the HealtheVet Scheduling program manager, going forward, the scheduling system project team plans to use VA's Technology Acquisition Center within the Office of Acquisition, Logistics, and Construction to administer future contracts. Established in March 2009 in an effort to improve the department's IT acquisition management, the center is composed of experienced acquisition staff members who are to provide exclusive contracting support to the Office of Information and Technology. According to the Executive Director, the Technology Acquisition Center includes technical specialists who can offer assistance with refining statements of work and contractual requirements. 
Also, representatives from the Office of General Counsel are colocated with the center to facilitate reviews for compliance with applicable federal laws and regulations. Although VA has taken positive actions to improve its IT acquisition management, these actions do not ensure that the department will not repeat the pattern of failing to seek and promote competition and other weaknesses that it demonstrated in contracting for the scheduling system. Until the department ensures that it has adequately planned for the future acquisition of a scheduling system, including whether and how it will provide for competition or otherwise comply with federal contracting requirements, it cannot ensure that it will be effective in acquiring a system that meets user needs at a reasonable cost and within a reasonable time frame. According to recognized guidance, using disciplined processes for defining and managing requirements can help reduce the risks of developing a system that does not meet user needs, cannot be adequately tested, and does not perform or function as intended. Requirements should serve as the basis for a shared understanding of the system to be developed. Among other things, effective practices for defining requirements include analyzing requirements to ensure that they are complete, verifiable, and sufficiently detailed to guide system development. In addition, maintaining bidirectional traceability from high-level operational requirements through detailed low-level requirements to test cases is an example of a disciplined requirements management practice. Further, in previous work, we have found that requirements development processes should be well-defined and documented so that they can be understood and properly implemented by those responsible for doing so. VA did not adequately analyze requirements to ensure they were complete, verifiable, and sufficiently detailed to guide system development. 
For example, in November 2007, VA determined that performance requirements were missing and that some requirements were not testable. Further, according to project officials, some requirements were vague and open to interpretation. For example, although the requirement to sort appointment requests to be processed was included, it required clarification on how those appointments should be sorted. Also, requirements for processing information from systems on which the scheduling application depended were missing. For example, in June 2008, several requirements for processing updates to a patient’s eligibility had to be added. The incomplete and insufficiently detailed requirements resulted in a system that did not function as intended. In addition, VA did not ensure that requirements were fully traceable. As early as October 2006, an internal review of the scheduling project’s requirements management noted that the requirements did not trace to business rules or to test cases. Yet, almost 2 years later, in August 2008, VA documentation continued to reflect this problem—stating that not every lower-level requirement traced back to one or more of the higher-level functional requirements and down to test cases. By not ensuring requirements traceability, the department increased the risk that the system could not be adequately tested and would not function as intended. According to scheduling project officials, requirements were incomplete, in part, because they depended on information from other related systems that had not yet been fully defined. In addition, VA did not develop a requirements management plan for the Scheduling Replacement Project until October 2008. Our analysis of this plan found it to be generally consistent with leading practices. However, the project team’s use of the requirements management plan was precluded by the department’s decision to end the project. 
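The bidirectional traceability practice discussed above, in which every high-level requirement traces down to test cases and every test case traces back up, can be sketched as a simple audit check. This is a minimal illustration, not VA's actual process; all requirement and test-case identifiers below are hypothetical.

```python
# Hypothetical trace links: high-level requirements decompose into low-level
# requirements, and low-level requirements are covered by test cases.
high_to_low = {
    "HLR-1": ["LLR-1.1", "LLR-1.2"],
    "HLR-2": ["LLR-2.1"],
    "HLR-3": [],                      # never decomposed into low-level requirements
}
low_to_tests = {
    "LLR-1.1": ["TC-101"],
    "LLR-1.2": [],                    # never covered by a test case
    "LLR-2.1": ["TC-201", "TC-202"],
}

def untraced_items(high_to_low, low_to_tests):
    """Return requirements that break the trace in either direction."""
    no_decomposition = [h for h, lows in high_to_low.items() if not lows]
    no_test_coverage = [l for l, tcs in low_to_tests.items() if not tcs]
    # Low-level requirements that do not trace back to any high-level requirement.
    all_lows = {l for lows in high_to_low.values() for l in lows}
    orphan_lows = [l for l in low_to_tests if l not in all_lows]
    return no_decomposition, no_test_coverage, orphan_lows

no_decomp, no_tests, orphans = untraced_items(high_to_low, low_to_tests)
print(no_decomp)   # high-level requirements with no downward trace
print(no_tests)    # low-level requirements with no test case
```

Gaps in either direction, like those the internal reviews reported, surface directly from such a check rather than being discovered two years later.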
According to the Scheduling program manager, the project team expects to further develop the requirements management plan, dependent upon the department’s yet-to-be-selected alternative for proceeding with the current effort, HealtheVet Scheduling. Nevertheless, the department has not yet demonstrated its capability to execute effective requirements management practices. Without well-defined and managed requirements, VA and its contractor lacked a common understanding of the system to be developed and increased the risk that the system would not perform as intended. Going forward, effective requirements development and management will be essential to ensuring that this risky situation, which could endanger the success of VA’s new scheduling system project, is not repeated. Best practices in system testing indicate that testing activities should be performed incrementally, so that problems and defects with software versions can be discovered and corrected early, when fixes generally require less time and fewer resources. VA’s guidance on conducting tests during IT development projects is consistent with these practices and specifies four test stages and associated criteria that are to be fulfilled in order to progress through the stages. For example, defects categorized as critical, major, and average severity that are identified in testing stage one (performed within the development team) are to be resolved before testing in stage two (performed by the testing services organization) is begun. Nonetheless, VA took a high-risk approach to testing the scheduling system by performing tests concurrently rather than incrementally. Based on information provided by project officials, the department began stage two testing on all 12 versions of the scheduling application before stage one testing had been completed. On average, stage two testing began 78 days before stage one testing of the same version had been completed. 
In two of these cases, stage two testing started before stage one testing had begun. Compounding the risk inherent in this concurrent approach to testing, the first alpha version to undergo stage two testing had 370 defects that were of critical, major, or average severity even though the department’s criteria for starting stage two testing specified that all such defects are to be resolved before starting stage two testing. While stage two testing was ongoing, VA made efforts to reduce the number of defects by issuing additional task orders for defect repair to its contractor and by hiring an additional contractor whose role was to assist in defect resolution. However, almost 2 years after beginning stage two testing, 87 defects that should have been resolved before stage two testing began had not been fixed. Scheduling project officials told us that they ignored their own testing guidance and performed concurrent testing at the direction of Office of Enterprise Development senior management in an effort to prevent project timelines from slipping. In addition, project officials told us they made a conscious decision to conduct concurrent testing in an effort to promote early identification of software defects. However, because the department did not follow its guidance for system testing and, instead, performed concurrent testing, it increased the risk that the scheduling project would not perform as intended and would require additional time and resources to be delivered. If VA is to be successful in its new initiative to provide an outpatient scheduling system, it is critical that the department adhere to its own testing guidance for ensuring the resolution of problems in a timely and cost-effective manner. Not doing so lessens the usefulness of results from its testing activities and increases the risk of additional system development failures. 
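The stage-entry criteria described above, under which defects of average and higher severity must be resolved before a version advances to the next test stage, can be sketched as a simple gate check. The numeric severity ranking is an assumption for illustration; only the severity names come from VA's guidance as described in this report.

```python
# Illustrative severity ranking; the ordering reflects the severities named in
# VA's testing guidance (critical, major, average), with "minor" assumed below them.
SEVERITY_RANK = {"critical": 3, "major": 2, "average": 1, "minor": 0}

def ready_for_next_stage(open_defects, threshold="average"):
    """Return True only if no open defect is at or above the threshold severity."""
    limit = SEVERITY_RANK[threshold]
    return all(SEVERITY_RANK[d] < limit for d in open_defects)

# A version with an unresolved major defect must not advance...
print(ready_for_next_stage(["major", "minor"]))   # False
# ...while one with only minor defects may.
print(ready_for_next_stage(["minor", "minor"]))   # True
```

Under such a gate, the alpha version with 370 critical, major, or average defects would have been blocked from entering stage two testing.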
Office of Management and Budget (OMB) and VA policies require major projects to use earned value management (EVM) to measure and report progress. EVM is a tool for measuring project progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Such a comparison permits actual performance to be evaluated, based on variances from the cost and schedule baselines. Identification and reporting of variances and analysis of their causes help program managers determine the need for corrective actions. In addition, the cost performance index (CPI) and schedule performance index (SPI) are indicators of whether work is being performed more or less efficiently than planned. Like the variances, reporting of CPI and SPI can provide early warning of potential problems that need correcting to avoid adverse results. For a complete view of program status and an indication of where problems exist, performance data should be reported for both current (generally the most recent month) and cumulative periods. In addition, federal policy requires that systems used to collect and process EVM data be compliant with the industry standard developed by the American National Standards Institute (ANSI) and Electronic Industries Alliance (EIA), ANSI/EIA Standard 748. Such compliance is necessary to demonstrate the capability to provide reliable cost and schedule information for earned value reporting. Although VA submitted monthly reports to the department’s CIO based on earned value data for the scheduling project, the reliability of the data on which the reports were based was questionable and the reports included data that provided inconsistent views of project performance. Specifically regarding data reliability, department officials did not ensure that the EVM reporting systems for the scheduling project had been certified for compliance with ANSI/EIA Standard 748. 
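As a minimal sketch, the variance and index calculations described above can be expressed as follows. The dollar figures and stoplight thresholds are hypothetical; the formulas are the standard earned value definitions, where PV is the budgeted cost of work scheduled, EV the budgeted cost of work performed, and AC the actual cost of work performed.

```python
def evm_measures(pv, ev, ac):
    """Standard earned value variances and indices."""
    return {
        "cost_variance": ev - ac,      # negative -> over budget
        "schedule_variance": ev - pv,  # negative -> behind schedule
        "cpi": ev / ac,                # < 1.0 -> less cost-efficient than planned
        "spi": ev / pv,                # < 1.0 -> behind the planned schedule
    }

def stoplight(index, green_floor=0.95, yellow_floor=0.85):
    """Map an index to a report color; these thresholds are illustrative only."""
    if index >= green_floor:
        return "green"
    if index >= yellow_floor:
        return "yellow"
    return "red"

# A project that planned $1.0M of work, completed $0.8M of it, and spent $1.2M:
m = evm_measures(pv=1_000_000, ev=800_000, ac=1_200_000)
print(round(m["cpi"], 2), round(m["spi"], 2))   # 0.67 0.8
print(stoplight(m["cpi"]))                      # red
```

With consistent thresholds applied to the cumulative indices, a project running over budget and behind schedule cannot simultaneously be reported green, which is the kind of contradiction that appeared in VA's status reports.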
According to the former program manager, the department did not seek to determine whether its development contractor’s system was compliant because SwRI entered cost and schedule data directly into the department’s EVM system. Although department officials asserted that this EVM system was compliant with ANSI/EIA Standard 748, the department could not provide documentation of such compliance. Because VA had not demonstrated compliance with the standard, it could not ensure that the data resulting from its EVM system and used for progress reporting were reliable. Regarding EVM reporting, in January 2006, the scheduling project management office began providing monthly reports to the department’s CIO that were based on EVM data. However, in addition to being based on data from EVM systems that had not been assessed for compliance with the applicable standard, the progress reports also included contradictory information about project performance. Specifically, the reports featured stoplight (i.e., green for “in control,” yellow for “caution,” or red for “out of control”) indicators, based on the cumulative CPI and SPI. These indicators frequently provided a view of project performance that was inconsistent with the reports’ narrative comments. For example, the September 2006 report identified cost and schedule performance as green, even though supporting narrative comments stated that the project schedule was to be extended by 9 months due to a delay in performing testing and the need for additional time to repair system defects. The June 2007 report also identified project cost and schedule performance as green, despite the report noting that the project budget was being increased by $3 million so that the development contract could be extended to accommodate schedule delays. 
Further, the December 2007 report identified cost and schedule performance as green, while at the same time stating that the development contract was to be extended again and that a cost variance would be reported in the near future. This pattern of inconsistent progress reporting continued until October 2008, when the report for that month and all others through August 2009 showed cost and schedule performance as red, which was consistent with the actual state of the project. In discussing this matter, the former program manager stated that the Scheduling Replacement Project complied with the department’s EVM policies, but noted that the department performed EVM for the scheduling project only to fulfill the OMB requirement and that the data were not used as the basis for decision making because doing so was not a part of the department’s culture. Because VA’s scheduling project was not managed in accordance with EVM methods that could provide a widely recognized means of reliably determining and reporting cost and schedule performance, the department was not positioned to detect performance shortfalls and initiate timely corrective actions that might have prevented the project’s failure. Having EVM reporting that provides a reliable measure of progress will be essential as the department moves forward with its new scheduling project. Managing project risks means proactively identifying circumstances that increase the probability of failure to meet commitments and taking steps to prevent them from occurring. Federal guidance and best practices advocate risk management. To be effective, risk management activities should include identifying and prioritizing risks as to their probability of occurrence and impact, documenting them in an inventory, and developing and implementing appropriate risk mitigation strategies. By performing these activities, potential problems can be avoided before they become actual cost, schedule, and performance shortfalls. 
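The probability-and-impact prioritization described above can be sketched as a small risk register. The entries mirror the three unidentified risks discussed in this report, but the 1-5 scoring scale and the scores themselves are hypothetical.

```python
# Hypothetical risk inventory; probability and impact are scored 1 (low) to 5 (high).
risks = [
    {"risk": "noncompetitive acquisition approach", "probability": 4, "impact": 5},
    {"risk": "concurrent test stages",              "probability": 5, "impact": 4},
    {"risk": "unreliable EVM reporting",            "probability": 3, "impact": 4},
]

# Exposure = probability x impact; the highest-exposure risks are the first
# candidates for mitigation planning.
for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in ranked:
    print(f'{r["exposure"]:>3}  {r["risk"]}')
```

A register like this makes the omission visible: a risk that is never entered in the inventory is never scored, and so never receives a mitigation strategy.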
VA established a process for managing the scheduling project’s risks that was consistent with relevant best practices. Specifically, project officials developed a risk management plan for managing risks to the scheduling project. The plan defined five phases of the risk management process— risk identification, risk analysis, risk response planning, risk monitoring and control, and risk review. The plan also defined risk-related roles and responsibilities for the scheduling project staff and tools to be used to capture identified risks, track their status, and communicate them. In addition, project officials captured identified risks to the scheduling project in an automated tracking tool. Examples of risks identified in the tool included the risk that hardware at sites where the system was to be deployed was incompatible with the new application and another related to SwRI’s failure to meet deliverable dates. However, while the department had established a process for managing risks to the scheduling project, it did not have a comprehensive list of risks because it did not take key project risks into account. As previously discussed, we identified problems in VA’s approach to managing the project in four major areas—acquisition management, requirements management, system testing, and earned value management. Nevertheless, VA did not identify as risks its weaknesses in the following three project management practices: (1) using a noncompetitive acquisition approach, (2) conducting concurrent testing and initiation of stage two testing with significant defects, and (3) reporting unreliable project cost and schedule performance information. Any one of these risks alone had the potential to adversely impact the outcome of the project. The three of them together dramatically increased the likelihood that the project would not succeed. 
Since these project management weaknesses were not identified as risks, VA was unable to estimate the significance of their occurrence and decide what steps should be taken to best manage them. Senior project officials indicated that staff members were often reluctant to raise risks or issues to leadership in the Office of Enterprise Development due to the emphasis on keeping the project on schedule. Further, the scheduling program manager recognized that the project management office was inadequately staffed to implement a disciplined risk management process and stated that, in September 2008, a full-time risk manager was added to the staff. As VA continues with its latest scheduling effort, it will be critical that the department identify a comprehensive list of risks so that threats to the project can be detected and mitigated in a timely manner. GAO and OMB guidance call for the use of institutional management processes to control and oversee IT investments. Critical to these processes are activities to track progress of IT projects, such as milestone reviews that include mechanisms to identify underperforming projects, so that timely steps can be taken to address deficiencies. These reviews should track project performance and progress toward predefined cost and schedule goals, as well as monitor project benefits and exposure to risks. Moreover, these activities should be conducted by a department-level investment review board (or comparable entity) composed of senior executives from the IT office and business units with appropriate authority to address issues when projects are not meeting cost, schedule, and performance goals. VA's Enterprise Information Board was established in February 2003 to provide oversight of IT projects through in-process reviews when projects experience problems or variances outside of tolerance levels. 
Similarly, the Programming and Long-Term Issues Board, established in June 2007 as a result of the IT realignment, is responsible for performing milestone reviews and program management reviews of projects. However, between June 2006 and May 2008, the department did not provide oversight of the Scheduling Replacement Project, even though the department had become aware of significant issues indicating that the project was having difficulty meeting its schedule and performance goals. Specifically, in June 2006, the project team found that a delivery of software from SwRI included over 350 defects, leading the office to delay the system deployment by 9 months, from October 2006 to July 2007, to mitigate the defects. A May 2007 report from an independent contractor stated that VA’s project management team did not have a clear understanding of the status of the project in terms of progress being made on those defects. Further, a July 2007 review by the Software Engineering Institute found that a test environment had not been developed and that the schedule for testing did not include sufficient time to identify and correct all infrastructure issues. Based on the results of these reviews, the project management office recommended the project be stopped and reevaluated before moving forward. Despite indications of problems with the project, neither the Enterprise Information Board nor the Programming and Long-Term Issues Board conducted reviews between June 2006 and May 2008 that could have identified corrective actions for the Scheduling Replacement Project. In June 2008, the Director of the Office of Enterprise Development requested an operational test readiness review of the replacement scheduling application by the Programming and Long-Term Issues Board to determine if the application was ready for deployment. That review identified issues, including significant critical defects in the application and a lapse in a contract to resolve defects. 
According to the chairman of the Programming and Long-Term Issues Board, it did not conduct reviews of the scheduling project prior to June 2008 because it was focused on developing the department’s IT budget strategy. In June 2009, VA’s Assistant Secretary for Information and Technology, who serves as the department’s CIO, began establishing a new process for planning and managing its IT projects—the Program Management Accountability System (PMAS). According to the CIO, this process is intended to promote near-term visibility into troubled programs, allowing the department to take corrective actions earlier and avoid long-term project failures. PMAS is expected to improve oversight of IT projects through strict adherence to project milestones and imposing strong corrective measures if a project misses multiple milestones. According to the CIO, under PMAS, projects will be expected to deliver smaller, more frequent releases of new functionality to customers. In addition, specific program resources and documentation are to be in place before development begins, and approved processes are to be used during the system development life cycle. This approach is intended to ensure that customers, project members, and vendors working on a project are aligned, accountable, and have access to the resources necessary to succeed before work begins. For a program to be approved for investment under PMAS, the program must have, among other things, an established customer sponsor, a qualified incremental program plan, requirements for three delivery milestones, and documented success criteria. According to the HealtheVet Scheduling program manager, the department expects to develop plans for the new scheduling initiative, required under PMAS, once a strategy for the initiative is selected. 
However, the department has not yet demonstrated that it can sustain the wholesale change in management of IT projects that PMAS represents or that this new approach will be sufficiently robust to prevent or correct weaknesses such as those that contributed to the Scheduling Replacement Project’s failure. Until the department has fully established and effectively implemented the project management controls that are expected to be a component of PMAS, it remains to be seen whether this new approach will be effective in providing oversight to ensure the success of the department’s new scheduling effort. While the Scheduling Replacement Project was one of many components of VA’s HealtheVet initiative, the impact of the project’s termination on the initiative is currently unclear. The impact is unclear because the relationships (i.e., interdependencies) among the various projects under HealtheVet have not been determined. As described in VA’s budget submission for 2011, HealtheVet is the most critical IT development program for medical care, and is expected to enhance and supplement the legacy VistA system using highly integrated health care applications, such as the capability to schedule outpatient appointments. However, the department’s efforts have not yet resulted in a finalized plan that outlines what needs to be done and when. As of March 2010, the department had not completed its comprehensive plan and integrated schedule to guide the development and integration of the many projects that make up this departmentwide initiative. According to officials in VA’s Office of Information and Technology, the department plans to document the interdependencies, project milestones, and deliverables in an integrated master schedule as part of a project management plan that is expected to be completed by June 2010. 
In the absence of an overall comprehensive plan for HealtheVet that incorporates critical areas of system development and considers all dependencies and subtasks and that can be used as a means of determining progress, it is difficult to determine how scheduling and other applications will be integrated into this larger HealtheVet system. Likewise, without such a plan, the impact of the terminated scheduling project on the completion of the HealtheVet initiative cannot be determined. After almost a decade of effort, VA has not accomplished what it set out to achieve in replacing its patient scheduling system. A broad range of managerial weaknesses plagued the project from beginning to end and increased the project's risk of failure. Specifically, because the department did not develop and execute an acquisition plan, its acquisition activities were ad hoc and it did not seek to obtain the benefits of competition. Additionally, in defining and managing system requirements, the department did not perform critically important activities such as ensuring that the requirements were complete and sufficiently detailed. Further, the department's decision to concurrently conduct tests contributed to an increased risk that the application would not perform as intended, and its earned value management data did not serve as a reliable indicator of project performance. Moreover, even though the department had a plan and process for managing project risks, it did not identify the key risks previously discussed or take steps to mitigate them. Finally, although the department was aware of major issues with the project through several external reviews, the lack of effective institutional oversight allowed the project to continue unchecked and, ultimately, to fail. Given this situation, the department is starting over and is in the process of analyzing alternative strategies, which will be the basis for a project plan that is to be developed. 
At the same time, the department is instituting a new approach that is intended to manage and control IT system projects and avoid project failures, such as what has occurred with the Scheduling Replacement Project. Finally, while the scheduling system project was to result in the first component of VA's larger HealtheVet initiative to modernize the department's health information system, the specific impact of the project's failure on this initiative is unclear because HealtheVet plans have not been completed. Until the department effectively implements measures that prevent the types of management weaknesses that plagued its earlier efforts, it risks incurring similar weaknesses in its latest scheduling replacement effort, which could again prevent VA from delivering this important capability for serving the health care needs of veterans and their families. To enhance VA's effort to successfully fulfill its forthcoming plans for the outpatient scheduling system replacement project and the HealtheVet program, we recommend that the Secretary of Veterans Affairs direct the CIO to make certain the following six actions are taken:

Ensure acquisition plans document how competition will be sought, promoted, and sustained or identify the basis of authority for not using full and open competition.

Ensure implementation of a requirements management plan that reflects leading practices for requirements development and management. Specifically, implementation of the plan should include analyzing requirements to ensure they are complete, verifiable, and sufficiently detailed to guide development, and maintaining requirements traceability from high-level operational requirements through detailed low-level requirements to test cases.

Adhere to the department's guidance for system testing, including (1) performing testing incrementally and (2) resolving defects of average and above severity prior to proceeding to subsequent stages of testing.
Ensure effective implementation of EVM by making certain that: (1) the EVM reporting systems for the scheduling project are certified for compliance with ANSI/EIA Standard 748 and data resulting from the systems are reliable; (2) the project status reports based on EVM data are reliable in their portrayal of the project's cumulative and current cost and schedule performance; and (3) officials responsible for managing and overseeing the project use earned value data as an input to their decision-making processes.

Identify risks related to the scheduling project moving forward and prepare plans and strategies to mitigate them.

Ensure that the policies and procedures VA is establishing to provide meaningful program oversight are effectively executed and that they include (1) robust collection methods for information on project costs, benefits, schedule, risk assessments, performance metrics, and system functionality to support executive decision making; (2) the establishment of reporting mechanisms to provide this information in a timely manner to department IT oversight control boards; and (3) defined criteria and documented policies on actions the department will take when development deficiencies for a project are identified.

The VA Chief of Staff provided written comments on a draft of this report. In its comments, the department generally agreed with our conclusions, concurred with five of our six recommendations, and described actions to address them. For example, the department stated that it will work closely with contracting officers to ensure future acquisition plans clearly identify an acquisition strategy that promotes full and open competition. In addition, the department stated that its new IT project management approach, PMAS, will provide near-term visibility into troubled programs, allowing the Principal Deputy Assistant Secretary for Information and Technology to provide help earlier and avoid long-term project failures.
The department concurred in principle with one of our recommendations: that it ensure effective implementation of EVM. In this regard, the department noted that PMAS requires monthly analysis and reporting of project performance, in addition to VA’s project status reporting to OMB and the public. However, the department did not describe its actions to ensure the reliability of project performance data and reports, nor did it explain how it would ensure the use of reliable performance data in managing and overseeing the project under PMAS. Unless the department fully addresses this recommendation, VA may not be positioned to reliably detect performance shortfalls and initiate timely corrective actions that could prevent future project failure. The department also provided technical comments, which we have incorporated in the report as appropriate. The department’s written comments are reproduced in appendix II. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of the report to interested congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of our study were to (1) determine the status of the Scheduling Replacement Project, (2) determine the effectiveness of the Department of Veterans Affairs (VA) management and oversight of the project, and (3) assess the impact of the project on VA’s overall implementation of its health information system modernization initiative—HealtheVet.
To determine the status of the Scheduling Replacement Project, we reviewed status briefings on VA’s assessment of alternatives for its new scheduling initiative, as well as the department’s fiscal year 2011 budget submission. We supplemented these reviews with interviews with the scheduling program manager, the Director of the Office of Enterprise Development, and the Veterans Health Administration Enterprise Systems Manager for the project. To determine the effectiveness of the department’s management and oversight of the project, we evaluated its acquisition management, system requirements management, system test management, use of earned value management, management and mitigation of risks, and project oversight and governance processes. To evaluate VA’s approach to contracting for the scheduling system, we reviewed and analyzed program documentation, including the Scheduling Replacement Project acquisition plans, contract task orders, statements of work, sole source justifications, and a contracting white paper to determine the extent to which the agency’s practices were consistent with relevant planning and competition requirements in the Federal Acquisition Regulation. Regarding system requirements management, we compared project requirements management practices described in system requirements documents such as the software requirements specification and project status briefings to recognized requirements management guidance, such as those included in the Software Engineering Institute’s Capability Maturity Model Integration. We also assessed the scheduling project requirements management plan and examined the degree to which it was consistent with leading requirements management practices such as the Software Engineering Institute’s Capability Maturity Model Integration. 
To determine the effectiveness of VA’s test management, we reviewed the department’s guidance for performing system tests and compared project testing activities to this guidance and associated best practices. Specifically, we reviewed documentation of test results to determine the dates testing occurred and the number and severity of defects identified. To review VA’s use of earned value management (EVM) to assess and report project performance, we reviewed Office of Management and Budget Memorandum M-05-23, as well as VA standard operating procedures related to EVM to identify requirements for effective execution of this discipline in assessing project performance. We compared the Scheduling Replacement Project’s approach to EVM with recognized practices as described in GAO’s Cost Estimating and Assessment Guide, such as the American National Standards Institute/Electronic Industries Alliance (ANSI/EIA) Standard 748. We reviewed scheduling project reports on earned value performance that were provided to management to determine the level to which these reports provided complete and meaningful cost and schedule performance trends to department management. To determine the effectiveness of the management and mitigation of scheduling project risks, we consulted industry guidance on risk mitigation and management, including the Software Engineering Institute’s Capability Maturity Model Integration. In addition, we reviewed the scheduling project’s risk management plan and process, including the Scheduling Replacement Project Risk Management plan, and determined the level to which the department’s plans and processes met industry best practices and were executed to identify risks. Further, we examined the department’s risk inventory to determine whether project risks we found during our review had been identified and considered by VA.
To assess the effectiveness of scheduling project oversight and governance, we reviewed GAO guidance on effective project oversight, including our Information Technology Investment Management Framework; analyzed documentation from department oversight entities that existed over the course of the project, including the Enterprise Information Board and the Programming and Long-Term Issues Board; and determined the extent to which these bodies performed effective oversight of the project. In addition to the actions just described, we supplemented our analysis by interviewing cognizant VA and contractor officials including the VA Chief Information Officer, current and former program managers, project team members, representatives from the Veterans Health Administration, the department’s contracting officer for the project, and the Director of the Office of Enterprise Development. To assess the impact of the scheduling project on VA’s overall implementation of its health information system modernization initiative, we reviewed documentation such as briefings from HealtheVet planning meetings and interviewed cognizant officials, including the Medical Care Program Executive Officer in the Office of Information and Technology and the Director of Health Information Systems in the Office of Enterprise Development, about the status of the HealtheVet initiative. We conducted this performance audit at VA headquarters in Washington, D.C., from May 2009 through May 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributions to this report were made by Mark T. 
Bird, Assistant Director; Carol Cha; Shaun Byrnes; Neil Doherty; Rebecca Eyler; Michele Mackin; Lee McCracken; Constantine J. Papanastasiou; Michael W. Redfern; J. Michael Resser; Sylvia Shanks; Kelly Shaw; Eric Trout; Adam Vodraska; and Merry Woo.
|
The Department of Veterans Affairs (VA) provides medical care, disability compensation, and vocational rehabilitation to veterans. The Veterans Health Administration (VHA)--a component of VA--provides care to over 5 million patients in more than 1,500 facilities. VHA relies on an outpatient scheduling system that is over 25 years old. In 2000, VHA began the Scheduling Replacement Project to modernize this system as part of a larger departmentwide modernization effort called HealtheVet. However, in February 2009, VA terminated a key contract supporting the project. GAO was asked to (1) determine the status of the Scheduling Replacement Project, (2) determine the effectiveness of VA's management and oversight of the project, and (3) assess the impact of the project on VA's overall implementation of its HealtheVet initiative. To do so, GAO reviewed project documentation and interviewed VA and contractor officials. After spending an estimated $127 million over 9 years on its outpatient scheduling system project, VA has not implemented any of the planned system's capabilities and is essentially starting over. Of the total amount, $62 million was expended for, among other things, project planning, management support, a development environment, and equipment. In addition, the department paid an estimated $65 million to the contractor selected to develop the replacement scheduling application. However, the application software had a large number of defects that VA and the contractor could not resolve. As a result, the department terminated the contract, determined that the system could not be deployed, and officially ended the Scheduling Replacement Project on September 30, 2009. VA began a new initiative that it refers to as HealtheVet Scheduling on October 1, 2009. As of April 2010, the department's efforts on this new initiative had largely consisted of evaluating whether to buy or custom build a new scheduling application. 
VA's efforts to successfully complete the Scheduling Replacement Project were hindered by weaknesses in several key project management disciplines and a lack of effective oversight that, if not addressed, could undermine the department's second effort to replace its scheduling system: (1) VA did not adequately plan its acquisition of the scheduling application and did not obtain the benefits of competition. (2) VA did not ensure requirements were complete and sufficiently detailed to guide development of the scheduling system. (3) VA performed system tests concurrently, increasing the risk that the system would not perform as intended, and did not always follow its own guidance, leading to software passing through the testing process with unaddressed critical defects. (4) VA's project progress and status reports were not reliable, and included data that provided inconsistent views of project performance. (5) VA did not effectively identify, mitigate, and communicate project risks due to, among other things, staff members' reluctance to raise issues to the department's leadership. (6) VA's various oversight boards had responsibility for overseeing the Scheduling Replacement Project; however, they did not take corrective actions despite the department becoming aware of significant issues. The impact of the scheduling project on the HealtheVet initiative cannot yet be determined because VA has not developed a comprehensive plan for HealtheVet that, among other things, documents the dependencies among the projects that comprise the initiative. VA officials stated that the department plans to document the interdependencies, project milestones, and deliverables in an integrated master schedule as part of a project management plan that is expected to be completed by June 2010. In the absence of such a plan, the impact of the scheduling project's failure on the HealtheVet program is uncertain. 
GAO recommends that the Secretary of Veterans Affairs direct the Chief Information Officer to take six actions to improve key processes, including acquisition management, system testing, and progress reporting, which are essential to the department's second outpatient scheduling system effort. In written comments on a draft of this report, VA generally concurred with GAO's recommendations and described actions to address them.
|
Major capital investments in highways, public transportation systems, waterways, and airports are currently funded, in part, through various taxes and fees on users, such as fuel taxes or sales taxes; landing fees and docking fees; and tolls on certain roads, tunnels, and bridges. However, these revenue-raising instruments do not always provide strong incentives for efficient use of transportation infrastructure. For example, the tax rates on gasoline, which are the same regardless of whether vehicles are traveling during congested or uncongested periods, provide no incentive for travelers to use the infrastructure more efficiently. Similarly, landing fees at airports that are based on aircraft weight help create incentives for airlines to shift to smaller, lighter aircraft providing more frequent service, which results in increased demand for runways at peak times and therefore increased congestion. Due in part to increasing volumes of traffic, as well as these built-in disincentives to the efficient use of the transportation infrastructure, congestion on our nation’s highways, airways, and waterways remains a national problem. On already crowded roadways, passenger vehicle travel is expected to grow by almost 25 percent this decade, and freight movement by trucks may grow by a similar amount. In the nation’s air transportation system, before the terrorist attacks on September 11, 2001, an unprecedented number of delays in commercial airline flights occurred—a substantial part of which were due to airport and airspace congestion, particularly during peak morning and evening hours. At 31 of the nation’s busiest airports, 28 percent of the domestic flights arrived late in 2000. While flight congestion declined significantly with reduced traffic after the attacks, a more robust economy and less public apprehension about flying will likely lead to renewed demands on the air transport system.
At locks on our inland waterways and at major seaports, congestion has also been growing. For example, the U.S. Army Corps of Engineers estimated that 15 key locks would exceed 80 percent of their capacity by 2020 as a result of the expected growth in freight travel, as compared to 4 locks that reached that level in 1999, resulting in significantly increased delay. Numerous methods can be used to address congestion, including building new infrastructure, improving maintenance and operation of infrastructure, and using the existing infrastructure more efficiently through demand management strategies, including pricing mechanisms. Experts with whom we talked said that consideration of a full range of these methods is likely necessary to ease our nation’s transportation congestion. In theory, congestion pricing, as one of these methods, is useful for mitigating the delay costs of congestion. If highway, aviation, and waterway users were charged extra for peak-hour use, some would shift to less busy times, or make other adjustments, thereby alleviating delay at the peak periods. Many other areas of the economy frequently use peak-period pricing mechanisms when demand varies considerably by time of day or season. Electricity providers, for example, often charge higher prices at peak periods and lower prices when demand is reduced. Other industries with common peak-pricing practices include telecommunications, airlines, and hotels and resorts. In addition, Amtrak and some transit systems use peak- period pricing. In theory, using congestion pricing has the potential to enhance economic efficiency, as well as provide other benefits, such as providing market signals that can guide capital investment decisions, and generating revenue to help fund such investment directly from users of the system. There are several approaches to implementing congestion pricing on roads and at airports. 
However, incorporating pricing into our transportation systems involves overcoming several implementation challenges, such as current restrictions on using congestion pricing on our highways and on runways, and equity and fairness concerns. Economists generally believe that charging automobile, truck, vessel, and aircraft operators surcharges or tolls during congested periods can enhance economic efficiency by making them take into account the external costs they impose on others in deciding when, where, and how to travel. In congested situations, external costs are substantial and include increased travel time, pollution, and noise. The goal of efficient pricing on public roads, for example, would be to set tolls for travel during congested periods that would make the price (including the toll) that a driver pays for such a trip equal or close to the total cost of that trip, including external costs. In theory, these surcharges could help reduce congestion and the demand for road space at peak periods by providing incentives for travelers to share rides, use transit, and travel at less congested (generally off-peak) times or on less congested routes. Peak-period pricing may have applicability to other modes as well. For example, congestion pricing for using locks on our nation’s inland waterways might be a way to reduce delays experienced by barge operators. Similarly, congestion pricing at commercial airports—that is, charging higher landing fees during congested periods—would cause aircraft operators, both airlines and general aviation operators, to consider external costs in making their decisions. As a result, there would be incentives to shift some operations to off-peak hours or secondary airports or to provide the same carrying capacity by operating fewer but larger aircraft. 
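In economic terms, the efficient toll described above equals the marginal external cost one additional trip imposes on all other travelers. The following is a minimal sketch of that calculation; the BPR-style delay function, the capacity figure, and the value of travel time are illustrative assumptions, not figures from this report.

```python
# Stylized illustration of the efficient congestion toll: charge each
# driver the delay cost they impose on everyone else. The delay
# function and all numbers here are illustrative assumptions.

VOT = 15.0  # assumed value of travel time, $/hour

def travel_time(q, free_flow=0.5, capacity=2000.0, a=0.15, b=4):
    """Average trip time in hours at flow q (vehicles/hour), BPR-style form."""
    return free_flow * (1 + a * (q / capacity) ** b)

def external_cost(q, dq=1e-3):
    """Delay cost one extra vehicle imposes on the q vehicles already
    on the road: q * d(travel_time)/dq * value of time."""
    slope = (travel_time(q + dq) - travel_time(q - dq)) / (2 * dq)
    return q * slope * VOT

# At a heavily congested flow the efficient toll is substantial...
print(round(external_cost(2400), 2))
# ...while at a light off-peak flow it is close to zero.
print(round(external_cost(600), 2))
```

The sketch makes the report's point concrete: because delay rises steeply with flow, the externality (and hence the efficient surcharge) is large at peak periods and negligible off-peak.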
In addition to increasing the efficiency with which current transportation infrastructure is used, congestion charges may be helpful in guiding capital investment decisions for new facilities. As congestion increases, the delay cost that an additional user of the system causes for other users also increases. If congestion charges are set such that they reflect external costs, then as congestion increases, congestion surcharges will increase. Rising surcharges provide signals of increased demand for specific increases in physical capacity, indicating where capital investment decisions to increase capacity would be most valuable. At the same time, congestion charges will provide a ready source of revenue for local, state, and federal governments and transportation facility operators to fund these investments in new capacity that, in turn, can reduce delays. In some cases and over a longer period, in places where external costs are substantial, and congestion surcharges are relatively high, this form of pricing might influence land-use plans and the prevalence of telecommuting and flexible workplaces. Congestion pricing could be applied to transportation systems in a variety of ways, and there are several possible approaches related to which facilities are priced, how the price is set, and how the toll is collected. In one possible form of congestion pricing for public roads, tolls would be set on an entire roadway or road segment during periods of peak use. In another form, sometimes known as value pricing, peak-period tolls would be set on only some lanes of a roadway, allowing drivers to choose between faster tolled lanes and slower non-tolled lanes. High-occupancy toll (HOT) lanes, under which drivers of single-occupancy vehicles are given the option of paying a toll to use lanes that are otherwise restricted to high-occupancy vehicles, are an example of value pricing. 
Fast and Intertwined Regular (FAIR) lanes is a recent proposal that is another variation of value pricing. Under the FAIR lanes approach, revenues generated from travelers using electronically tolled lanes would be transferred to travelers using adjacent non-tolled lanes on the same roadway. These transfers would be done through electronic transponders in the vehicles using the toll lanes, as well as the non-tolled lanes. Those in the non-tolled lanes would receive a credit equal to 25 to 50 percent of the current effective toll, which could then be used toward public transportation fares or toward the use of the toll lanes on another day. In this way, drivers in the non-tolled lanes would receive compensation for the additional congestion that may result from increased use of those lanes once tolls are placed on other lanes. In a third form of congestion pricing for public roads, known as cordon-based pricing, drivers would be charged a fee for entering a specific area of a city, such as a central business district, at peak hours. Two commonly mentioned methods of applying the concept of congestion pricing at airports are differential pricing and auctions. Under differential pricing, airports would set landing fees higher at times when demand for takeoff and landing slots exceeded their availability, and lower at other times, in effect applying a surcharge for using the system at peak-demand periods. An auction approach would allow airports to periodically auction a fixed number of takeoff and landing slots—equal to the airport’s capacity—to the highest bidders. For example, an airport, in conjunction with the Federal Aviation Administration, could determine its per-quarter- hour takeoff and landing capacity, and a competitive bidding process among carriers could determine fees during each period, which would also result in surcharges for using the system at peak-demand periods. 
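As a rough illustration of the differential-pricing approach for airports, a fee schedule might apply a surcharge to small aircraft only at peak periods. The fee levels below mirror the Port Authority minimums discussed later in this report ($5 off-peak, $25 peak for aircraft with fewer than 25 seats); the peak windows and the function itself are hypothetical.

```python
# Sketch of differential pricing for airport landing fees: a surcharge
# applies at peak-demand periods. Fee levels follow the Port Authority
# of New York and New Jersey's 1968 minimums cited in this report;
# the peak windows are illustrative assumptions.

PEAK_HOURS = set(range(7, 10)) | set(range(16, 20))  # assumed peak windows

def minimum_landing_fee(hour, seats):
    """Minimum takeoff/landing fee ($) by hour of day and aircraft size."""
    if seats < 25 and hour in PEAK_HOURS:
        return 25  # peak surcharge discourages small-aircraft use at peak
    return 5       # off-peak (or larger aircraft) baseline

print(minimum_landing_fee(hour=8, seats=10))   # peak period, small aircraft
print(minimum_landing_fee(hour=13, seats=10))  # off-peak, small aircraft
```

A schedule of this shape gives general aviation operators a direct price incentive to shift to off-peak hours or secondary airports, which is the behavioral response the report describes.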
Congestion pricing tolls could be levied using either a predetermined or variable approach. Under the predetermined approach, drivers would pay tolls that are preset and fixed according to the time of day they travel. In contrast, under the variable approach, drivers would pay tolls that vary according to the level of congestion on an affected roadway. For either approach, the amount of the toll to be levied would likely be set by state or local officials, or other toll facility operators, based on information from roadway usage and traveler surveys. The toll structure may also be influenced by the judgment of the toll facility operators. These tolls could then be adjusted upward or downward based on the use of the toll facility in relation to the optimal flow of traffic on the facility. Electronic methods of collection from users of public roads offer vast increases in efficiency compared to traditional tollbooths, which are labor intensive and relatively expensive to operate, and create congestion as drivers line up to pay their tolls. And, over the past decade, electronic road pricing technology has become more reliable and, as a result, more widely adopted on many toll facilities. According to a report issued by the Transportation Research Board, technologies that are currently used at some toll facilities to automatically charge users could also be used to electronically collect congestion surcharges without establishing additional tollbooths that would cause delays. In application of cordon-based pricing, drivers would typically purchase and display permits that allow them access to the cordoned section of the city before entering. Daily or monthly permits could be differentiated by color and shape for easy enforcement. One challenge in implementing congestion pricing for transportation systems is that, at present, greater use of pricing is limited by statutory restrictions.
For example, tolls are prohibited on the Interstate Highway System, except for roads that already had tolls in place before they were incorporated into the system (e.g., the New Jersey and Pennsylvania Turnpikes) or where exceptions have been made for the implementation of pilot projects. Also, there are a variety of statutory restrictions on landing fees at airports that can limit use of congestion pricing. Landing fees are typically based on aircraft weight and are required to be set at levels designed to recover the historical costs of providing landing services. Costs imposed by congestion and other externalities cannot be considered in the calculation of the cost base and, hence, cannot be recovered in landing fees. Congestion fees, as well as most other types of fees, are also prohibited on the inland waterways because of the Interstate Commerce clause, according to the Army Corps of Engineers. Therefore, addressing some of these restrictions would be necessary to make greater use of congestion pricing. Another challenge involves effectively addressing concerns raised about equity and fairness. Because of this issue, political opposition to using this approach to address mobility challenges has been substantial. One equity concern that has frequently been raised about congestion pricing of public roads has been the potential effects of surcharges or tolls on lower-income drivers. Because a surcharge would represent a higher portion of the earnings of lower-income households, it imposes a greater financial burden on them and, therefore, is considered unfair. The economics literature suggests that these concerns can be mitigated to some degree. For example, proponents of congestion pricing have noted that all income groups could potentially benefit if there is an appropriate distribution of the revenues obtained through congestion pricing. 
These revenues could be used to build new road capacity, given back as tax rebates tilted toward lower-income households, or used in some other way so that, in theory, the net benefits for each income group would exceed its costs. Although equity considerations could potentially be addressed by constructing a congestion pricing system for roads so that all income groups received net benefits, there could still be individuals who would be negatively affected. In theory, the cost of a surcharge or toll would be less for those who could more readily make adjustments to their driving behavior that would allow them to avoid paying the toll. Conversely, drivers who had little flexibility to alter their work schedules to avoid a toll by traveling at off-peak hours could potentially be more affected than workers with such flexibility. Similarly, those whose commuting patterns make it harder for them to form carpools or use transit could also be more affected. The arbitrary nature of these distinctions, as well as opposition from those who find the concept of restricting lanes or roads to people who pay to use them to be elitist, raises fairness concerns and accounts for some of the political opposition to congestion pricing. More generally, there is often opposition to paying a charge to use something that was formerly provided “free.” A number of existing congestion-pricing transportation projects, both here and abroad, show that pricing can influence travelers’ behavior to the point of reducing congestion and thus increasing economic efficiency. For example, value pricing pilot projects in the United States show considerable usage and have provided users with a less congested alternative, thus improving traffic flows and reducing delays. In addition, congestion-pricing mechanisms, in general, have demonstrated that they can generate revenue sufficient to fund their operation and, in some cases, fund investment in transportation alternatives. 
The available evidence also suggests that implementation challenges can be mitigated, although to what extent is not yet clear. A number of the congestion-pricing projects we identified enhanced transportation mobility through improved traffic flows, increased speeds and reduced delays for some users. One way in which some projects have done so is by channeling some drivers into infrastructure that is not being fully utilized even at peak periods. In several locations in the United States, for example, HOT lane projects have been implemented in which vehicles with fewer passengers than would normally be needed to use high occupancy vehicle lanes have been allowed to use such lanes by paying a toll. High occupancy vehicle lanes are generally less congested than other highway lanes, and drivers who use them are thus able to shorten their trip times. The toll for such use varies, increasing during periods of peak congestion. In such HOT lane or value pricing projects in Orange County (as shown in figure 1) and San Diego, California, and Houston, Texas, drivers willing to pay to use the HOT lane saved an average of 12-20 minutes per trip in the peak period. In addition, some projects were able to shift demand on congested infrastructure to less congested time periods. In San Diego, officials were also able to spread out peak period traffic on the toll lanes over a longer period of time by charging a lower toll just before and just after the peak period. In many instances, however, a congested transportation system may have no equivalent to a high occupancy vehicle lane with additional capacity. In these cases, some other congestion pricing models have been used to encourage travelers to shift their behavior, either by traveling at another time or by using alternative transportation modes, such as buses, trains, or carpools. 
For example, in Singapore, London, and Norway, congestion pricing has taken the form of cordon-based pricing, where drivers pay to enter entire regions. These projects have demonstrated significant decreases in the level of congestion on roads in the cordoned area and some significant shifts to other alternative modes, as follows:

In Singapore, the city government instituted a $1 charge in 1975 for private vehicles to enter the central business district in the morning rush hours. Carpools, buses, motorcycles, and freight vehicles were exempted from the charge. The result was an immediate 73 percent decline in the use of private cars, a 30 percent increase in carpools, and a doubling of buses’ share of work traffic.

In London, recent implementation of cordon tolls resulted in traffic decreases of roughly 20 percent, and about a 14 percent increase in the use of buses during the morning commute.

In Trondheim, Norway, cordon tolls produced a 10 percent reduction in traffic at peak times and an 8 percent increase in traffic in off-peak times in the central business district.

Such projects have similarly been used to relieve congestion at crowded airports. In one case, the Port Authority of New York and New Jersey imposed surcharges beginning in 1968 for peak-hour use by small aircraft at Newark, Kennedy, and La Guardia airports. These small aircraft, known as “general aviation” aircraft, were not part of scheduled airline operations. The need to accommodate takeoffs and landings for these aircraft during peak periods was adding to passengers’ delays on scheduled airline flights. The port authority raised the peak-period minimum take-off and landing fees for aircraft with fewer than 25 seats from $5 to $25, while keeping the off-peak fee at $5. As a result of the surcharges, general aviation activity during peak periods decreased by 30 percent. The percentage of aircraft operations delayed more than 30 minutes declined markedly over the same period.
Similarly, in 1988 at Boston’s Logan Airport, the Massachusetts Port Authority adopted a much higher landing fee for smaller aircraft. Like the three New York and New Jersey airports, Logan experienced a large drop-off in use by smaller aircraft. Much of the general aviation abandoned Logan for secondary airports, and delays at Logan dropped. Proponents of congestion pricing have noted that others besides those who can afford to pay congestion pricing costs can share in the benefits through an appropriate distribution of any revenues generated. A part of these revenues will be needed to administer the system—for example, to collect tolls. However, existing projects also contain a few examples of situations in which the revenues generated from congestion pricing have been used to benefit other transportation alternatives. For example, the revenue from the HOT lane project in San Diego has been sufficient not only to pay for toll takers and other administrative expenses, but also to fund the operation of a new express bus service. This has increased travel choices for all area commuters, including lower-income populations. International experiences with congestion pricing have been somewhat more extensive, and revenues generated from congestion tolls have been substantial. In Singapore, only about 12 percent of the revenue generated from its cordon-based tolls has been needed to cover the costs of operation. In Trondheim, Norway, revenues have exceeded capital and operating expenses of the toll facility by five times. Trondheim’s toll facility currently generates about $25 million in profit per year. These profits have been used to enhance the capacity of the entire transportation system, including financing additional road infrastructure as well as subsidizing public transportation facilities and services, and pedestrian and bicycle facilities.
There is some encouraging evidence with regard to mitigating equity and fairness issues in implementing congestion pricing, although the extent to which these concerns can be mitigated is unclear. At least one project we reviewed indicates that implementation of congestion pricing needs to be carefully evaluated as an alternative in some circumstances, because it provides no automatic guarantee of benefits. In Lee County, Florida, the county instituted variable tolls on two bridges based on peak travel periods. The county reduced the toll for using the bridges in off-peak periods. On one bridge, traffic increased during the off-peak period but decreased very little during the peak period. A study from the University of South Florida found that peak-period demand for the bridge was less flexible than demand during off-peak periods. That is, drivers at peak periods may not have had readily available alternatives (commuting at different times, using a different mode of transportation, or taking another route) and therefore had little choice but to use the bridge during the peak period; alternatively, the price of the congestion toll may have been set too low to influence the demand of those users. The example illustrates that a pricing mechanism may not be very effective at reducing peak-period travel if the price is not set properly, or without additional measures that provide travelers with other choices. Although the congestion pricing projects we reviewed produced little evidence of congestion reductions in adjoining lanes or in other alternative routes, they also produced little evidence that congestion increased in the non-tolled lanes or on alternative routes. For example, while the value-pricing projects in California and Texas resulted in less congested alternatives for individuals willing to pay the toll, only one of the projects was able to demonstrate any decreases in congestion on the remaining “free” lanes of the highway. 
In Orange County, California, a study found that opening two new lanes, which were designated as congestion toll lanes, decreased delays on the other “free” lanes from 30-40 minutes to about 12-13 minutes, while traffic remained stable on alternative nearby freeways. However, there is also some evidence that pricing can increase congestion on alternative routes. In Singapore, where the city used cordon pricing, there was deterioration in traffic conditions just outside the cordoned area caused by travelers attempting to bypass it. Such congestion would adversely impact individuals who do not pay the toll or individuals not using the congested facility. However, at least one study said that the costs of increased traffic on alternative routes did not outweigh the benefits of reduced congestion in the cordoned area. There are other encouraging signs in relation to distributional impacts from existing projects, although there is no conclusive evidence on the distributional impacts of congestion-pricing techniques. A report on the value-pricing project in Orange County found that there was significant usage of the toll facility by individuals at all income levels. This demonstrates that low-income individuals also value the time they save, and that some value their time enough to be willing to pay a toll that amounts to a higher percentage of their income than that paid by individuals with greater income. However, in value-pricing pilot projects in Orange County, San Diego, and Houston, those using the toll lanes tended to have higher incomes than those using the adjoining lanes. Experts have noted that tolls might become more acceptable to the public if they were applied to new roads or lanes as demonstration projects, so that tolls’ effectiveness in increasing commuter choices could be evaluated. For example, in the Orange County pilot project, where two new toll lanes were added to the highway, opinion surveys have shown a high rate of public acceptance. 
Other pilot projects in Houston and San Diego have also demonstrated public satisfaction. In addition, recent proposals, such as FAIR lanes and HOT networks, show promise for further mitigating equity and fairness concerns. FAIR lanes, as previously discussed and as proposed in New York, would use revenues generated by the toll lanes to credit users of the adjoining lanes, allowing those users to use the toll lanes on another day at a reduced charge or no charge. The HOT network proposal couples HOT lanes with bus rapid transit initiatives, similar to the experience of the pilot project in San Diego, thereby using the revenues from the tolls to broaden the transportation alternatives available to all commuters, including lower-income populations. Traffic on already congested surface, maritime, and air transportation systems is expected to grow substantially over the next decade. This congestion can be considered a shortage; it occurs when more services—lanes of highway, airport runways, locks on rivers—are demanded than can be supplied at a given time and place. A range of approaches and tools must be applied to solve the pervasive transportation congestion problems that our nation faces in the next decade and beyond. Congestion pricing—although only one of several approaches that can be used to reduce congestion on our nation’s roads, airways, and waterways—shows promise in reducing congestion and better ensuring that our existing transportation systems are used efficiently. Pilot projects and experiences with congestion pricing abroad demonstrate the promise of this approach for reducing congestion and promoting more efficient use of transportation systems by users. Despite this promise, there continue to be concerns over fairness and equity in the application and implementation of congestion pricing, which current projects have not fully alleviated. 
Some proposed projects, such as FAIR lanes, which use revenues generated to compensate other users of the transportation system, could help alleviate some of the fairness and equity concerns that have been raised. Experts suggest and some projects demonstrate that public opposition to congestion pricing will lessen as these projects show that equity and fairness concerns can be mitigated. However, if congestion pricing is to be more widely applied to transportation systems, the Congress will need to ease statutory restrictions on the use of congestion-pricing applications on transportation systems. For further information on this statement, please contact JayEtta Hecker at (202) 512-8984 or [email protected]. Individuals making key contributions to this report include Nancy Barry, Stephen Brown, Jay Cherlow, Lynn Filla Clark, Terence Lam, Ryan Petitte, Stan Stenersen, Andrew Von Ah, and Randall Williamson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation's transportation systems have become increasingly congested, and pressure on them is expected to grow substantially in the future. Most transportation experts think a multifaceted approach is needed to address congestion and improve mobility. One potential tool is congestion pricing, that is, charging users a toll, fee, or surcharge for using transportation infrastructure during certain peak periods of travel. Pilot projects to test this approach are currently under way in the United States and the technique has been used more extensively abroad. Interest in the usefulness of congestion pricing has been growing, as evidenced by several recent proposals. However, there have also been concerns raised about the fairness of such practices to some users of transportation systems. GAO was asked to identify (1) the potential benefits that can be expected from pricing congested transportation systems, approaches to using congestion pricing in transportation systems, and the implementation challenges that such pricing policies pose, and (2) examples of projects in which pricing of congested transportation systems has been applied to date, and what these examples reveal about potential benefits or challenges to implementation. This statement is based on prior GAO reports and other publicly available reports. Congestion pricing can potentially reduce congestion by providing incentives for drivers to shift trips to off-peak periods, use less congested routes, or use alternative modes, thereby spreading out demand for available transportation infrastructure. Congestion pricing also has the potential to create other benefits, such as generating revenue to help fund transportation investment. Possible challenges to implementing congestion pricing include current statutory restrictions limiting the use of congestion pricing, and concerns about equity and fairness across income groups. 
In theory, equity and fairness concerns could be mitigated depending on how the revenues that are generated are used. Evidence from projects both here and abroad shows this approach can reduce congestion. Such projects have also shown they can generate sufficient revenue to fund operations--and sometimes fund other transportation investment as well. However, projects were not necessarily able to demonstrate benefits for the full range of transportation users. For example, those who were able to use the special freeway lane saw a decrease in travel time. But, in some cases, there was little systemwide reduction in travel times, and congestion increased on alternative routes. Nonetheless, there is some evidence that equity and fairness concerns can be mitigated. Some projects have shown substantial usage by low-income groups, and other projects have used revenues generated to subsidize low-cost transportation options. In addition, some recent proposals for refining congestion-pricing techniques have incorporated further strategies for overcoming equity concerns. For example, the Fast and Intertwined Regular (FAIR) lanes proposal in New York suggests crediting users of the non-tolled lanes to partially pay for them to use public transportation, or to use the express lanes on other days.
CNCS provides grants and technical assistance to organizations throughout the United States to strengthen communities and foster civic engagement. In fiscal year 2015, CNCS received appropriations totaling over $750 million to fund a variety of grant programs, as shown in table 1. CNCS grants are typically for multiple years in duration. The National and Community Service Trust Act of 1993 created CNCS and established the AmeriCorps State and National grant programs. The act also gave the agency responsibility for administering VISTA and Senior Corps. The Serve America Act, enacted in 2009, gave CNCS responsibility to administer several newly established programs, including the Social Innovation Fund. This act also directed CNCS to focus more on evaluating its programs’ performance, and in 2012 we reported on the extent to which CNCS-funded activities were covered by its performance measures and on performance measurement challenges. In addition, the Serve America Act generally requires that grantees conduct criminal history checks on volunteers and program employees in national service programs. Criminal history check regulations have been in effect since November 2007 and were expanded after the enactment of the Serve America Act to all national service programs. Beginning April 21, 2011, the law generally required that entities conduct three-part checks—including Federal Bureau of Investigation, statewide registry or repository, and sex offender registry checks—on individuals who will have recurring contact with vulnerable populations. CNCS’s Chief Risk Officer, Chief of Program Operations, and Chief Financial Officer share responsibility for assessing and monitoring the agency’s grants. Program officers and grant officers implement grant monitoring activities and review grant applications. 
Program officers, located in program offices overseen by the Chief of Program Operations, focus on issues related to grantee performance and compliance with program objectives. Grant officers, overseen by the Chief Financial Officer, focus on grantees’ financial issues and performance. (See fig. 1.) Federal law requires federal agencies administering programs identified as susceptible to improper payments to estimate the improper payments made by those programs and report annually on their efforts to reduce improper payments. An improper payment is defined by statute as any payment that should not have been made or that was made in an incorrect amount (including overpayments and underpayments) under statutory, contractual, administrative, or other legally applicable requirements. In addition, the Office of Management and Budget’s (OMB) guidance instructs agencies to report as improper payments any payments for which insufficient or no documentation was found. Since fiscal year 2012, the OIG has reported annually that the agency faced challenges complying with improper payment laws. For example, the OIG found that CNCS did not complete valid fiscal year 2015 improper payment assessments for two of its programs: Senior Companion Program (a Senior Corps program) and Social Innovation Fund. CNCS program and grant officers are responsible for implementing the agency’s grant monitoring process, which includes activities from the time before the grant award is made to when the grant is closed out, as shown in figure 2. Pre-Award. According to CNCS policy, during the pre-award phase (before CNCS makes a grant award), grant officers are to assess the applicant’s financial management capabilities and other aspects, such as whether the grantee has any open audit findings on any current or prior grants. CNCS officials generally are to perform these reviews before making an award to a new or current grantee to determine if the grant should be made. Annual Assessment. 
To implement the annual assessment (performed between August and October), CNCS personnel are to take the following steps:

Step 1: Determine which grants will be assessed. The universe of grants to be assessed is to include all grants that are active at the time CNCS is ready to begin the assessment, typically in mid-August of each year, and that are expected to be active during the following fiscal year.

Step 2: Assess each grant in the universe identified in step 1 for potential vulnerabilities related to program compliance, financial weakness, or other issues.

Step 3: Rate each grant as a high-priority, medium-priority, or low-priority for various monitoring activities, based on the assessment in step 2.

Step 4: Prepare an annual monitoring plan that identifies which grants will receive monitoring visits, desk reviews, or financial reviews during the coming year, based on the ratings in step 3. The plan is to be prepared by the end of October.

More specifically, during step 2 of the annual assessment phase, CNCS program and grant officers are to jointly assess each grant on 19 criteria that are intended to reflect potential vulnerabilities related to program compliance, financial weakness, or other issues. They are to enter their responses into eGrants, which applies certain weights to each criterion and calculates a score for each grant, up to a possible total of 760 points. During step 3, CNCS officials group grants into three categories based on the grant’s total score to determine the grant’s monitoring priority: high-priority (150 points or more), medium-priority (80 to 149 points), and low-priority (0 to 79 points). According to CNCS policy, each grant recipient must generally receive a compliance visit by either the program or grant officer every 6 years, so the grant is to be rated high-priority for monitoring if 5 years or more have elapsed since the last visit. 
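The scoring and rating logic described above can be sketched in a few lines. In this sketch, the priority cutoffs (150 and 80 points) and the 150-point weight for time since the last compliance visit come from the report; the other indicator names and weights are illustrative assumptions, not CNCS's actual criteria.

```python
# Hypothetical sketch of the annual-assessment scoring. Only the priority
# cutoffs (high >= 150, medium 80-149, low 0-79) and the 150-point weight
# for "time since last compliance visit" are stated in the report; the
# remaining indicator names and weights are illustrative assumptions.
INDICATOR_WEIGHTS = {
    "five_plus_years_since_compliance_visit": 150,
    "other_key_concerns_and_challenges": 80,
    "prohibited_activities": 30,
    "financial_competency": 20,
}

def score_grant(flagged_indicators):
    """Sum the weights of all indicators flagged 'yes' for a grant."""
    return sum(INDICATOR_WEIGHTS[name] for name in flagged_indicators)

def priority(score):
    """Map a total score to a monitoring-priority category."""
    if score >= 150:
        return "high"
    if score >= 80:
        return "medium"
    return "low"

# A grant overdue for its 6-year compliance visit is high priority by itself:
print(priority(score_grant(["five_plus_years_since_compliance_visit"])))  # high
```

Note that in this structure a single heavily weighted indicator can dominate the rating, which is the dynamic the report goes on to critique.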
Also, if a grant receives a high-priority rating, CNCS program officers must generally conduct a compliance visit, or an on-site training and technical assistance visit in the upcoming fiscal year. If a grant receives a medium- or low-priority rating, CNCS program officers can conduct a visit, with supervisory approval, if they determine that it could help address known issues or help prevent future problems. To illustrate CNCS’s process, we reviewed CNCS grant data for fiscal year 2015 (the most recent complete fiscal year of data available at the time of our review). We found that in August 2014, CNCS had identified a universe of 2,188 grants that were active at the time and expected to be active during fiscal year 2015, and had assessed these grants to plan its monitoring activities for fiscal year 2015. As shown in figure 3, CNCS rated about 14 percent of these grants overall as high-priority for monitoring, with some variation across the agency’s grant programs. For example, 17 percent of AmeriCorps State and National grants included in the assessment were rated high-priority, compared to 9 percent of VISTA grants. Finally, during step 4 of the annual assessment phase, CNCS officials are to set the monitoring plan—which consists of compliance and other types of visits, desk reviews, and financial reviews (such as drawdown analyses)—for the fiscal year. A compliance site visit occurs on-site where the grantee does its work, and reviews a wide range of compliance issues following a structured protocol. 
Other types of visits that may be included in the monitoring plan include:

- follow-up visits, conducted to follow up on a previous compliance site visit, desk review, or targeted/issue-based site visit;
- targeted/issue-based site visits, conducted to address specific issues, such as observing CNCS providers delivering training/technical assistance services; and
- training and technical assistance visits, conducted when the grantee is a new grant recipient or there is a new program director, among other situations.

Desk reviews also address compliance issues and can be targeted or comprehensive in scope. For example, a program officer may conduct a desk review of a grantee’s AmeriCorps position description to ensure the description complies with legal requirements and agency policy. Drawdown analyses are financial reviews that are conducted to determine whether certain grantees are drawing down their funds in a timely manner and whether the rate at which they draw down their funds is consistent with the length of time (period of performance) for their award. For example, if a grant has completed 50 percent of its period of performance, then one would expect to see about 50 percent of funds drawn down. Monitoring. During the monitoring phase (performed between October and August), program and grant officers generally are to implement the monitoring activities described in the plan, but can either add or omit an activity with supervisory approval for a number of reasons. For example, program and grant officers may consider adding a monitoring activity for a grant after the initial monitoring plan is set, based on issues such as OIG findings suggesting that the grantee might be spending grant funds inappropriately. Also, CNCS officials told us there are instances when a compliance visit is canceled, which creates an opportunity to add a grant to the site visit schedule. 
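The drawdown analysis described above is simple arithmetic: compare the share of the award drawn down against the share of the period of performance that has elapsed. A minimal sketch, using hypothetical dollar amounts and dates:

```python
# Minimal sketch of a drawdown analysis; the grant amounts, dates, and any
# tolerance for flagging a gap are illustrative assumptions, not CNCS policy.
from datetime import date

def drawdown_shares(drawn, awarded, start, end, as_of):
    """Return (expected_share, actual_share) of funds drawn down."""
    elapsed = (as_of - start).days / (end - start).days
    return elapsed, drawn / awarded

expected, actual = drawdown_shares(
    drawn=100_000, awarded=400_000,
    start=date(2014, 10, 1), end=date(2016, 9, 30), as_of=date(2015, 10, 1),
)
# Halfway through the period of performance, only 25 percent of funds have
# been drawn down; a reviewer might flag this gap for follow-up.
```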
To illustrate, our analysis of CNCS grant data for fiscal year 2015 indicates that the initial monitoring plan called for 358 grants to receive a compliance visit; by the end of the fiscal year, 385 had been conducted (see table 2). Most compliance visits were conducted with Senior Corps and VISTA grants (programs with the highest number of grants). When conducting a compliance visit, CNCS personnel—typically the program officer—are to use a standard protocol to interview grantee staff regarding compliance with program regulations and policy, including financial accounting for grant funds. During these interviews, CNCS personnel also are to review grantee documentation indicated by the protocol to verify compliance. For example, program officers may ask grantee staff how they are complying with the criminal history background checks required by the Serve America Act, and may ask for documentation such as reports received from law enforcement agencies. Close-out. Finally, the close-out phase is to take place at the end of a grant’s performance period. During this phase, CNCS officials are to close out the grant by reviewing the recipient’s transactions and expenditures reports, and reconciling them with records on the amounts disbursed under the grant. Grantees may also submit a final programmatic report of activities under the grant. Improper payments. CNCS has a separate process for estimating improper payments, although many of the items reviewed during traditional grant monitoring are also reviewed during improper payment reviews. To determine the extent of improper payments being made through its grant programs, CNCS’s process calls for selecting a random sample of Federal Financial Reports (the standard form approved by OMB that federal agencies use to collect financial information) from each program. The sampling is conducted such that records with a higher dollar value have a proportionally higher chance of being included. 
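The dollar-weighted sampling of Federal Financial Reports described above can be illustrated as follows. This is a simplified sketch (sampling with replacement via `random.choices`), not CNCS's actual statistical sampling plan, and the report identifiers and dollar values are made up.

```python
# Illustrative sketch of sampling records with probability proportional to
# dollar value, so higher-dollar reports are more likely to be selected.
# Simplification: random.choices samples with replacement, whereas a formal
# sampling plan would typically sample without replacement.
import random

def sample_reports(reports, k, seed=0):
    """reports: list of (report_id, dollar_value). Draw k weighted picks."""
    rng = random.Random(seed)
    ids = [report_id for report_id, _ in reports]
    weights = [value for _, value in reports]
    return rng.choices(ids, weights=weights, k=k)

# Hypothetical reports: FFR-1 is 20x more likely per draw than FFR-2.
reports = [("FFR-1", 1_000_000), ("FFR-2", 50_000), ("FFR-3", 10_000)]
print(sample_reports(reports, k=2))
```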
For each sampled Federal Financial Report, CNCS selects transactions and reviews and tests related documentation in place at the time of the payment. Because most of CNCS’s grant funds go to personnel-type costs, CNCS primarily evaluates payments by grantees to individuals working for or serving with that grantee, and verifies the eligibility of the individual to receive those payments according to law, based on the documentation produced at the time of the review. If all the required documentation is not provided, the payment is considered an improper payment in the estimation process. For example, in fiscal year 2015, based on documentation provided during the improper payment review, the AmeriCorps State and National program was estimated to have $14.5 million in improper payments, mostly due to lack of documentation confirming that criminal history checks were completed prior to making 15 payments to individuals. However, subsequent to the review, CNCS confirmed that in several cases, individuals receiving payments were fully eligible at the time of payment even though the criminal history check was not yet complete, and that although some individuals received payment prior to their documentation being provided, no payments were made to ineligible individuals. In a 2016 report, the OIG recommended that CNCS take action to improve its methodology and reporting on improper payments, as well as to implement procedures to hold grantees accountable for providing documentation. In response, CNCS said it would take several steps to improve its process for estimating improper payments, such as updating its statistical sampling plan, revisiting the improper payment testing and reporting approach, and developing more training. In addition, as of fiscal year 2016, responsibility for conducting reviews and other activities pursuant to improper payment laws was moved to the newly established Office of the Chief Risk Officer (OCRO). 
Created in fiscal year 2016, the OCRO is now responsible for overseeing and collaborating with agency program and grant offices to develop and implement CNCS policies, procedures, and guidance related to the agency’s risk framework, and to coordinate the development and implementation of documentation and reporting processes, including the improper payment review. The OCRO is also responsible for developing and delivering select training and providing technical guidance and support to CNCS staff regarding the implementation of the annual assessment. In addition, according to CNCS officials, further changes to the grant monitoring process are anticipated as the agency implements Enterprise Risk Management (ERM). In 2014, OMB recommended that agencies consider adopting ERM, which is an approach for addressing the full spectrum of risks and challenges related to achieving the agencies’ missions. In 2016, OMB issued the revised Circular No. A-123, which required that agencies begin to implement ERM in fiscal year 2017. CNCS’s grant monitoring process includes efforts to identify and mitigate risks but does not fully align with relevant internal control principles for risk assessment, control activities, and monitoring (see fig. 4). Specifically, the agency’s annual assessment process may not result in the riskiest grants receiving a high-priority rating for monitoring because of limitations in its scoring model. Also, the annual assessment process does not ensure all grants are included. Further, the agency’s monitoring of grantees’ oversight of subrecipients is limited. Finally, CNCS has not systematically evaluated its monitoring efforts to identify opportunities to improve its assessment of and response to risks. The indicators used in CNCS’s scoring model have limitations: they may result in the riskiest grants not receiving a high-priority designation for monitoring, and they do not meaningfully cover all identifiable risks, such as fraud and improper payments. 
This is largely because the scoring model is designed to support CNCS’s monitoring policy by identifying grants due for a monitoring visit, rather than to specifically assess risk and use that information to drive prioritization of monitoring activities. To this end, we found that CNCS’s process for assessing and monitoring its grant portfolio has a number of limitations that prevent it from being fully aligned with the internal control standard stating that management should identify, analyze, and respond to risks related to achieving the defined objectives. First, we found that some indicators that are not based on risk are given considerable weight in the rating process, while others that are based on risk are given much less weight—far less than the 150 points needed to be designated high priority for monitoring. As shown in figure 5, only one indicator is given sufficient weight on its own to result in a grant being assigned a high priority for monitoring: “time since last on-site compliance visit” (150 points). According to CNCS guidance, this indicator supports the agency’s efforts to ensure compliance monitoring visits are conducted every 6 years, in accordance with agency policy. Agency guidance further states that this indicator is important because potential vulnerabilities may increase as time between visits lengthens. It is important to conduct monitoring visits periodically as required; however, as noted by one officer we spoke with, the amount of time elapsed since the last visit does not necessarily indicate risk. In contrast, the indicator for “prohibited activities” is given much less weight (30 points), even though prohibited activities constitute a significant risk because they are an unallowable use of federal funds. 
As a result, a check for this indicator may only result in a grant being assigned medium-priority for monitoring and, according to CNCS policy, a monitoring visit or desk review is not required for a grant that receives a medium-priority rating. Of the 25 grant awards with “yes” on the prohibited activities indicator in fiscal year 2015, nearly two-thirds were designated as medium priority, and 17 of the 25 did not receive a compliance visit or other monitoring activity that year. Similarly, the indicator for “financial competency” is given even less weight (20 points). A grant would get a “yes” on this indicator if a bankruptcy filing had been made or an intent to file had been announced, or if another federal or state agency had notified CNCS regarding a weakening of an organization’s financial competency. However, a “yes” on this indicator only contributes 20 points to the 150 needed to be designated high priority for monitoring. Second, we found that several potential risk factors were included in a single indicator: “other key concerns and challenges” (80 points). According to CNCS’s scoring model, a grant would receive a score for this indicator only once, even if it demonstrated the potential for multiple risks. For example, this indicator includes open compliance findings, improper payment findings, and the potential for financial management problems. This indicator also includes any findings from the pre-award review, which CNCS conducts under federal grants management guidance established by the Office of Management and Budget (OMB), referred to as the Uniform Guidance. A grant that receives a “yes” for this indicator in the scoring model would not receive a high-priority designation on its own, regardless of the severity of the risk or how many of these concerns are noted. 
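The arithmetic behind this limitation is straightforward: even a grant flagged on all three of these risk-based indicators, using the weights stated in this report, falls short of the high-priority cutoff.

```python
# Weights stated in the report: prohibited activities (30), financial
# competency (20), "other key concerns and challenges" (80). The 150- and
# 80-point cutoffs are also from the report.
score = 30 + 20 + 80
rating = "high" if score >= 150 else "medium" if score >= 80 else "low"
print(score, rating)  # prints: 130 medium
```

A grant carrying all three risk flags would thus be rated only medium-priority, for which no monitoring visit or desk review is required.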
Several CNCS grant and program officers we spoke with noted that it may be more useful if some of these potential risks stood alone rather than being grouped together in a single indicator. For example, to give improper payment findings greater weight, one officer suggested there should be an indicator based solely on these findings. Other officers suggested that compliance findings regarding criminal history checks should be elevated to an indicator of their own, to flag potential or past compliance problems in this area. Another officer suggested that compliance findings regarding concerns with supervision of volunteers should be highlighted through an indicator of their own, because such findings can indicate problems with prohibited activities. Third, we found that some indicators may not be calibrated effectively to capture risk. For example, CNCS’s scoring model includes an indicator that identifies whether a grantee has problems with volunteer retention (“participant enrollment and retention”), but this indicator is only marked “yes” if retention is below 50 percent for 1 year, or 75 percent for 2 years. AmeriCorps State and National program office staff we interviewed told us that 50 percent retention is considered very low, and they would prefer to intervene before performance had dropped to this level. Lastly, we found that several indicators are too frequently applicable to be useful in distinguishing relative risk among grants. For example, 4 of the 19 indicators were checked “yes” for more than a quarter of the grants assessed, which could indicate that they have minimal impact in distinguishing among grants to determine their priority status for monitoring. One indicator—“multiple awards”—was checked for nearly half the grants assessed. In addition to these limitations in the scoring model, CNCS has not clearly documented its assessment scoring process, although internal controls suggest that documentation could contribute to the effectiveness of this activity. 
It is unclear why CNCS has defined its indicators and assigned their weights the way it has, and outcomes from this process are not well documented. CNCS has a monitoring workgroup that is charged with, among other duties, reviewing the indicators used for the annual assessment and determining whether the point value for each indicator needs to be changed. However, CNCS does not have documentation on the criteria used for selecting the 19 indicators or determining their weights. Decisions on changes to the scoring are also not well documented, and it is unclear how the group decides which indicators are most important. CNCS officials said that the original documentation on indicator selection was not maintained because it was outside the records retention time frame, and in recent years, limited staff capacity contributed to the agency not documenting its decisions on changes to the indicators. Improving documentation on the rationale for CNCS’s decisions on indicators and scoring could help the agency revise its indicators in the future to improve their relevance and effectiveness. CNCS has begun various efforts that could lead to improvements in the scoring model used to support the agency’s assessment process, but all of these efforts are in the early stages of development and their ultimate effect is not yet clear. For example, according to the agency’s first Chief Risk Officer, who came on board in April 2016, the OCRO is undertaking an effort to benchmark CNCS’s assessment criteria and process against other federal agencies and programs with similar grantee profiles (that is, the agency or program funds grantees with varying levels of financial, administrative, and staff capacity). As of July 2016, the OCRO had gathered information from six federal programs and planned to contact several more. 
In addition, officials said that the Field Financial Management Center, one of CNCS’s two grant offices, had also begun a pilot to develop additional indicators of risk, based on a review of the past performance of 10 Senior Corps grants and analysis of related data. They said they hope that the results of the pilot will inform future changes to the assessment process, but it is too early to tell how relevant the results will be to other programs. Finally, officials also told us that, as part of the agency’s plan for implementing Enterprise Risk Management (ERM) under OMB’s revised Circular A-123, CNCS held listening sessions with senior management in spring 2016 to gather their perspectives on key agency risks and began conducting similar sessions with CNCS staff in fall 2016. Officials said that they plan to use this information to create an agency-wide risk profile. Also, in response to the focus on fraud in OMB’s revised circular, CNCS had included fraud as a topic in its listening sessions with senior management. However, it remains to be seen to what extent the agency-wide identification of these top risks, including the potential for fraud, will result in changes to the assessment indicators for grant monitoring. As of September 2016, CNCS had not included any indicators of fraud risk in its assessment process. CNCS does not have a policy to ensure all grants, including those awarded after the annual assessment is complete, are assessed for potential risk in the current year. Excluding grants from this process, even temporarily, limits CNCS’s ability to identify and analyze the significance of certain risks in its grant portfolio, which is inconsistent with internal controls for risk assessment and control activities. As described earlier, CNCS’s policy and procedures call for its assessment process to begin in August each year to guide its monitoring activities for the fiscal year that will start in October. 
CNCS determines the universe of grants to be assessed and uses its tool of 19 indicators to assign a priority rating category for monitoring the grants in this universe. CNCS officials said that new grants are particularly vulnerable to being omitted from the assessment process, as these grants tend to be finalized just before the beginning of the new fiscal year, when the annual assessment may already have been completed. However, without including new grants in the annual assessment process, CNCS cannot identify and analyze the significance of their risks and use this information to prioritize these grants for monitoring activities, such as a site visit. As noted by one program officer we spoke with, an initial visit to a first-time grantee can identify issues that would not otherwise have been raised, and can prevent future problems. Officials acknowledged that the agency does not have a policy regarding how and when to assess grants made after the annual assessment process is conducted in August, and that in practice, these grants are not assessed using its tool of 19 indicators until the following fiscal year’s assessment process if the grant is expected to be active for a second year. This was the case with one grant included in our selected review of eight grants with negative outcomes. The grant, funded at over $200,000 in each of 3 years, was awarded on September 30, 2012, but was not assessed until the summer of the following year. Once assessed, it was deemed high priority, but the grantee subsequently relinquished it in light of noncompliance findings. Had the grant been assessed as high priority in its first year, and had it been assigned monitoring activities accordingly from the beginning, the compliance problems might have been avoided. Due to data limitations, we were unable to determine the extent to which new grants were not included in the annual assessment and received no monitoring in fiscal year 2015. 
However, using data on monitoring activities, we identified 44 newly awarded grants that were not assessed but nevertheless had received monitoring. It is unclear how CNCS determined that these grants, but not others, warranted monitoring. Without establishing and implementing a policy to ensure that all grants are assessed for potential vulnerabilities in the current year, CNCS may not be using its monitoring resources most effectively by focusing on the highest-risk grants. CNCS conducts limited monitoring of grantees’ oversight of their subrecipients, despite the large amount of grant dollars involved and evidence indicating that subrecipient oversight is a key risk area. According to CNCS, two of CNCS’s largest programs, AmeriCorps State and National and the Social Innovation Fund (SIF), allow their grantees to have subrecipients, involving a significant amount of grant funds—almost half of CNCS’s total grant budget in fiscal year 2015. Data included in the agency’s fiscal year 2015 financial report show that CNCS awarded about $300 million in grants to state commissions that subgrant the funds to organizations in the states to run AmeriCorps programs. Similarly, the report states that CNCS awarded the entire portfolio of SIF grants, about $70 million, to intermediaries that are required to make subawards to other organizations. Subrecipient oversight also has been identified as a key risk area. For example, in prior work, GAO has concluded that effective practices for overseeing subrecipients are a critical element of ensuring grant funds are used for intended purposes. OMB’s Uniform Guidance established oversight requirements for pass-through entities, such as CNCS’s state commissions, that provide funds to other organizations. CNCS conducts monitoring activities to review grantees’ compliance with these and other requirements. 
However, an OIG investigation identified concerns about CNCS subrecipients, indicating that additional subrecipient oversight may be needed. Among the selected program and grant officers we interviewed (who were responsible for monitoring the eight grants in our nongeneralizable sample of grants with negative outcomes), four identified problems with grants resulting from issues with subrecipients. For example, one CNCS officer told us about a grantee that lost nearly half of its subrecipients over 2 years (from 11 subrecipients to 6); another said the grantee could not implement its program because of problems with subrecipients. To monitor their grantees’ oversight of subrecipients, CNCS programs with subrecipients have developed provisions for subrecipient oversight as part of their programs’ standard protocols. However, we found these provisions to be limited in certain areas, such as criminal history checks (see table 3), even though, in fiscal year 2015, CNCS reported that nearly all of its estimated reportable improper payments stemmed from problems with conducting or documenting criminal history checks. For example, the AmeriCorps State and National program monitors a grantee’s oversight of subrecipient compliance with criminal history checks by reviewing a total of 25 volunteer and staff files, but there is no requirement to select the files based on the number of subrecipients or the size of the grant. Officials told us that the largest AmeriCorps State grantee had five grants with a total of 55 subrecipients. Further, the SIF program’s protocol requires review of 3 employees from a minimum of 3 different subgrantees or subrecipients to gauge compliance; however, the SIF program has had one grantee with as many as 47 subgrantees, and four grantees with 20 or more subgrantees. As a result, the monitoring approach of these CNCS programs may cover only a small portion of their grantees’ subrecipients in some cases. 
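One way to see the coverage gap is to compare a fixed 25-file sample against an allocation that scales with the number of subrecipients. The sketch below is purely illustrative, a hypothetical allocation rule constructed for this example; it is not part of CNCS’s protocols, and all names are invented.

```python
import math

def allocate_file_sample(files_per_subrecipient, total_sample=25, min_per_sub=1):
    """Hypothetical rule: spread a file-review sample across subrecipients
    in proportion to each one's file count, with a floor of min_per_sub
    files each (capped by what the subrecipient actually has).
    Illustrative only; not CNCS's actual protocol."""
    total_files = sum(files_per_subrecipient.values())
    allocation = {}
    for sub, n_files in files_per_subrecipient.items():
        proportional_share = math.ceil(total_sample * n_files / total_files)
        allocation[sub] = min(n_files, max(min_per_sub, proportional_share))
    return allocation

# With 55 subrecipients (as the largest AmeriCorps State grantee had),
# a flat 25-file sample cannot touch every subrecipient, while a floor
# of 1 file per subrecipient would require at least 55 files.
fifty_five_subs = {f"sub{i}": 20 for i in range(55)}
print(len(allocate_file_sample(fifty_five_subs)))  # 55
```

Under a rule like this, the sample grows with the subrecipient count, which is the scaling the report notes is absent from the fixed 25-file requirement.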
In addition, both programs check that the grantee has a plan for overseeing its subrecipients, but the AmeriCorps State and National program protocol does not require a review of the results of these activities, such as subrecipient progress reports or monitoring findings. CNCS officials acknowledged that ensuring grantees are overseeing subrecipients is an ongoing challenge. One officer we interviewed suggested that it could be helpful for CNCS to obtain more information from its grantees on how they are managing their subrecipients. Gathering additional information about subrecipients by improving monitoring protocols could help CNCS ensure that its grantees are overseeing subrecipients appropriately. CNCS officials said they have some plans to update monitoring protocols to address certain risks, but did not provide time frames for doing so, and it is unclear whether all areas of subrecipient oversight will be addressed. CNCS has not systematically evaluated the results of its monitoring activities, as called for by internal controls. Officials said that the agency’s monitoring workgroup holds an annual discussion of the year’s monitoring activities, but CNCS does not compile monitoring findings, such as the types of compliance problems or errors identified, systematically across all programs. Moreover, CNCS has not used its data systems to summarize monitoring findings for trend analyses or to evaluate opportunities for improving its monitoring efforts. A senior official said the agency had not conducted agency-wide analyses of its monitoring efforts because eGrants does not provide reports that include both assessment information and monitoring results for each grant, and because the agency has had limited staff capacity to manually analyze these data given the limitations in reporting. 
In addition, officials said that CNCS does not have standardized reporting or longitudinal data to facilitate evaluation of the effectiveness of CNCS’s monitoring efforts, although they acknowledged that both would be helpful. With respect to assessment information, the “yes” or “no” data captured in eGrants from the annual assessment process for each of the 19 indicators cannot be easily analyzed for trends. For example, for the indicator on grantee volunteer retention, the eGrants system records a “yes” or “no” for each grant depending on whether the grantee’s retention is above or below the indicator’s threshold, rather than capturing the actual percentage of volunteers retained. With respect to monitoring results, each program office has a monitoring tool to complete when conducting on-site monitoring, but these tools are not integrated with eGrants, so their results cannot be analyzed within that system. As a result, some program officers we spoke with described manual approaches they sometimes used to assess common findings from their monitoring activities within their programs. For example, they have used compliance visit letters—sent to grantees to summarize compliance visit findings—to determine the effectiveness of monitoring and track trends. CNCS officials told us that, as of November 2016, the agency was developing the requirements for the monitoring component of the agency’s new IT system. In developing these requirements, CNCS has the opportunity to provide additional functionality to support evaluation of its assessment and monitoring efforts. However, in the meantime, a more formal effort to summarize monitoring results across all programs could help identify trends and areas for improvement. 
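The trend-analysis limitation described above can be illustrated with a small sketch. The data and field names here are hypothetical; the point is only that a stored yes/no flag discards the underlying value that a trend analysis would need.

```python
# Hypothetical three-year history for one grant. A yes/no indicator field
# (as the report says eGrants records) shows nothing, because retention
# never crossed the flag's thresholds; the raw percentages tell a
# different story.
flag_history = ["no", "no", "no"]   # what a binary field captures
rate_history = [0.90, 0.82, 0.76]   # the actual retention rates

def declining(rates):
    """True if each year's rate is lower than the previous year's."""
    return all(later < earlier for earlier, later in zip(rates, rates[1:]))

# The binary history is flat, but the raw series shows a steady decline
# that staff could act on before retention reaches the flag's threshold.
print(set(flag_history))        # {'no'}
print(declining(rate_history))  # True
```

Capturing the raw percentage (rather than only the threshold flag) is what would make the longitudinal analysis officials described possible.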
CNCS officials also said that they have not fully evaluated the effectiveness of the types of monitoring activities the agency conducts and whether a different mix of these activities could result in improved monitoring outcomes, although the monitoring workgroup has discussed the need to do so. In particular, officials said that because on-site compliance visits can take up to three days and can be very costly, it would be helpful to have a lower-cost alternative, such as a desk review. According to CNCS policy, monitoring may consist of a visit or a desk review. However, we found that in fiscal year 2015, of the 506 grants that received a monitoring activity, only 9 received desk reviews. Several grant and program officers we spoke with commented that greater use of desk reviews could be helpful for identifying and responding to potential risks. CNCS has not conducted the strategic workforce planning necessary to determine whether it has the capacity—including both people and resources—to effectively monitor grantees’ compliance with grant program requirements. CNCS included plans to conduct a strategic workforce planning process in the agency’s strategic plan for fiscal years 2011 through 2015, but these plans were not implemented. Further, CNCS does not have a training planning process aligned with agency goals and competencies to help ensure program officers who have similar grant monitoring responsibilities receive the same needed training to perform their jobs effectively. CNCS has not developed or documented a strategic workforce planning process as key principles for effective strategic workforce planning suggest. Without such a planning process, CNCS’s efforts to address gaps in staffing due to attrition have been ad hoc and reactive. 
In addition, CNCS has not established whether differences in workload among grant and program officers in different programs and locations are reasonable, or whether these differences also result from ad hoc responses to attrition. Internal controls suggest that agencies demonstrate commitment to various workforce planning activities, such as succession planning, so that vacancies in key roles are filled with competent staff and the entity can continue achieving its objectives. CNCS has experienced considerable attrition over the past few years in positions that affected the agency’s capacity to conduct key monitoring functions, including staff in the OCRO (previously the Office of Accountability and Oversight), program and grant officers, and other officials at high levels within the agency. Specifically:

OCRO: Nine of 13 staff in the OCRO separated from CNCS in fiscal year 2015. According to the CNCS OIG, as a result of this attrition, as well as the departure of key OCRO staff members at the beginning of fiscal year 2016, this office had few, if any, staff members with sufficient training or experience in grant monitoring and improper payment assessments, among other responsibilities. Further, in its fiscal year 2015 financial report, CNCS reported that insufficient staff in this office limited its ability to conduct improper payment reviews. More recently, CNCS officials noted that the only staff person in the OCRO with the knowledge to interpret reports on aggregated data from the grant assessment component of CNCS’s eGrants information system left the agency in August 2016. At this time, CNCS was also completing fiscal year 2016 monitoring and beginning the fiscal year 2017 process, which officials said was challenging in the absence of OCRO staff with eGrants expertise. Officials also acknowledged that due to staffing and leadership changes, reviews of eGrants data had become infrequent.

Program and grant officers: During fiscal year 2015, vacancies occurred in 15 program officer and 4 grant officer positions. Officials said that CNCS’s efforts to refill positions were handled on an ad hoc basis, without a strategy for addressing turnover trends throughout the agency to maintain critical skills for monitoring and oversight. Our analysis of CNCS data indicates that these vacancies had an impact on the number of monitoring activities conducted in fiscal year 2015. Specifically, at least 58 desk reviews, drawdown analyses, and monitoring visits planned for fiscal year 2015 monitoring were not completed or were delayed because of staff shortages, in particular the loss of two grant officers. Several program and grant officers told us that turnover had also affected their workload, often requiring them to manage additional grants. For example, one grant officer said that in addition to his responsibilities for monitoring Senior Corps and VISTA grants, he took on monitoring responsibilities for 40 to 45 additional grants when a grant officer left the agency.

High-level officials: CNCS has also experienced considerable turnover of senior officials over the past few years. Between fiscal years 2012 and 2015, the Director of Accountability and Oversight, Chief Financial Officer, Chief Information Officer, Chief Human Capital Officer, and General Counsel departed the agency. In addition, the Director of the Office of Grants Management and the Director of the Field Financial Management Center also left during this time. 
In a previous report, we examined best practices for workforce planning based on a review of studies by leading workforce planning organizations and of federal agency workforce planning practices, and concluded that a strategic workforce plan is essential to addressing two critical needs: (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals; and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. In that report, we identified five principles that agencies should follow for effective strategic workforce planning: leadership involvement; determining critical skills and competencies; developing strategies to address gaps; building capacity to support workforce planning; and monitoring and evaluating progress toward human capital goals and programmatic results. We found that CNCS’s most recent strategic plan (for 2011-2015) had called for developing and implementing a strategic workforce plan that reflects a workforce assessment, identifies new competencies, and includes an ongoing assessment of CNCS’s key work requirements; however, as of fall 2016, the agency had not yet taken these actions, and it does not have a strategic plan in place for fiscal year 2016 or beyond. CNCS officials said that they recognize the importance of strategic workforce planning—which can help with key agency functions such as grant monitoring—and that efforts to refill vacancies had been ad hoc, but that they had not yet had an opportunity to begin such planning because of limited resources. In October 2016, officials said they would begin strategic workforce planning soon, but could provide no time frames or documentation for these planned efforts. 
In addition, we found that the planned monitoring workloads and responsibilities of program and grant officers varied across the agency, and that CNCS had not evaluated whether these differences reflect an appropriate deployment of resources to monitor grantees effectively. For example, data from eGrants showed that, across CNCS’s offices, program officers’ workloads ranged from 1 to 13 planned monitoring activities, while grant officers’ workloads ranged from 1 to 44 planned monitoring activities. Some grant officers and program officers we interviewed noted that not all grants require the same amount of time to monitor. For example, certain grantees require more day-to-day communication, depending on such things as their capacity, experience, and number of subrecipients. Program and grant officers also described recent changes to their workloads resulting in part from more comprehensive grant application reviews. A CNCS program management official told us that increasing responsibilities for grant application reviews, resulting from new initiatives and partnerships with other federal agencies, had reduced the amount of time available for grant monitoring. While the program or office where an officer works might explain some workload differences, CNCS officials told us that they have not evaluated whether staff have been deployed to the offices where they are most needed for grant monitoring. As a result, it is unclear whether workload differences are reasonable or whether they are affecting the agency’s capacity to monitor grantees effectively. Senior officials said that the grant monitoring workload distribution is based on the results of the annual assessment process, but they do not balance the number of high-value, high-priority grants for monitoring across grant and program officers. Instead, their reviews of the effectiveness of the agency’s workload distribution in meeting its monitoring objectives have been ad hoc. 
In the absence of a strategic workforce planning process that fully incorporates and is consistent with key principles for effective strategic workforce planning, CNCS’s efforts to address gaps in staffing and to deploy program and grant officers where they are most needed may continue to be ad hoc and reactive. CNCS does not ensure that all program officers with similar grant monitoring responsibilities are offered or receive the same needed training to perform their jobs effectively, as suggested by internal controls. Although program officers have been asked to conduct fiscal monitoring, not all CNCS offices have planning processes that ensure their program officers receive training on this topic. In addition, CNCS’s planning process for training is not aligned with agency goals and critical competencies; GAO’s guide for assessing training identifies such alignment as a way to focus strategically on improving performance. We reviewed CNCS’s training plans for new staff and lists of training offered to staff in each office and found that training offerings vary by office, including training on certain key areas of grant monitoring responsibility (see table 4). Program officers in the AmeriCorps State and National program office generally did not receive training on fiscal monitoring, even though they have been asked to conduct fiscal monitoring. Similarly, in fiscal year 2015, grant officers did not receive ongoing training on CNCS’s grant monitoring practices, although officials said that grant officers received this training in fiscal year 2016. While officials said that program officers in the Office of Field Liaison received training on fiscal monitoring, four program officers whom we interviewed said that they needed additional training on fiscal monitoring, in part to help them better understand fiscal terminology and review financial documents. 
One of the four program officers noted that because she is not as familiar with fiscal monitoring, she sometimes had to contact her grant officer with fiscal compliance questions while conducting a monitoring visit. In addition, a grant officer told us that training on CNCS’s grant monitoring tool, including specific examples of information that could be provided to respond to questions on the tool, would help grant and program officers meet their monitoring responsibilities more efficiently and effectively. Officials told us that variation in training opportunities occurred because each program and grant office is responsible for planning training for its staff at the individual level, and practices varied. At the office level, each CNCS office meets with the Office of Human Capital to discuss skills gaps, needs, and priorities of staff. For example, they consider such issues as training required by law and the number of staff in need of particular training. The Office of Human Capital administers a training needs survey to each office at the beginning of each fiscal year. At the individual level, officials told us that each year, employees work with their supervisors to develop a workplan that includes training requirements. However, CNCS’s training planning process is not aligned with agency goals and critical competencies, which GAO’s guide for assessing training identifies as essential for effective performance. In GAO’s guide for assessing training, we state that it is essential that agencies ensure training and development efforts are undertaken as an integral part of, and are driven by, their strategic performance planning processes. By taking this approach, agencies can help ensure that their training and development efforts are not initiated in an uncoordinated manner, but rather are focused on improving performance toward their goals as laid out in their strategic plan. 
Well-designed training and development programs, linked to agency goals and to the organizational, occupational, and individual skills and competencies, are needed for the agency to perform effectively. In our review of the documents outlining CNCS’s planning process for training, we found no links to agency goals for grant monitoring and associated competencies for program and grant officers. CNCS officials said that they have not reviewed competencies or assessed gaps in critical skills and competencies agency-wide since 2008, and that they have been continuing to operate under a 2011-2015 strategic plan. In September 2016, CNCS officials said that they plan to build an employee development program, but it is unclear whether this effort will link training to agency goals and competencies for program and grant officers. In the absence of a training planning process linked to current agency goals and competencies, program and grant officers may continue to receive inconsistent training opportunities, and the opportunities provided may not fully address important aspects of their grant monitoring responsibilities. CNCS programs support efforts across the country designed to strengthen communities and foster civic engagement through service and volunteering. To do this, CNCS awards grants totaling hundreds of millions of dollars annually and must ensure that those funds are used in accordance with program rules and federal requirements, such as OMB’s new Circular A-123. In particular, as CNCS implements these new requirements, further emphasis on assessing and responding to risks, as well as taking an agency-wide, strategic approach to its workforce planning, will be key to strengthening CNCS’s ability to effectively monitor its grants and to move toward a risk-based approach to these activities. Although CNCS has an assessment process for prioritizing grants for monitoring activities, there are limitations in the scoring model that underpins this assessment process. 
These limitations result in a process that does not fully identify potential risks, such as the potential for fraud, or ensure that the riskiest grants receive the highest priority for monitoring. Further, some grants are not included in this assessment process but are monitored regardless, without the benefit of information from the assessment. In addition, available documentation does not indicate how CNCS developed its indicators or their scoring, or how the agency has changed them over time. Taken together, these issues create vulnerabilities in CNCS’s ability to meet federal standards for internal control with respect to risk assessment, control activities, and monitoring principles. CNCS’s efforts to benchmark its assessment criteria and pilot new risk indicators are positive steps toward enhancing its approach to assessing risk and determining monitoring priorities; however, these efforts are in their early stages. Going forward, it will be important for CNCS to complete its benchmarking efforts and ensure that information from these efforts is used to address the limitations we identified in the agency’s scoring model and in its documentation of decisions about that model. Further, CNCS conducts limited reviews of how its grantees oversee their subrecipients, although subrecipient performance is critical to grant success. Reviewing monitoring protocols to ensure that they include collection of information on grantees’ oversight of subrecipients’ activities will help to identify and mitigate any risks posed by subrecipients. Also, CNCS has not evaluated its monitoring activities or gathered data systematically to enable analysis of how well its current efforts assess risk. By reviewing the outcomes and findings from its monitoring activities, CNCS will be better positioned to improve these processes and determine the effectiveness of these activities. 
CNCS has taken some steps in workforce planning to fill vacancies in monitoring staff and key senior management positions, but the agency has not developed a strategic workforce planning process, which would help to address its workforce challenges at a strategic, agency-wide level. In addition, departures of senior officials and staff with grant monitoring responsibilities have affected the agency’s capacity to conduct key monitoring functions, but these departures were handled on an ad hoc basis, without efforts designed to maintain critical skills for monitoring and oversight. Meanwhile, staff workloads and responsibilities have changed due to staff turnover and other factors, but CNCS has not reviewed its workload distribution on an agency-wide level. To ensure the agency can effectively monitor its grantees, it will be important for CNCS to take a strategic approach to workforce planning that addresses current and future agency needs. Finally, CNCS does not ensure that all program officers have opportunities for or receive training on their grant monitoring responsibilities, particularly for fiscal monitoring. Its training planning process is not aligned with agency goals and competencies, and the agency has not reassessed these competencies in a number of years. Updating competencies for grant monitoring and planning training to address agency goals and critical competencies would help CNCS ensure that its workforce can meet current and future grant monitoring needs. To improve CNCS’s efforts to move toward a risk-based process for monitoring grants and to improve its capacity for monitoring grantee compliance, we are making the following six recommendations to the Chief Executive Officer of the Corporation for National and Community Service: 1. 
Ensure that CNCS completes its efforts to benchmark its assessment criteria and scoring process to further develop a risk-based approach to grant monitoring and that information from this effort is used to (a) score the indicators so that the riskiest grants get the highest scores; (b) revise the assessment indicators to meaningfully cover all identifiable risks, including fraud and improper payments; and (c) document decisions on how indicators are selected and weighted.

2. Establish and implement a policy to ensure that all grants expected to be active in a fiscal year, including those awarded after the annual assessment, are assessed for potential risk.

3. Review monitoring protocols, including the level of information collected for oversight of subrecipients’ activities such as criminal history checks, and enhance protocols, as appropriate.

4. Establish activities to systematically evaluate grant monitoring results.

5. Develop and document a strategic workforce planning process.

6. As part of CNCS’s efforts to develop an employee development program, update critical competencies for grant monitoring, and establish a training planning process linked with agency goals and these competencies.

We provided CNCS a draft of this report for review and comment. CNCS did not comment on the report’s findings or recommendations, but did provide technical comments, which we incorporated as appropriate. We also incorporated technical comments received from CNCS OIG, as appropriate. We are sending copies of this report to the appropriate congressional committees and the Chief Executive Officer of the Corporation for National and Community Service. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Margie K. Shields, Assistant Director; Dana Z. Hopings, Analyst-in-Charge; Jason S. Palmer; and Sarah M. Martin made key contributions to this report. Alexander G. Galuten, Monica P. Savoy, Ruben Montes de Oca, Kathleen van Gelder, Michael L. Kniss, Amy Sweet, Nicholas Weeks, and James E. Bennett also provided assistance.
|
Created in 1993, CNCS distributes about $750 million in grants annually to volunteer and national service programs for needs ranging from disaster recovery to improving education. A 2014 CNCS Office of Inspector General (OIG) report cited problems with grant management. GAO was asked to review CNCS's efforts to improve its grant monitoring. This report examines (1) CNCS's process for grant monitoring; (2) the extent that this process aligns with relevant internal controls for identifying, analyzing, and responding to risk; and (3) the extent that CNCS has the capacity necessary to monitor grantees' compliance with grant requirements. GAO reviewed agency documents for fiscal years 2015 and 2016; analyzed fiscal year 2015 assessment and monitoring data (the most recent complete year of data available); interviewed agency officials and a nongeneralizable sample of program and grant officers who had experience with grants with negative outcomes, such as greater-than-expected monitoring needs or audit findings; and held discussion groups with a small nongeneralizable number of grantees attending two 2016 training conferences. The Corporation for National and Community Service (CNCS) assesses its grants before the beginning of each fiscal year and prioritizes its grant monitoring based on the scoring of certain indicators, such as potential performance or financial problems and the length of time since the last compliance visit. For fiscal year 2015, CNCS identified about 2,200 grants for assessment and prioritized 16.4 percent for compliance visits and 5.4 percent for other types of visits and financial reviews. In addition, each year CNCS selects a sample of grant records to review for improper payments. CNCS's process for grant monitoring is not fully aligned with the internal controls for identifying, analyzing, and responding to risks (see fig.). Specifically, because CNCS's assessment process does not include all grants, risks may go unidentified. 
Further, the assessment process uses a scoring model of 19 indicators to analyze and prioritize grants for monitoring visits rather than to identify the highest-risk grants. For example, multiple financial risks are grouped together under one indicator, including for improper payments, and a grant found to have such risks would not be scored as high priority for monitoring based on this indicator alone. In addition, while nearly half of CNCS grant dollars are passed through to other organizations (referred to as subrecipients) and evidence indicates that subrecipient oversight is a key risk area, CNCS’s monitoring of grantees’ oversight of subrecipients is limited, leaving the agency’s response to risk vulnerable in this area.

CNCS has not conducted the strategic workforce planning necessary to determine whether it has the people and resources to effectively monitor grantees’ compliance with grant program requirements, as key principles for effective strategic workforce planning suggest. CNCS’s workforce management activities to address vacancies have been largely ad hoc, including vacancies in a key office responsible for grant monitoring, at senior levels across the agency, and among program and grant officers. Some of these vacancies reduced the number of fiscal year 2015 monitoring activities conducted. Further, program and grant officers’ workloads varied across the agency, and CNCS has not evaluated whether staff have been deployed where they are most needed. Officials said they had not developed a strategic workforce planning process because of limited resources. Without such a process, CNCS’s efforts to address workforce challenges may continue to be ad hoc and reactive. GAO is making six recommendations to CNCS, including to ensure that all grants are assessed for risk and that its scoring model prioritizes risk; to review its monitoring protocols; and to develop a strategic workforce planning process. 
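The scoring weakness described above (several financial risks grouped under a single indicator, so that no one risk by itself raises a grant's monitoring priority) can be illustrated with a minimal, hypothetical weighted-scoring sketch. The indicator names and weights below are invented for illustration; the report does not publish CNCS's actual 19-indicator model.

```python
# Hypothetical illustration of a weighted risk-scoring model for grant
# monitoring. Indicator names and weights are invented; they are NOT
# CNCS's actual model.

def risk_score(indicators, weights):
    """Weighted sum of indicator flags (1 = risk present, 0 = absent)."""
    return sum(weights[name] * flag for name, flag in indicators.items())

# Grouped model: several financial risks share one indicator, so a grant
# whose only issue is improper-payment risk earns the same modest score
# as a grant with any other single financial issue.
grouped_weights = {"financial": 2.0, "performance": 1.0, "time_since_visit": 1.0}
grant = {"financial": 1, "performance": 0, "time_since_visit": 0}
print(risk_score(grant, grouped_weights))  # 2.0

# Split model: scoring improper payments as its own (higher-weighted)
# indicator lets the riskiest grants rise to the top of the queue.
split_weights = {"improper_payments": 4.0, "other_financial": 1.0,
                 "performance": 1.0, "time_since_visit": 1.0}
grant_split = {"improper_payments": 1, "other_financial": 0,
               "performance": 0, "time_since_visit": 0}
print(risk_score(grant_split, split_weights))  # 4.0
```

The point of the sketch is structural: whether a given risk can surface at the top of the monitoring queue depends on how indicators are split and weighted, which is why the recommendations call for documenting those decisions.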
CNCS and CNCS OIG provided technical comments, which were incorporated as appropriate.
|
While the majority of nonprofits individually have relatively small operating budgets, as a whole, the nonprofit sector has a significant presence in the U.S. economy, according to researchers of the nonprofit sector. For example:

In 2004, nonprofit organizations that submitted Forms 990 to IRS held an estimated $3 trillion in total assets and received $1.4 trillion in revenues.

During the period 1998 through 2002, spending reported by tax-exempt entities was roughly 11 to 12 percent of the nation’s gross domestic product.

The tax-exempt sector had over 9.6 million employees, about 9 percent of the civilian workforce in 2002.

Wages and salaries paid to nonprofit sector employees comprised 8.3 percent of those paid in the U.S. in 2004.

In addition to representing a significant portion of the U.S. economy, the sector is growing. Data indicate that from May 2000 to May 2006, the number of registered public charities grew over 30 percent, from about 646,000 to about 851,000, although those numbers may include organizations that have gone out of existence. Other data also suggest growth in the sector. As shown in figure 2, the number of 501(c)(3) organizations completing the Form 990 almost tripled over the last two decades (from 1986 to 2006), from about 148,000 to about 427,000. Experts have identified several possible contributing reasons for this increase: a shift in recent decades away from government providing most services directly; the expansion of service-related industries in the U.S., of which many nonprofits are a part; deinstitutionalization during the 1960s and 1970s, which eliminated large, public care facilities in favor of smaller, community-based organizations often operated by nonprofit entities; and the trend toward devolution in certain policy areas such as welfare, which contributed to a lessening role of the federal government and more localized control in the hands of state, local, and nonprofit organizations. 
Nonprofit organizations are found in a wide variety of policy areas such as health care, education, and human services, and include many prominent and highly visible community institutions, such as hospitals, museums, job training centers, and churches. (See a list of categories in app. 2.) These organizations also represent a diverse range of sizes. According to the Independent Sector, 73 percent had annual budgets of less than $500,000 in 2004 and only 4 percent had budgets exceeding $10 million. Much of the data on the sector come from the IRS Form 990, but those data have limitations. For example, returned Forms 990 are sometimes incomplete or inaccurate and are not consistently followed up on, and some nonprofit organizations required to submit Forms 990 do not do so. In addition, for certain types of funding, the Form 990 does not distinguish between government and private sources of support. It also does not break out the sources of government grants by federal, state, or local level. We have pointed out in the past the importance of requiring information in a more timely and user-friendly way on IRS Forms 990. Nonprofit organizations bring many strengths to their partnerships with the federal government. Their breadth and diversity allow the sector to address the specific needs of communities and of individuals. Researchers commenting on the advantages of nonprofits point out the provision of benefits in the public interest, often with greater flexibility and access than can be achieved by the public sector. Nonprofits often bring an in-depth understanding of a particular geographic area or special population and have access to underserved populations. Nonprofit organizations play a large and increasing role in delivering services traditionally provided by the government, according to researchers. Their research indicates that nonprofit organizations receive significant funds from government sources and that over time these funds have increased. 
As we previously noted, data are limited, but researchers have attempted to analyze data from various sources and identify trends in federal funding to nonprofits. Their work offers a glimpse into the magnitude of federal funds going to nonprofits, but does not provide a comprehensive analysis of the various funding streams. For example:

Researchers have reported that the federal government provided about $115 billion directly to nonprofits in fiscal year 2001, the majority of which hospitals received through the Medicare program. Indirect federal funds through state and local governments to nonprofits were an estimated $84 billion, for a total of about $199 billion, or about 15 percent of federal payments and grants.

Data from other researchers indicate that the federal government spent an estimated $317 billion on nonprofit organizations in fiscal year 2004.

Researchers estimate that federal support to nonprofit organizations increased more than 230 percent from fiscal year 1980 to fiscal year 2004 in adjusted dollars.

Federal funds reach nonprofit organizations through many paths (see fig. 3). Some flow directly from federal agencies to nonprofit organizations, such as research grants to universities. Some funds flow to states as grants, whose funds may flow to nonprofit organizations, or may flow to local governments that compensate nonprofit organizations for services with those funds. Also, some federal funds move to nonprofits on the basis of individuals’ decisions, that is, from federal programs to nonprofits selected by the consumer, such as for health care. In addition to direct and indirect federal funds, nonprofit organizations benefit from being tax-exempt and also from other tax policies, such as donors’ ability to deduct contributions on their taxes. The current federal oversight of nonprofits is focused on organizations’ tax-exempt status and on specific programs. 
However, there is less focus on understanding the overall role of nonprofits as implementers of national and federal initiatives, and how to best ensure that nonprofits have the support they need. As we spoke with researchers and practitioners, several issues emerged as needing attention in order to ensure the strength of this important partner to the federal government. We have looked at specific issues involving nonprofit organizations over the years, but our past work was largely related to specific programs. We heard several common issues while taking this more comprehensive look at nonprofit organizations’ interaction with the federal government (see fig. 4). Coordination and collaboration—One theme that surfaced in our preliminary research was the importance and value of coordination and collaboration between nonprofit organizations and government at all levels. As we pointed out in our work on 21st century challenges, the government relies increasingly on new networks and partnerships to achieve critical results and develop public policy, often including multiple federal agencies, non- or quasi-government organizations, for-profit and nonprofit contractors, and state and local governments. A complex network of governmental and nongovernmental entities shape the actual outcomes achieved, whether it be through formal partnerships in grant programs or through independent actions of each addressing common problems. For example, our research on disaster relief efforts following September 11 and Hurricanes Rita and Katrina highlighted the role of nonprofits in providing assistance and the importance of communication and coordination of services with government entities. We pointed out that the scope and complexity of the September 11 attacks presented challenges to charities in their attempts to provide seamless social services for surviving family members and others in need of aid. 
With regard to the response to Hurricanes Katrina and Rita, we noted that charities could improve coordination among themselves and with the Federal Emergency Management Agency. We believe that many of the key practices that help enhance and sustain collaboration among federal agencies can be helpful between government and nonprofit organizations, such as when both parties collaborate to define and articulate a common outcome; establish mutually reinforcing or joint strategies; identify and address needs by leveraging resources; agree upon roles and responsibilities; establish compatible policies, procedures, and other means to operate across agency boundaries; develop mechanisms to monitor, evaluate, and report the results of collaborative efforts; reinforce accountability for collaborative efforts through plans and reports; and reinforce individual accountability for collaborative efforts through performance management systems. Internal governance issues—A second theme that surfaced in our preliminary research was the need to strengthen governance of nonprofit organizations, a point made by the sector itself as well as by others. At the organization level, a sound governance structure can establish the set of checks and balances that help steer an entity toward result-oriented outcomes consistent with its purposes while also guarding against abuses. Concerns about accountability and transparency of nonprofit organizations have grown in recent years. In 2004 and 2005, the Senate Finance Committee held hearings to look more closely at practices that are illegal or not in keeping with standards typical of the charitable sector, and released a discussion draft of possible solutions. In October 2004, the Independent Sector convened a panel, whose report made several recommendations to address concerns. The panel continues to focus on self-regulation as a way to address these concerns, although there are mixed opinions on the potential success of self-regulation. 
In addition, several efforts are under way within the sector to raise awareness of ways to improve internal governance of nonprofits, including associations focusing on providing training or consulting, and national certification processes. Capacity—Another area to which researchers suggest attention should be paid is improving the capacity that smaller nonprofit organizations have to address weaknesses in finances, administration, and human capital. Many nonprofits struggling to accomplish their mission on limited budgets lack the resources that could allow them to better manage their finances and strengthen their infrastructure. In addition, particularly in smaller nonprofit organizations, the strengths of board members may be in addressing their organization’s mission, and they may lack legal and financial knowledge or the skills necessary to oversee a nonprofit entity. One specific area identified as needing attention is the development of human capital, as these organizations need to address a complex set of issues, such as competition for service workers, leadership succession, and staff turnover. One promising change is the increase in graduate programs offering a concentration in nonprofit management from 17 in 1990 to 97 in 2001. While there has not been a comprehensive effort by the federal government to improve the capacity of nonprofit organizations, several federal programs provide capacity-building grant funding and technical assistance to nonprofits. Providing assistance to improve capacity may be one area where the federal government could employ a more strategic approach. Nonprofit sector data—As I mentioned earlier, there is a lack of sufficient knowledge on a key federal government partner and its role. Researchers point out that without better data on the nonprofit sector as a whole, appropriate and timely policy decisions regarding nonprofits cannot be made. Some actions under way may improve information on tax-exempt organizations. 
Beginning in 2008, small tax-exempt organizations that previously were not required to file Form 990 returns will, with some exceptions (such as churches), be required to file a shorter notification form electronically. In July 2007, IRS began mailing educational letters to over 650,000 small tax-exempt organizations that may be required to submit the notice. Further, IRS is seeking comments on a redesigned Form 990, intended to provide a realistic picture of organizations and to accurately reflect their operations and use of assets. In addition to the Form 990, other sources of data have also been used to better understand the sector, such as Bureau of Labor Statistics employment data, but continued access to those data has been a problem. In addition, the funds to perform the analysis generally come from the nonprofit sector and are not consistently available. Administrative and reporting requirements—Practitioners and researchers alike addressed the difficulty that nonprofit organizations, particularly smaller entities, have in responding to the administrative and reporting requirements of their diverse funders. While funders need accountability, the diverse requirements of different funders make reporting a time-consuming and resource-intensive task. Experts report that both government and foundations have increasing expectations that nonprofits conduct performance measurement, but meeting those expectations, given the size of grants and the evaluation capabilities of the staff, can be difficult. One researcher said that practitioners report performance evaluation as one of the biggest challenges they face, given their capacity issues. Fiscal challenges for nonprofits—Nonprofit organizations, particularly smaller entities, often operate with limited budgets and have limited capital. As one researcher noted, the logic of the business world is “upended” with nonprofit organizations. 
Researchers and practitioners have pointed out that nonprofit organizations often have inadequate funds to invest in management infrastructure and that government and private foundations have not provided them adequate overhead funding to, for example, pay salaries to attract employees with needed skills or upgrade systems that would maximize efficiency. Funders—federal, state and local governments, foundations, and private donors—are willing to pay varying amounts toward overhead, resulting in nonprofit organizations needing to sometimes turn to other sources to cover their overhead costs. We believe this is an area in which more data are needed to fully understand the implications of reimbursement for overhead charges. Virtually every American interacts with the nonprofit sector in his or her daily life through a broad range of concerns and activities such as health care, education, human services, job training, religion, and cultural pursuits. In addition, federal, state, and local governments rely on nonprofit organizations as key partners in implementing programs and providing services to the public. Given the way the sector is woven into the basic fabric of our society, it is essential we maintain and cultivate its inherent strength and vitality and have accurate and reliable data on the overall size and funding flows to the sector. Keys to a healthy nonprofit sector include strengthening governance, enhancing capacity, ensuring financial viability, and improving data quality without overly burdening the sector with unnecessary or duplicative reporting and administrative requirements. At the request of the Congress, we are beginning work to examine these issues further. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have. For further information on this testimony, please contact Stanley Czerwinski at (202) 512-6806. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include David Bobruff, Tom James, Heddi Nieuwsma, Carol Patey, and Tom Short.

Tax-exempt organization: An entity determined to be exempt from federal income taxes.

Nonprofit status: A state-law concept under which approved entities may be eligible for exemption from sales, property, and state income taxes.

Section 501(c)(3) organization: An organization that has an exempt purpose, such as serving the poor; advancing religious, educational, and scientific endeavors; protecting human rights; or addressing various other social problems.

IRS Form 990: An IRS information return that many tax-exempt entities meeting certain requirements must file annually.
|
The nonprofit sector is an important means through which public services are delivered and national goals addressed. The federal government increasingly relies on networks, often involving nonprofits that address many issues--health care, education, and human services, for example. Because nonprofit organizations play a key role as partners with the federal government, there is a need to better understand the sector. This testimony (1) provides a picture of the nonprofit sector--its size, composition, and role in the economy; (2) discusses how and why the federal government partners with the sector; and (3) identifies issues related to the sector as a federal partner that need to be better understood. GAO's preliminary work on this topic focused on the intersection of nonprofit organizations and the federal government, including trends, the use of federal funding, and emerging issues. GAO interviewed key experts from relevant associations and academia, reviewed related research, and hosted roundtable discussions with key researchers and practitioners in the nonprofit area. U.S. nonprofit organizations have a significant role both in the economy as a whole and as providers of services. While the majority of nonprofit organizations have relatively small operating budgets, together their impact is large. For example, researchers estimate that the sector's spending in recent years was roughly 11 to 12 percent of the nation's gross domestic product and, in 2002, the sector had over 9.6 million employees, about 9 percent of the civilian workforce. Further, the sector has grown; the number of charitable organizations reporting almost tripled over the last two decades. The federal government increasingly partners with nonprofit organizations as they bring many strengths to these partnerships, such as flexibility to respond to needs and access to those needing services. These organizations receive significant funds from government sources to provide services. 
Researchers have attempted to quantify these funds. For example, one estimate is that the federal government spent about $317 billion on nonprofit organizations in fiscal year 2004. However, the lack of data makes measuring federal funds to nonprofit organizations difficult. Many funds come through indirect routes, such as through state and local government, adding to the difficulty of determining funding and measuring performance. Although IRS is generally responsible for overseeing the tax-exempt status of these organizations, there is less focus at the federal level on the comprehensive role of nonprofits in providing services using federal funds. Our preliminary look at how the federal government interacts with the nonprofit sector indicates that several policy issues have emerged, for example: (1) Coordination and collaboration--the increasing importance of collaboration between all levels of the government and nonprofit organizations. (2) Internal governance issues--the need to strengthen internal governance of nonprofit organizations. (3) Capacity--the need to improve smaller nonprofit organizations' capacity to address weaknesses in finances, administration, and human capital. (4) Nonprofit sector data--the need for improved data on the sector's size, financial status, and funds from federal sources. (5) Administrative and reporting requirements--the many requirements to be accountable, which while important and necessary, require information in different formats and with increasing complexity. (6) Fiscal challenges for nonprofits--the instability of some nonprofits' financial position. At the request of the Congress, we are beginning work to examine these issues further.
|
Administered by SBA’s Office of Disaster Assistance (ODA), the Disaster Loan Program is the primary federal program for funding long-range recovery for nonfarm businesses that are victims of disasters and is the only form of SBA assistance not limited to small businesses. SBA can make available several types of disaster loans, including two types of direct loans: physical disaster loans and economic injury disaster loans. Physical disaster loans are for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. They are available to homeowners, renters, businesses of all sizes, and nonprofit organizations. These loans are intended to repair or replace the disaster victims’ damaged property to its predisaster condition up to a certain capped amount. Economic injury disaster loans provide small businesses that are not able to obtain credit elsewhere with necessary working capital until normal operations resume after a disaster declaration. The loans cover operating expenses the business could have paid had the disaster not occurred. Not all businesses are eligible for both types of loans. Businesses of all sizes may apply for physical disaster loans, but only small businesses are eligible for economic injury loans. Congress enacted the Small Business Disaster Response and Loan Improvements Act of 2008 to expand steps taken by SBA after Hurricane Katrina and require new measures to help ensure that SBA would be prepared for future disasters. The act includes three provisions requiring SBA to issue regulations to establish new guaranteed disaster programs using private-sector lenders:

Expedited Disaster Assistance Loan Program (EDALP) would provide small businesses with expedited access to short-term guaranteed loans of up to $150,000.

Immediate Disaster Assistance Program (IDAP) would provide small businesses with guaranteed bridge loans of up to $25,000 from private-sector lenders, with an SBA decision within 36 hours of a lender’s application on behalf of a borrower.

Private Disaster Assistance Program (PDAP) would make guaranteed loans available to homeowners and small businesses in an amount up to $2 million.

The following section discusses the extent to which SBA met goals for timely processing of business loan applications and factors affecting timeliness; changes SBA made to address processing issues; and challenges that business organizations identified to timely receipt of assistance. Following Hurricane Sandy, SBA did not meet its goal to process business loan applications within 21 days from receipt to loan decision. SBA took an average of 45 days for physical disaster loan applications and 38 days for economic injury applications. The average processing time for business loans peaked in March 2013 (5 months after the storm); business loans for which SBA reached a decision in March 2013 had spent nearly 60 days being processed, on average. One year after the storm, processing times for business loan applications still exceeded 21 days. A backlog of applications that were “in processing” (meaning SBA had not yet made a loan decision) grew rapidly over the course of SBA’s response to the disaster (see fig. 1). SBA said that in the aftermath of Hurricane Sandy, it was challenged by a high volume of loan applications submitted at a faster rate than it had experienced in previous disasters. SBA’s initial estimates of when it would receive applications differed from when it actually received them. 
To prepare for a disaster, SBA uses assumptions about the volume and timing of the applications it expects to receive based on historical data—known as the “application intake curve.” These assumptions serve as inputs to forecasting models that predict the staff levels necessary to meet processing needs. According to the application intake curve for Hurricane Sandy, SBA estimated that application submission would peak about 7–9 weeks after Sandy. However, as shown in figure 2, SBA began receiving business applications earlier. According to SBA, the early spike in applications occurred because a majority of applications were submitted electronically rather than on paper, which resulted in a large volume of applications within a few days of the disaster. SBA stated that the earlier receipt of electronic submissions was caused by the convenience and speed of the Internet-based application as well as the elimination of postal handling time. While SBA created web-based loan applications to simplify and expedite the application process and encouraged electronic submissions, SBA noted that it did not anticipate receiving such a large volume of electronic loan applications early in its response to Hurricane Sandy. Based on its experience in fiscal year 2012, SBA initially estimated that it would receive between 11,000 and 21,600 business disaster loan applications after Sandy and 36 percent of all applications would be submitted electronically. Following Sandy, SBA received 15,745 business disaster loan applications, and 55 percent of all applications were submitted electronically. At the time of our report, SBA had not updated its key disaster planning documents—the Disaster Preparedness and Recovery Plan and the Disaster Playbook—to adjust for the effects that a Sandy-like surge in early applications could have on staffing, resources, and forecasting models for future disasters. 
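The role the intake curve plays in forecasting can be sketched as follows: the curve gives the share of total expected applications received in each week after the disaster, and multiplying it by a total-volume estimate yields expected weekly receipts. The weekly shares below are hypothetical (the report says only that SBA expected intake to peak about 7–9 weeks after Sandy); the 21,600 figure is SBA's initial upper-bound estimate of business applications.

```python
# Minimal sketch of an "application intake curve" feeding a demand
# forecast. The curve values are hypothetical, invented to peak in
# weeks 7-9 as the report describes; only the 21,600 total comes from
# SBA's initial estimate range.

def weekly_receipts(total_estimate, intake_curve):
    """Expected applications received in each week after the disaster."""
    return [round(total_estimate * share) for share in intake_curve]

# Hypothetical 10-week curve (shares sum to 1.0) peaking in weeks 7-9:
curve = [0.02, 0.04, 0.07, 0.10, 0.12, 0.13, 0.15, 0.15, 0.14, 0.08]
print(weekly_receipts(21600, curve))
```

A Sandy-like surge of early electronic submissions amounts to the curve's mass shifting toward the first weeks, which is why the same total estimate can still leave staffing badly mistimed.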
Federal internal control standards state that management should identify risk (with methods that can include forecasting and strategic planning) and then analyze the risks for their possible effect. According to SBA’s Preparedness and Recovery Plan, the primary goals of forecasting and modeling are to predict as accurately as possible the application volume that will result from a disaster and the timing of application receipt. Without taking its experience with early application submissions after Hurricane Sandy into account, SBA risked being unprepared for such a situation in future disaster responses, potentially resulting in delays in disbursing loan funds to disaster victims. We therefore recommended that SBA revise its disaster planning documents to anticipate the potential impact of early application submissions on staffing and resources for future disasters, as well as the risk this impact might pose for timely disaster response. In response to our recommendation, SBA has updated its Disaster Playbook. The changes SBA made include explicit recognition of the effects that high volumes of loan applications early in the response period could have on staffing and loan processing. Our review to determine if these changes addressed our recommendation remains ongoing. Another factor that affected the timeliness of disaster assistance was inaccurate expectations for application processing rates, which caused SBA to delay its decision to increase staff levels. ODA officials said the agency’s processing and disbursement center communicated inaccurate production estimates to ODA headquarters, which led to delays in increasing staff levels to respond to the early influx of applications. ODA officials said that the center’s management projected a loan officer could process an average of 3 home loan applications and 1.5 business loan applications per day, for a combined average of 2.25 disaster loan applications. 
However, this expectation was not met over the course of the response. Because the estimates were based on production benchmarks established after Hurricane Katrina, ODA officials noted that they relied on the estimates and delayed their decision to increase staff. ODA officials said they later recognized the past rate was not an appropriate indicator of production for Sandy due to factors including differences in the types of businesses affected and the larger number of approved applications. As shown in figure 3, ODA ultimately added loan officers to two agency locations (Buffalo and Sacramento) after the peak months of receipts. We reported in September 2014 that ODA told us it subsequently made several changes regarding communication with the processing and disbursement center and staffing increases. The center was required to produce a new series of daily reports for ODA headquarters to improve communication during future disasters. Specifically, these reports include more detailed information on production rates, number of applications submitted, and size of the application backlog. ODA also created a standard template for requesting and justifying additional staff that included information such as current and expected performance. At the time of our report, SBA also was determining whether it needed to add permanent loan processing staff to offices other than the processing and disbursement center to respond to disasters. To address challenges with providing timely assistance following Hurricane Sandy, SBA made various changes to its loan processing approach, DCMS, and loan officer training. However, as we stated in 2014, because SBA has not received a large volume of applications since Hurricane Sandy, it is too early to determine whether these changes will improve the timeliness of SBA’s response for future disasters similar in magnitude to Sandy. 
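The production benchmarks above translate into staffing needs through simple arithmetic: each backlog divided by its per-officer daily rate gives officer-days of work, and dividing by the target number of days gives required officers. A minimal sketch, using the per-officer rates from the report (3 home and 1.5 business applications per day) with hypothetical backlog figures:

```python
# Back-of-the-envelope staffing arithmetic implied by ODA's production
# benchmarks. The per-officer rates come from the report; the backlog
# sizes and target days below are hypothetical inputs for illustration.
import math

HOME_RATE = 3.0       # home loan applications per officer per day
BUSINESS_RATE = 1.5   # business loan applications per officer per day

def officers_needed(home_backlog, business_backlog, target_days):
    """Officers required to clear both backlogs within target_days."""
    officer_days = home_backlog / HOME_RATE + business_backlog / BUSINESS_RATE
    return math.ceil(officer_days / target_days)

# E.g., a hypothetical backlog of 9,000 home and 3,000 business
# applications, to be cleared within SBA's 21-day processing goal:
print(officers_needed(9000, 3000, 21))  # 239
```

The sketch also shows why an overstated production rate delays staffing decisions: if the assumed rates are too high, the computed officer requirement comes out too low, and the shortfall only becomes visible once the backlog has already grown.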
Loan processing approach: SBA used to process loans in the order in which they were received, regardless of whether the applicant was a business or homeowner. After Sandy, SBA received more than four times as many home loan applications as business applications, and these home loan applications were received earlier. As a result, business owners faced delays due to the large number of home loan applications submitted ahead of them. In October 2013, SBA put in place two separate application tracks for home and business loans. DCMS challenges: Over the course of its response, SBA encountered various challenges with DCMS, including server hardware crashes and periods of system latency (slowness and freezing), which added to some delays faced by business owners in receiving disaster assistance. In 2014, we reported that according to SBA, the agency was taking steps to improve DCMS for future disasters. For example, SBA planned to institute a process for updating system equipment (including conducting a baseline inventory and implementing a plan to replace outdated hardware). SBA officials said the inventory had been validated and the plan completed. In addition, SBA officials said the agency made improvements to its DCMS Help Desk, which responds to loan officers who experience system issues. Loan officer training: Most of the additional processing staff, particularly in Sacramento, were new hires, but SBA found that the new officers were not effectively trained to quickly respond to the backlog of business applications. In 2014, we reported that SBA revised its loan officer training for future disasters. For instance, all loan officers had to complete a revised training course for processing business loans. SBA officials also noted that the agency reorganized loan officers into two groups that specialize in processing home and business loans based on the previously mentioned changes to the loan processing approach. 
Select Small Business Development Centers and local business organizations in New York and New Jersey with which we met in 2014 identified two main challenges (from the perspective of small businesses) that affected the timeliness of assistance: time-consuming loan documentation requirements and lack of SBA follow-up. We reported on steps SBA said it would take to address these two challenges. Nearly all 14 development centers and local business organizations noted that meeting documentation requirements for applications was time-consuming and onerous to business owners. SBA officials said that the agency was taking several steps to streamline the documentation requirements for applicants. Specifically, SBA examined the entire loan application process to identify and eliminate documents that did not help loan officers make a decision on an application. According to SBA officials, the proposed changes to the required documentation were drafted and would be incorporated by the end of 2014 in the disaster loan program’s standard operating procedures. Furthermore, SBA took steps to reduce documentation requirements for applicants with strong credit scores by amending regulations to allow the agency to rely on credit scoring rather than cash flow when determining an applicant’s ability to repay. More than half of the entities with which we met said that business owners noted a lack of SBA contact after submitting their applications, and many owners were unaware of the status of their application throughout the process, including whether or not it had been received at the processing center. Additionally, five entities noted a lack of continuity with loan officers or case managers over the course of the application process. Two of these five entities said that some business owners had had up to eight different loan officers or case managers.
In addition, these five entities reported that submitted documentation and information were lost when loan officers and case managers changed. According to SBA officials, due to the physical damage caused by Hurricane Sandy, it was difficult for loan officers and case managers to contact applicants by telephone or e-mail despite their efforts. SBA officials told us that an applicant might have more than one loan officer or case manager for several reasons, such as when application numbers increased or if current loan officers or case managers had to supervise newer staff. SBA officials also told us that some documents could be misplaced due to the multiple ways applicants could submit information to the processing and disbursement center. In addition, some documents may not have been misplaced; rather, they may not yet have been entered into DCMS and thus were unavailable for loan officers to view. According to SBA officials in 2014, efforts to process electronic application submissions more effectively would address these issues. The officials said SBA expected to create an electronic portal that would share information with applicants on the status of their applications and documents received, thus increasing transparency and communication during the loan application process. As explained previously, for our 2014 report we compared SBA’s approval, withdrawal, and cancellation rates for business loans after selected disasters. In comparison with the other disasters, the approval rate after Sandy was not consistently higher or lower, but withdrawal and cancellation rates were consistently higher. Approval rates. The approval rate for business loan applications for Hurricane Sandy (42 percent) was lower than for Hurricanes Katrina, Rita, and Wilma, higher than for Hurricane Ike, and comparable to the rate for Hurricane Irene. 
However, when taking home loan applications into account, Sandy resulted in the highest total approval rate (53 percent) in comparison to the five other disasters. The primary reasons for which SBA declined business loan applications after each of the disasters remained the same: lack of repayment ability and unsatisfactory credit history. Following Hurricane Sandy, SBA received 14,938 business loan applications and declined 5,663 as of January 31, 2014. Of the declined applications, SBA cited lack of repayment ability as at least one of the reasons on 2,644 applications (47 percent), and unsatisfactory credit history as at least one of the reasons on 2,317 applications (41 percent). Withdrawals. Application withdrawal rates were higher after Sandy than after the other disasters. Of the 14,558 original business loan applications that had reached a decision status by January 31, 2014, 4,715 (approximately 32 percent) had been withdrawn by SBA or the applicant. The withdrawal rates for the previous disasters ranged from approximately 18 percent (Ike) to approximately 23 percent (Katrina and Wilma). For Hurricane Sandy, SBA withdrew approximately 60 percent of the 4,715 applications, while applicants requested withdrawal for the remaining 40 percent. The 60 percent figure for SBA-initiated withdrawals was higher than for two of the other disasters and lower than for the other three. The leading reason for withdrawals after Sandy was the applicant’s failure to provide SBA with all requested information (1,542 withdrawals, or approximately 33 percent of all withdrawn applications). Cancellations. Of the 4,180 business loan applications SBA approved for Hurricane Sandy, 1,578 (38 percent) had been cancelled as of January 31, 2014—a rate higher than for the other disasters. The other cancellation rates ranged from approximately 22 percent (Wilma) to approximately 30 percent (Ike and Katrina).
Of the business loans cancelled after Hurricane Sandy, borrowers requested cancellation of 1,171 loans (74 percent), while SBA cancelled 407 (26 percent). The most common reason for SBA-initiated cancellations was “failure to complete and return all loan closing documents,” representing 336 cancellations (21 percent). According to SBA, factors affecting the withdrawal and cancellation rates for Hurricane Sandy included higher rates of insurance coverage in the footprint of the disaster area and the availability of alternative sources of recovery aid (such as grants). Officials told us that the rollout of programs funded by the Department of Housing and Urban Development’s Community Development Block Grant program began earlier than in past disasters, and that state grantees—specifically New Jersey and New York—obtained those funds and accepted applications for their respective state grant programs shortly after the disaster struck. Half of the entities with which we spoke—selected business development centers and local business organizations in New Jersey and New York—provided perspectives on the most common reasons why applications were withdrawn after Sandy. For instance, business owners commonly withdrew applications because they had changed their plans for funding their recovery (for example, they may have received insurance claim proceeds or state grants). Entities also noted other reasons, such as frustration with waiting times for loan processing and a desire not to incur additional debt. In 2014 we reported that 6 years after Congress passed the Small Business Disaster Response and Loan Improvements Act of 2008, SBA had not piloted or implemented three guaranteed disaster loan programs, which therefore had not been available after Hurricane Sandy.
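For readers checking the figures, the percentages in the statistics above are simple ratios of the reported counts. The following is an illustrative calculation only (the counts are copied from the text as of January 31, 2014; `pct` is a helper defined here, not an SBA tool or methodology):

```python
def pct(part, whole):
    """Percentage of part in whole, rounded to the nearest whole point."""
    return round(100 * part / whole)

# Withdrawals: 4,715 of 14,558 decided business loan applications.
withdrawal_rate = pct(4_715, 14_558)    # approximately 32 percent

# Cancellations: 1,578 of 4,180 approved business loans.
cancellation_rate = pct(1_578, 4_180)   # approximately 38 percent

# Decline reasons cited among the 5,663 declined applications.
repayment = pct(2_644, 5_663)           # approximately 47 percent
credit = pct(2_317, 5_663)              # approximately 41 percent

# Borrower- vs. SBA-initiated shares of the 1,578 cancellations.
borrower_share = pct(1_171, 1_578)      # approximately 74 percent
sba_share = pct(407, 1_578)             # approximately 26 percent

print(withdrawal_rate, cancellation_rate, repayment, credit,
      borrower_share, sba_share)
```

Note that decline-reason percentages can sum to more than 100 because SBA could cite more than one reason per declined application.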
As previously discussed, the act mandated the creation of the Immediate Disaster Assistance Program (IDAP), the Expedited Disaster Assistance Loan Program (EDALP), and the Private Disaster Assistance Program (PDAP). According to SBA officials, the agency opted to implement IDAP first, because the loan limit was lower than in the other two programs and SBA received appropriations to pilot this program. We had examined SBA’s implementation plans before 2014. In a July 2009 report, we noted that SBA was planning to implement requirements of the 2008 act, including pilot programs for IDAP and EDALP. SBA requested funding to carry out requirements for the two programs in the President’s budget for fiscal year 2010 and received subsidy and administrative cost funding of $3 million in the 2010 appropriation, which would have allowed the agency to pilot about 600 loans under IDAP. The agency issued regulations for IDAP in October 2010. In May 2010, SBA told us that its goal was to have the pilot for IDAP in place by September 2010. We concluded that because the implementation process already was behind schedule, it would be important for SBA to ensure it had a plan to implement remaining requirements of the 2008 act and report on its progress to Congress. We therefore recommended that SBA develop an implementation plan and report to Congress on progress in addressing all requirements within the act and include milestone dates for completing implementation and any major program, resource, or other challenges the agency faced. However, as of August 2014, the pilot program for IDAP had not yet started. According to SBA officials, the program had not been implemented for two primary reasons: (1) information technology challenges and (2) feedback from lenders indicating that program requirements might hinder lender participation. First, the electronic systems that would be used to process IDAP applications did not interface smoothly. 
According to SBA officials, IDAP’s readiness was in part based on the ability of E-Tran, the loan processing system for the 7(a) program, to interface with DCMS, the loan processing system for the Disaster Loan Program. Officials said that a new information technology system was being developed—SBA One. They also said that for IDAP application processing, it would be more efficient to make DCMS interoperable with the new system than to enhance E-Tran. At the time of our 2014 report, SBA anticipated that SBA One would be operational by early 2015. Second, SBA told us that it received feedback from lenders on challenges that could discourage lenders from participating in the program, but documentation of the feedback was limited. In March 2010, SBA organized a forum with 11 lenders in the Gulf Coast to obtain their views on IDAP. Lenders stated the program had to have a simple eligibility determination and confirmation that a potential borrower had applied for an SBA disaster loan before the lender would approve an IDAP loan. Lenders also expressed concerns about the possibility of guarantee denials if an applicant did not take out an SBA disaster loan. According to SBA, in 2010 the agency also conducted conference calls with Iowa lenders who expressed similar concerns about IDAP. However, SBA did not document either the Gulf Coast forum or the conference calls at the time of the events. Instead, SBA officials relied on the memory of staff present for these discussions. In response to our request for information on these efforts, in July 2014 SBA provided a one-page summary. The summary included a list of the Gulf Coast lenders but not of Iowa lenders, and the discussion of lenders’ concerns was minimal. In addition, according to SBA officials, in November 2012 the agency solicited informal feedback from lenders in Hurricane Sandy-affected areas about the usefulness of IDAP and its features. 
According to SBA officials, lenders were concerned about the statutory requirement that provides an applicant a minimum of 10 years to repay the IDAP loan if a loan through the Disaster Loan Program was not approved. Lenders expressed disinterest in servicing a small loan amount (up to $25,000) for a term that long. SBA officials noted that lenders typically did not offer small-dollar loans such as those made under IDAP. SBA’s IDAP regulations allow a lender to charge a borrower an optional application fee to recoup some of the loan processing costs, but the one-time fee may not exceed $250 and an IDAP lender generally may not charge a borrower any additional fees. According to SBA officials, they also did not document lender feedback from this outreach effort. SBA officials told us that they obtained feedback on IDAP requirements from three banks, although officials could recall the identity of only one bank. In July 2014, SBA officials told us that the agency was still trying to conduct the IDAP pilot by attempting to identify solutions to increase lender participation. However, officials noted that the lenders with which they met were not willing to participate in IDAP (or an IDAP pilot) without changes to the statutory servicing term and the SBA regulatory fee. Based on lender feedback, SBA officials said that the current statutory requirements, such as the 10-year loan term, made a product like IDAP undesirable and that lenders were not likely to participate in IDAP unless the loan term were decreased to 5 or 7 years, for example. Congressional action would be required to revise statutory requirements, but SBA officials said they had not discussed the lender feedback with Congress. SBA officials also told us the agency planned to use IDAP as a guide to develop EDALP and PDAP, and until challenges with IDAP were resolved, did not plan to implement these two programs. 
As a result of not documenting, analyzing, or communicating lender feedback, SBA might have lacked reliable information to guide its own actions and to share with Congress about what requirements should be revised to encourage lender participation. Such information could be obtained by conducting further outreach to lenders and documenting this outreach in accordance with federal internal control standards, which state that all transactions and other significant events should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. We concluded that not sharing information with Congress on challenges to implementing IDAP might perpetuate the difficulties SBA faced in implementing programs intended to provide assistance to disaster victims. Therefore, in September 2014, we recommended that SBA conduct a formal documented evaluation of lenders’ feedback to inform both itself and Congress about implementation challenges and about statutory changes that might be necessary to encourage lenders’ participation in IDAP, and then report to Congress on the challenges SBA faced in implementing IDAP and on statutory changes that might be necessary to facilitate implementation. SBA officials recently provided us with a two-page summary of a discussion conducted with 23 lender and service provider participants in SBA’s 7(a) program—17 bank lenders, 3 certified development companies, and 3 lender service providers—at a National Association of Government Guaranteed Lenders conference in October 2014. Participants were provided general information on IDAP and were asked to comment on specific statutory and regulatory requirements related to loan terms, maximum allowable interest rates, and restrictions on lender-imposed application fees. According to SBA’s summary, participants expressed unwillingness to participate in a program with these requirements.
While SBA thus has taken one step to solicit and document lender feedback, it has not adopted a plan for the steps the agency will take to implement IDAP (and by implication, the other two loan programs) or to reach a determination on whether IDAP or the other loan programs should be implemented. Chairman Chabot and Ranking Member Velázquez, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Marshall Hamlett (Assistant Director), Vaughn Baltzly (analyst-in- charge), John McGrail, and Barbara Roesmann. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
With an estimated $67 billion in damage, Hurricane Sandy (October 2012) was the costliest Atlantic storm since Katrina in 2005. SBA administers the Disaster Loan Program, which provides physical disaster loans (to rebuild or replace damaged property) and economic injury loans (for working capital until normal operations resume) to help businesses and homeowners recover from disasters. This testimony discusses (1) the timeliness of SBA's disaster loans; (2) loan approval, withdrawal, and cancellation rates for selected previous disasters; and (3) the extent to which SBA implemented loan programs mandated by the Small Business Disaster Response and Loan Improvements Act of 2008. This testimony is based on GAO's September 2014 report (GAO-14-760) on SBA assistance to small businesses after Sandy. For that report, GAO analyzed SBA data on application processing, reviewed documentation related to SBA's planning, reviewed relevant legislation and regulations, and interviewed SBA officials. GAO provides updates on steps SBA has taken to implement GAO's recommendations. Following Hurricane Sandy, the Small Business Administration (SBA) did not meet its timeliness goal (21 days) for processing business loan applications. From receipt to loan decision, SBA averaged 45 days to process physical disaster loans and 38 days for economic injury loans. SBA did not expect early receipt of a high volume of loan applications and delayed increasing staffing—which in turn increased processing times. As of September 2014, SBA had not revised its disaster planning documents to reflect the effects that application volume and timing could have on staffing, resources, and forecasting models for future disasters. Federal internal control standards state that management should identify risks and take action to manage them.
Without taking its post-Sandy experience with application submissions into account in its disaster planning documents and analyzing the potential risks posed for timely response, SBA might be unprepared for similar situations in future disasters, which could delay getting loan funds to disaster victims. In June 2015, SBA provided GAO with an updated version of one disaster planning document—the Disaster Playbook—which includes discussion of early application volume and references to updated staffing models. GAO's review of these changes is ongoing. In comparison with the five disasters that generated the most SBA disaster loan applications since 2005, the loan approval rate after Sandy was not consistently higher or lower, but the application withdrawal and loan cancellation rates (32 percent and 38 percent, respectively) were consistently higher than for the other disasters. SBA approved 42 percent of business loan applications after Sandy, a rate lower than for Katrina, Rita, and Wilma, higher than for Ike, and comparable with that for Irene. For Hurricane Sandy and for previous disasters, SBA primarily declined business loan applications because of applicants' lack of repayment ability and unsatisfactory credit history. As of June 2015, SBA had not implemented the guaranteed disaster loan programs Congress mandated in 2008, including the Immediate Disaster Assistance Program (IDAP)—a bridge loan program in which private-sector lenders would provide disaster victims with up to $25,000 and an SBA decision within 36 hours of a lender's application on behalf of a borrower. In 2014, SBA officials told GAO they were trying to implement IDAP but had received some feedback from lenders that some program requirements—such as a statutory minimum 10-year loan term under certain circumstances—might discourage lender participation.
SBA had not conducted a formal documented evaluation of lender feedback to establish what implementation challenges the agency might face and to determine what, if any, statutory changes Congress could consider. Without an appropriately documented evaluation of lender feedback, SBA might not have reliable information with which to inform its own actions and its reporting to Congress about challenges with implementing the programs. In June 2015, SBA provided GAO with documentation of additional outreach performed in October 2014, where lenders provided specific feedback regarding current statutory requirements and proposed program requirements. SBA has yet to adopt a plan for how and whether it will proceed with IDAP implementation or document the challenges it would face in implementing the program. Therefore, SBA has not reported to Congress on these issues. In September 2014, GAO recommended that SBA revise its disaster planning documents, conduct a formal documented evaluation of lenders' feedback on IDAP, and report to Congress on challenges to implementing the program. SBA has since taken steps to revise its planning documents and received and documented some lender feedback, but has not reported to Congress.
This section describes the manner in which the U.S. military is organized to carry out its missions, how the military uses contractors to perform many essential services during contingency operations, and the existing air quality in Afghanistan and Iraq. The U.S. command structure in each nation has evolved over time. To perform its military missions around the world, DOD operates geographic combatant commands that conduct activities within assigned areas of responsibility. Combatant commanders oversee U.S. military operations that take place within their area of responsibility. CENTCOM extends from the Middle East to Central Asia, including Afghanistan and Iraq. In Afghanistan, American forces fought as part of the International Security Assistance Forces (ISAF), a multinational strategic unit. The Combined Joint Task Force (CJTF), which was subordinate to ISAF, was responsible for the command and control of operations throughout Afghanistan. In 2009, the U.S. troops’ designation became U.S. Forces-Afghanistan (USFOR-A). According to administration estimates, as of September 2010, about 104,000 American troops, including 30,000 reinforcements that were announced in December 2009, were deployed in Afghanistan. The United States plans to begin withdrawing troops from Afghanistan in July 2011. American forces fighting in Iraq originally came under a similar dual command structure. Multinational Forces-Iraq (MNF-I) was the strategic component. It housed a multinational staff that included logistics, procurement, intelligence, combat operations, and engineering, among other things. The engineering staff, with input from health officials, had responsibility for developing the policies that governed the management of solid waste in Iraq. In addition, Multi-National Corps-Iraq (MNC-I) constituted the operations component of the Iraq command structure. 
It, too, had a multinational staff that roughly paralleled the MNF-I staff, although it focused more on day-to-day operational issues. On January 1, 2010, MNF-I and MNC-I merged to form U.S. Forces-Iraq (USF-I). By August 31, 2010, about 65,000 American combat troops will have withdrawn from Iraq, reducing U.S. troop levels to about 50,000. The United States’ presence in Iraq is scheduled to end no later than December 31, 2011. The U.S. military relies on civilian contractors to provide supplies and services, including managing some burn pits, in support of its contingency operations in Afghanistan and Iraq. Kellogg, Brown, and Root (KBR) has provided burn pit services in Iraq through the Logistics Civil Augmentation Program (LOGCAP) III contract. On April 18, 2008, DOD announced the Army had awarded LOGCAP IV contracts to DynCorp International, Fluor Intercontinental, and KBR. The transition of requirements from the LOGCAP III contract to the LOGCAP IV contracts is ongoing; the LOGCAP IV contracts will be used for combat support services in Afghanistan, including burn pit management. KBR retains responsibility for burn pit support in Iraq, as well as a role in aiding the transition from LOGCAP III to LOGCAP IV in Afghanistan. Typically, contractors such as KBR, DynCorp, and Fluor work under task orders. The task order process begins when a military customer, such as a commander in Afghanistan or Iraq, identifies a need, such as assistance in managing a burn pit. This need is documented in a task order statement of work, which establishes the specific tasks for the contractor and the time frames for performance. In the case of contracting for burn pit support, the customer contacts its contract program management office (the contract office), which obtains a cost estimate from a contractor and provides the cost information to the customer.
If the customer decides to use the contractor’s services, the contract office obtains funding and finalizes the statement of work, and the contracting officer issues the task order and a notice to begin work. If the customer identifies a change in need, the process begins anew. Additionally, the military services, as well as DCMA, perform contract management functions to ensure the government receives quality services from contractors at the best possible prices. Customers identify and validate the requirements to be addressed, evaluate the contractor’s performance, and ensure that the contract is used in economical and efficient ways. The contracting officer is responsible for providing oversight and management of the contract. The contracting officer may delegate some oversight and management functions to DCMA, which may then assign administrative contracting officers to (1) provide on-site contract administration at deployed locations and (2) monitor contractor performance and management systems to ensure that cost, product performance, and delivery schedules comply with the terms and conditions of the contract. DCMA administrative contracting officers may have limited knowledge of field operations. In these situations, DCMA normally uses contracting officers’ technical representatives who have been designated by their unit and appointed and trained by the administrative contracting officer. They provide technical oversight of the contractor’s performance, but they cannot direct the contractor by making commitments or changes that affect any terms of the contract. Air pollution in Afghanistan and Iraq is generally high. For example, the level of particulate matter is higher in Afghanistan and Iraq than in the United States. Particulate matter includes coarse particles between 2.5 and 10 micrometers in diameter, as well as fine particles smaller than 2.5 micrometers.
Particle pollution may contain a number of components, including acids, organic chemicals, metals, and soil or dust particles, according to the Environmental Protection Agency (EPA). The size of particles is directly linked to their potential for causing health problems. Both coarse and fine particles pass through the throat and nose and enter the lungs. Fine particles can also become deeply embedded in lung tissue. Health problems associated with particle pollution identified by EPA include irritation of the airways, coughing, or difficulty breathing; decreased lung function; aggravated asthma; development of chronic bronchitis; irregular heartbeat; nonfatal heart attacks; and premature death in people with heart or lung disease. According to DOD, sources of particulate matter include dust storms, dust from vehicle disturbance of the desert floor, emissions from local industries, and open pit burning conducted by Afghans, Iraqis, and American troops. Since the beginning of hostilities in Afghanistan (2001) and Iraq (2003), the military has relied heavily on open burn pits to dispose of the large quantities of solid waste generated at its installations, but CENTCOM did not develop comprehensive guidance on operating or monitoring burn pits until 2009, well after both conflicts were under way. Furthermore, our site visits and review of contractor documentation found that burn pit operators did not always comply with this guidance. In addition, DOD health officials said that many items now prohibited from burn pits, such as plastics, have been routinely burned at U.S. military bases from the start of each conflict. Prior to 2004, the military used burn pits exclusively to handle waste disposal needs in Afghanistan and Iraq. Beginning in 2004, the military began to introduce alternative waste disposal methods. 
For example, according to DOD officials, between 2005 and 2010, there was a large increase in the number of operational solid waste incinerators in both countries. We discuss incineration issues and other alternatives to open pit burning in more detail in the next section of this report. Nonetheless, as of August 2010, burn pits remained an important waste disposal method for the U.S. military in both wars. According to DOD officials, the military’s reliance on open burn pits is primarily the result of their expedience, especially in the early phases of both wars when combat operations were most intense. Although senior DOD officials said virtually every U.S. military installation in both countries has used burn pits, it is difficult to determine the number of burn pits in use at any given time. First, CENTCOM does not routinely collect such data. In fact, to respond to our request for information, CENTCOM had to query individual base commanders to determine the number of burn pits in each country. In addition, the exact number of active burn pits in both countries varies with fluctuations in the number of bases. As U.S. troops leave Iraq and additional troops arrive in Afghanistan, these totals change. In November 2009, CENTCOM reported 50 active burn pits in Afghanistan and 67 in Iraq. However, by April 2010, those numbers had changed to 184 and 52, respectively. By August 2010, there were 251 active burn pits in Afghanistan and 22 in Iraq. Bases in both countries also vary substantially in their size, resulting in varying amounts of solid waste. For example, large bases may house 5,000 or more U.S. servicemembers, as well as U.S. civilian contractors, while a patrol base may house only about 150 troops. Relatively small bases, such as patrol bases, are likely to rely on open burning for their solid waste disposal needs. 
Major bases, such as Bagram (Afghanistan) and Balad (Iraq), may employ alternatives, such as incinerators, to handle a substantial portion of their solid waste disposal. Although DOD has long recognized the dangers of open pit burning—June 1978 waste management guidance states that U.S. personnel should not burn solid waste unless there is no other alternative, in part because of the environmental dangers it poses—CENTCOM and its subordinate commands did not provide comprehensive instructions on managing and operating burn pits or minimizing these dangers until 2009. Earlier guidance was largely limited to noting the inherent dangers of open burning and suggesting that various alternatives—such as landfills and pollution prevention—be used instead. For example, an Army Technical Bulletin on Guidelines for Field Waste Management, dated September 2006, notes that troops should use open burning only in “emergency situations,” because it can lead to “significant environmental exposures.” However, this bulletin provides only minimal guidance on how to employ open burning in emergency situations so as to lessen its acknowledged risks and avoid exposing U.S. servicemembers, civilian contractors, and local nationals to them. According to a former senior military engineer stationed in Baghdad from 2005 to 2006, the lack of specific burn pit guidance may have stemmed, at least in part, from the command structure in Iraq not having the engineering expertise on hand to develop such guidance, and from it being organizationally unclear which command unit (engineers or health professionals) was responsible for developing it. As a result, MNC-I policies and procedures did not emphasize solid waste management. When MNC-I established a dedicated engineering staff in 2005, it began developing more comprehensive environmental policies for Iraq, with advice from the Army Public Health Command.
According to the former senior military engineer, the dedicated engineering staff included about 100 engineers, with about 20 to 30 staff—including one environmental specialist—focusing on environmental guidance for Iraq. One of their points of emphasis was to develop limited instructions for operating burn pits. In 2006, the engineering staff developed environmental policies to cover each of the environmental issues of concern, including hazardous and solid wastes, among other things. CENTCOM issued these policies as fragmentary orders (FRAGO) to U.S. forces operating in Iraq. The solid waste FRAGO included limited guidance on burn pit operations. These FRAGOs were consolidated into a single document, entitled MNC-I Environmental Standard Operating Procedure 2006, that discouraged the use of burn pits as a method of waste disposal. The development of this guidance also advanced some environmental practices, such as the segregation of waste to facilitate reuse and recycling efforts. However, MNC-I Environmental Standard Operating Procedure 2006 did not include comprehensive policies for operating or monitoring burn pits. In April 2009, MNC-I revised the 2006 guidance to standardize procedures for environmental compliance and to provide environmental guidance to U.S. forces and their support units, including civilian contractors, operating in Iraq. MNC-I Environmental Standard Operating Procedure 2009 provides specific guidance for the handling of solid waste during contingency operations, emphasizing source reduction, waste minimization, and recycling as the most appropriate means of handling solid waste. It also describes burn pits as an expedient means to destroy solid waste during contingency operations. However, the guidance explicitly forbids open burning unless the base commander authorizes it in writing.
In addition, it provides guidance on siting burn pits, securing them, managing burn pit ash, and overseeing open burning, among other things. In particular, it details waste items prohibited from destruction in burn pits, including hazardous waste, batteries, tires, electronics, and appliances, among other things. In September 2009, USFOR-A issued guidance to provide overarching environmental direction and best management practices for use during contingency operations in Afghanistan, including specific instructions for operating burn pits. According to senior military officers, the issuance of this guidance coincided with the arrival of a Joint Force Engineer Command in Afghanistan. Consistent with earlier waste disposal guidance, including MNC-I Environmental Standard Operating Procedure 2006 and 2009, USFOR-A guidance stipulates that open burning is the least preferred method of solid waste disposal and that troops should use it only until they can develop more suitable capabilities. According to this USFOR-A guidance, the ultimate goal for enduring bases in Afghanistan is to transition to composting and recycling, to nearly eliminate the need for all forms of incineration, including burn pits. Further, this guidance states that, while U.S. forces may use burn pits early in contingency operations as an expedient way to control waste, “open burning will not be the regular method of solid waste disposal.” It also establishes several criteria to control and manage the air emissions associated with burn pit operations, including general guidelines for burning and a list of prohibited items. Some of the USFOR-A prohibited items mirror those from MNC-I. For example, both lists include hazardous waste, oils, and tires. However, USFOR-A guidance also includes pesticide containers, asphalt shingles, treated wood, and coated electrical wires, among other things, not specifically listed in the MNC-I guidance. 
The MNC-I guidance requires plastics to be segregated for recycling, while the USFOR-A guidance explicitly bans plastics from burn pits. Also in September 2009, CENTCOM issued Regulation 200-2 to provide environmental guidance and best management practices for U.S. bases in CENTCOM’s area of responsibility during contingency operations. The regulation provides U.S. military and civilian personnel detailed guidance for managing environmental concerns, such as hazardous materials, regulated medical waste, spill response, and solid waste, among other things. According to CENTCOM officials, the regulation provides the minimal acceptable standards for solid waste disposal, including burn pit operations, for all U.S. bases under its command, including those in Afghanistan and Iraq. The regulation applies to all CENTCOM elements engaged in contingency operations throughout CENTCOM’s area of responsibility, including all servicemembers, DOD civilians, and DOD contractors. Generally, the regulation’s requirements are more stringent than the nation-specific guidance contained in MNC-I and USFOR-A. For example, the regulation excludes more items from burn pits than the MNC-I or USFOR-A standard operating procedures. According to CENTCOM officials, one of the main reasons for developing its 2009 regulation was to codify and expand the burn pit requirements in MNC-I Environmental Standard Operating Procedure 2009 and USFOR-A Standard Operating Procedure 2009. CENTCOM officials said that a CENTCOM regulation carries more weight and, thus, is more easily enforced than subordinate commands’ standard operating procedures. Further, CENTCOM’s 2009 regulation states that subordinate command guidance may be used when base commanders deem “additional environmental guidance” necessary to “supplement” the regulation.
As such, subordinate command guidance provides commanders in Afghanistan and Iraq flexibility to increase waste disposal requirements to meet unique needs in their respective areas of operation, as long as they meet the minimum direction in the regulation. In October 2009, Congress enacted the National Defense Authorization Act (NDAA) for Fiscal Year 2010. Section 317 of the act requires DOD to prescribe regulations prohibiting the disposal of covered waste in open-air burn pits during contingency operations except in circumstances in which the Secretary of Defense determines that no alternative disposal method is feasible. In March 2010, in response to section 317 of the NDAA, DOD issued Directive-type Memorandum (DTM) 09-032 prohibiting the disposal of covered waste in open-air burn pits during contingency operations except when the relevant commander of a combatant command makes a formal determination that no alternative disposal method is feasible. According to DTM 09-032, once the relevant field commander makes such a determination, the commander must forward the determination in writing to the Under Secretary of Defense for Acquisition, Technology, and Logistics so that it arrives within 15 calendar days of making the determination. The Under Secretary is to submit the determination to the Senate and House Armed Services Committees within 30 days of the commander’s decision. The commander must also provide a justification to the Under Secretary to continue open-air burning for each subsequent 180-day period during which the base plans to burn covered waste in burn pits. The Under Secretary must also forward these justifications to the Senate and House Armed Services Committees. 
The DTM 09-032 exception process may appear to institute less stringent controls over open-air burning than CENTCOM’s 2009 regulation because it allows such burning when commanders deem it necessary, while the regulation does not authorize the disposal of prohibited items in burn pits under any circumstances. However, a senior DOD official said that, despite the prohibitions in CENTCOM’s 2009 regulation, information gathered from field commanders led him to conclude that disposal of prohibited items in burn pits had become routine at many bases in Afghanistan and Iraq. According to this senior official, the DTM 09-032 exception process may give field commanders incentives to seek and employ alternatives to burn pits rather than attempt to justify continued burning. As of July 2010, no field commanders in Afghanistan or Iraq had sought permission to burn covered waste in burn pits. According to a senior DOD official, the DTM is a worldwide policy that applies to all DOD components, including CENTCOM. As a result, CENTCOM must comply with DTM 09-032 and, to the extent that its 2009 regulation does not conflict with the DTM, with any additional measures in the regulation. The DTM’s requirement that a commander of a combatant command make a formal determination that there is no feasible alternative to disposing of covered waste in a burn pit, and the associated congressional notification, applies only to wastes covered under the DTM. However, burn pit management in CENTCOM’s area of responsibility must adhere to both documents. Thus, for example, the list of items that CENTCOM’s 2009 regulation prohibits from burn pits remains in effect, even though it is not identical to the list of covered wastes in the DTM. Table 1 compares the key elements of burn pit guidance developed by MNC-I, USFOR-A, CENTCOM, and DTM 09-032 that are relevant to the issues Congress identified in NDAA section 317. DOD and CENTCOM officials, as well as senior military officers, acknowledged that U.S.
forces have not always adhered to relevant guidance, and that prior to 2009, many items CENTCOM’s 2009 regulation now prohibits from burn pits, including regulated medical waste, hazardous waste, and substantial quantities of plastic, were routinely disposed of in burn pits. However, according to these officials, options for waste disposal, other than burning, were limited early in both wars. This was particularly true when combat operations were under way, as troop safety and mission success outweighed environmental concerns. DOD officials said that, as threat levels decreased, the military began working to replace burn pits with more environmentally sound methods of waste disposal. Between January and March 2010, we determined that, to varying degrees, the four burn pits we visited at bases in Iraq—one operated by military personnel and three operated by contractor personnel—were not managed in accordance with CENTCOM’s 2009 regulation. For example, we determined that operators at all four of these burn pits burned varying amounts of plastic—a prohibited item that can produce carcinogens when burned. Al Asad appeared to have only trace amounts of plastic in its burn pit, while at Warhorse, despite some limited waste sorting efforts, a burn pit operator said they did not segregate plastic from the waste stream. We found similar variability in the bases’ use of incinerators. For example, Al Asad and Taji had solid waste incinerators in operation to supplement their burn pits, but Marez and Warhorse did not. Although all four bases had programs in place to sort waste prior to burning in an effort to avoid burning prohibited material, or to remove anything that could be used against U.S. forces, Al Asad and Taji devoted more resources to sorting waste than Marez and Warhorse. This variability in meeting the key health protection provisions of the CENTCOM 2009 regulation means many U.S.
personnel—military and civilian—may face greater risks from burn pit emissions in their day-to-day activities. Table 2 provides our analysis of each base’s adherence to the health-related burn pit provisions of CENTCOM’s 2009 regulation. The variability in implementation of CENTCOM’s 2009 regulation at the bases we visited stems from several causes. First, environmental officials at one of the four Iraq bases we visited—Warhorse—said they were unaware of the regulation and its requirements for burn pit operations. The two servicemembers who managed the Warhorse burn pit said they used a standard operating procedure document provided to them when they began managing the burn pit in August 2009. According to one of the servicemembers, the main purpose of this guidance was to direct their dealings with contractors delivering waste to the burn pit. Without an awareness or understanding of relevant guidance, burn pit operators are severely limited in their ability to minimize the risks of exposure to potentially harmful burn pit emissions. Second, adherence to the regulation and other guidance is difficult, according to DOD officials, because many of the supplies arriving on U.S. bases are either made of, or packaged in, materials that are prohibited from burn pits. For example, drinking water arrives in plastic bottles, shrink wrapped in plastic. We discuss procurement issues in more detail later in this report. Third, the contractor operating the burn pits at two bases we visited did not have contracts reflecting current guidance. According to a senior representative of this firm, MNC-I Environmental Standard Operating Procedure 2006 is the guidance referenced in its burn pit contract. Thus, the company managed its Iraq burn pit activities under that guidance, which contains less stringent requirements than the CENTCOM 2009 regulation.
According to the contractor’s representative, the company prepared plans, which DOD reviewed and approved, based on the MNC-I 2006 guidance. However, DOD subsequently made an official request that the contractor incorporate MNC-I Environmental Standard Operating Procedure 2009 into its operations. According to Army contracting specialists, such contract modifications are typically long and tedious, often requiring months of negotiations. As of June 2010, DOD and the contractor had yet to finalize this update, at least in part because the contractor believed the new guidance would require activities beyond the scope of existing task orders. Finally, implementation of the regulation also varied because of disparities in the resources devoted to burn pits and in the commitment shown by base commanders and environmental officers. For example, all four of the burn pits we visited had programs to sort incoming waste to avoid burning prohibited items and to remove anything that could be used against U.S. forces. However, the amount of resources devoted to this activity varied substantially. At Al Asad, for example, a commissioned officer oversaw all burn pit and incinerator activities. At this base, an Iraqi contractor under U.S. servicemembers’ supervision sorted waste before it went into the burn pit, segregating certain waste for recycling, such as large plastics, metals, wood, mattresses, rubber, and reusables (such as furniture). This process required a crew of 15 to 20 people and took all day. Some sorting also occurred before waste arrived at the burn site. For example, contractor personnel sorted dining facility waste at the dining facility; then, wet waste went directly to the landfill and recyclables went directly to the recycling area.
Essentially, only dry and combustible materials, such as wood and paper, went into the Al Asad burn pit, although according to the officer-in-charge, there were a few instances when small amounts of prohibited items, such as plastic, slipped through and were burned. In contrast, at Warhorse, a warrant officer oversaw the burn pit with a staff of five enlisted servicemembers. Warhorse did not employ local contractors to assist in sorting the daily waste. As a result, according to the warrant officer in charge, sorting the base’s solid waste each day was a challenge. While they attempted to sort and segregate the waste each day, the warrant officer in charge said the job was simply too large for five people. They had no machinery or equipment with which to move the waste, so they performed a cursory visual inspection. Further, the official said that the staff had other responsibilities at the burn site; therefore, they sorted waste for only about 2 hours per day. Our visit to Al Asad demonstrated that strong leadership and adequate resources can enhance a base’s ability to meet the provisions of CENTCOM’s 2009 regulation, and thereby help protect personnel from exposure to potentially harmful burn pit emissions. For example, the commissioned officer in command of Al Asad’s burn pit is an environmental engineer, professionally trained for the task. None of the staff in charge of the other three burn pits we visited had such training. In addition, with the local contractor’s staff, servicemembers at Al Asad had ample personnel on site to meet most of the regulation’s provisions, including implementing waste disposal alternatives. Waste management practices such as source reduction, recycling, incineration, and landfilling offer alternatives for managing DOD’s wartime waste stream, decreasing its volume and potential toxicity, and reducing the potential health impacts of burn pits at U.S. bases in Afghanistan and Iraq.
However, DOD has not evaluated the benefits and costs of these waste management alternatives relative to its existing practices, leading to a lack of key information to manage its solid waste. Source reduction and recycling—also referred to as waste minimization—and the use of incinerators and landfills are alternatives for managing the waste stream, decreasing its volume and potential toxicity, and reducing the potential health impacts of burn pits. Senior DOD officials and guidance we reviewed described a successful approach to solid waste management as first characterizing the waste stream to identify its contents and volumes of materials and then evaluating ways to integrate these waste management alternatives. DOD guidance discourages long-term use of burn pits and encourages the use of incinerators and landfills instead. CENTCOM’s 2009 regulation and Army Regulation 200-1 provide definitions of waste management alternatives. Source reduction, which differs from recycling, is defined as any practice reducing the amount of contaminants entering the waste stream. Recycling is the process by which materials, otherwise destined for disposal, are collected, reprocessed or remanufactured, and eventually reused. CENTCOM’s Regulation 200-2 defines an incinerator as any furnace used in the process of burning solid or liquid waste for the purpose of reducing the volume of the waste by removing combustible matter, with emissions passing through a stack, duct, or chimney. A solid waste landfill is defined as a discrete area of land or an excavation used to dispose of non-hazardous waste. Table 3 illustrates the solid waste management practices implemented at U.S. bases in Iraq at the time of our visit. Although DOD has partially characterized the waste stream at Bagram, Kandahar, and Camp Victory, it has not fully characterized the waste stream at any of its bases in either Afghanistan or Iraq as outlined in Army technical guidance.
DOD has also been slow in implementing waste management alternatives because other logistical and operational priorities took precedence over environmental programs, according to CENTCOM officials. Specifically, DOD has not widely implemented practices such as source reduction and recycling at its bases in either country, despite the fact that units subject to the MNC-I and USFOR-A Environmental Standard Operating Procedures issued in 2009 were strongly encouraged to implement such practices. Source reduction involves more than base command decisions; it also includes procurement policies and processes that encompass a broad and complex set of DOD logistics and acquisition communities. Yet many of the materials from DOD’s supply chain that end up in DOD’s waste stream may adversely affect base commanders’ efforts to minimize waste, especially waste that CENTCOM’s 2009 regulation prohibits in burn pits. For example, in March 2010, CENTCOM officials said USF-I tasked a contractor to begin evaluating ways to reduce the amount of solid waste generated at base dining facilities in Iraq, such as plastic utensils, plates, and containers. These materials are incompatible with DOD’s guidance on burn pit requirements because of the large volume of plastic they leave in the waste stream. However, no decisions to limit procurement of these materials and reduce this waste had been made as of July 2010. DOD’s recycling practices at its bases in Afghanistan and Iraq were also limited and primarily involved large scrap metals. Our site visits to the four U.S. bases in Iraq found that only Al Asad recycled both aluminum and plastic materials in addition to scrap metal. CENTCOM officials and military personnel said that both Afghanistan and Iraq lacked markets for plastic and other recyclable materials, and military officers at one base we visited in Iraq said plastic materials from some U.S. bases in Iraq were transported to Kuwait and Lebanon for recycling.
However, our review found that such markets may exist in Iraq. For example, military personnel at Al Asad said that aluminum and plastic were purchased by an Iraqi contractor and sold for profit in Iraq. Further, a May 2010 USF-I recycling plan called for initiating recycling contracts at seven bases in Iraq in support of USF-I’s plan to eliminate the use of burn pits in Iraq. These contracts are to include the recycling of aluminum, appliances, cardboard, plastic, and wood materials and were expected to be implemented in September 2010, according to USF-I officials. USF-I officials reported that recycling these additional materials will reduce solid waste generated at U.S. bases by 30 percent, supporting a USF-I goal to eliminate the use of burn pits in Iraq by December 31, 2010. Table 4 identifies materials recycled at U.S. bases in Iraq as of June 2010. U.S. bases in Afghanistan have not developed recycling programs to the extent that such programs have been developed in Iraq. Larger bases in Afghanistan, such as Bagram Air Field and Kandahar Air Field, have implemented recycling programs for plastic bottles, aluminum cans, cardboard, paper, steel, wood, and other plastics such as flatware and cereal cups, according to USFOR-A reports. However, USFOR-A officials said that there is little recycling occurring at its other bases because they are often located in remote areas lacking an infrastructure to support markets for recycled materials. CENTCOM officials said that it is often easier to burn waste than to implement an efficient recycling program, which would include managing a sorting facility, sorting the solid waste, locating markets for recycled products, and having trained environmental officers at a base. As mentioned above, DOD has begun relying more heavily on incinerators as an alternative to burn pits. For example, between 2005 and 2010, the number of solid waste incinerators installed in Iraq under LOGCAP grew from 2 to 39.
In Afghanistan, the number increased from 1 to 20 between 2003 and 2010. According to DOD officials, incinerators are the best combustive alternative to open burn pits because of their (1) enclosed combustion chambers that provide a more complete burn, (2) ability to reduce large volumes of waste, and (3) ability to handle multiple waste streams. However, despite the more controlled process for burning waste, incinerators may also produce potentially harmful emissions. There are three main types of incinerators: solid waste, regulated medical waste, and hazardous waste incinerators. Burn boxes, a type of incinerator device designed for wood waste materials, are also used at some locations. However, burn boxes differ from solid waste incinerators because they do not contain a dual combustion chamber or a stack for dispersing emissions and are not designed for solid waste, such as food or plastic. Figure 5 illustrates a solid waste incinerator. DOD officials reported challenges using incinerators in Afghanistan and Iraq, stating that incinerators were expensive and posed acquisition, logistical, and operational challenges. Regarding acquisition, DOD purchased more than 40 solid waste and medical waste incinerators for U.S. bases in Afghanistan and Iraq between 2003 and 2005. However, according to senior DOD officials, approximately 100 construction projects initiated under LOGCAP III were suspended by DOD in 2005, including the installation of 11 incinerators in Iraq, because DOD identified a lack of internal spending controls on LOGCAP III projects. This led to incinerators remaining uninstalled at bases in Iraq for approximately 5 years, until March 2010 when the USF-I engineer command ordered the installation of the 11 incinerators by July 2010. As of August 2010, there were 39 solid waste incinerators installed in Iraq, according to LOGCAP data. Two of the four bases we visited in Iraq had solid waste incinerators on-site, all of which were supported by LOGCAP. 
At Taji, solid waste incinerators began operation in February 2009, and at Al Asad, solid waste incinerators began operation in April 2009. According to CENTCOM officials, once the United States’ presence in Iraq ends, all solid waste incinerators will be transferred to the government of Iraq. Logistical challenges included the transportation of incinerators, the availability of land to install them, and the life expectancy and size of a base, which fluctuate, according to senior DOD officials. For example, in Afghanistan, CENTCOM officials said that incinerators arrived by ship in Pakistan and were loaded onto contractor vehicles for delivery to U.S. bases. CENTCOM officials also reported that the lead time to get an incinerator to a U.S. base in Afghanistan ranged from 6 to 8 months, and that there were operational concerns as well. For example, once an incinerator arrived, it had to be inspected, set up, and operated by trained personnel. CENTCOM officials said that there is generally a training program for operating personnel to complete before operations begin. In addition, DOD officials said that U.S. military servicemembers did not operate incinerators, and that operations were left primarily to contractors. Senior DOD officials said that many bases conduct incinerator operations 24 hours a day. In early 2010, USFOR-A developed plans to use incinerators at its bases in Afghanistan and, as of June 2010, there were 20 solid waste incinerators operational and 46 awaiting installation, as well as 11 burn boxes that were operational and 2 awaiting installation. DOD data also show that 114 additional solid waste incinerators should arrive incrementally in Afghanistan by the end of calendar year 2010. The types of incinerators installed at bases in Afghanistan differ from those at bases in Iraq; they are smaller, with burn rates ranging from 1 to 20 tons per day, and most are portable.
The portability provides USFOR-A commanders with the flexibility to relocate incinerators as bases close or as generated waste capacities fluctuate. In Iraq, our site visits found that incinerators and burn boxes were not always operated according to CENTCOM’s 2009 regulation and instead were operated according to the MNC-I guidance documents issued in 2006 and 2009. The incinerators at Taji were operated by a LOGCAP contractor under the MNC-I Environmental Standard Operating Procedure 2006. However, the MNC-I Environmental Standard Operating Procedure 2006 does not include specific guidance on incinerator operation and maintenance, prohibited items, or methods for testing and disposing of incinerator ash. Though not required under the 2006 guidance, military personnel at Taji reported that preventive medicine personnel routinely tested the incinerator ash and submitted samples to the Army Public Health Command for laboratory analysis, assessment, reporting, and data archiving. At Al Asad, we observed that incinerators were operated in accordance with MNC-I Environmental Standard Operating Procedure 2009, which provides additional guidance on incinerator operation and maintenance, prohibited items, and methods for testing and disposing of ash. USFOR-A officials and a DOD environmental plan completed in March 2009 reported that burn boxes in Afghanistan are used to combust various types of solid waste, including wet waste and wood products. Burn boxes were designed to burn certain, but not all, wood products. However, CENTCOM’s 2009 regulation provides that incinerators and burn boxes must be used in accordance with the manufacturers’ instructions. For example, the DOD environmental study reported that burn boxes at Bagram Air Field were used to combust hundreds of tons of solid waste per day from January to July 2008. 
The use of burn boxes to combust solid waste conflicts with recommendations made by the CENTCOM Surgeon and the Army Public Health Command that burn boxes be replaced with incinerators designed for solid waste. The Army Public Health Command’s recommendation resulted from a 2001 environmental assessment of burn boxes at Camp Bondsteel, Kosovo, which determined that burn boxes used to combust wet waste and plastic materials produced air emissions exceeding the long-term military exposure guidelines for coarse particles, and which concluded that burn boxes should be replaced with appropriate incinerators designed for solid waste. Landfills can facilitate the use of incinerators by providing an alternative disposal option for certain items that hinder efficient combustion and a location for disposal of incinerator ash. For example, landfills are used at larger U.S. bases in Afghanistan and Iraq to dispose of solid waste, including ash from incinerators as well as various non-combustible items such as dining facility waste. Senior DOD officials said that disposing of dining facility waste in landfills removes the waste from burn pits and incinerators, which improves combustion. For example, military personnel at Al Asad said that dining facility waste was diverted to a landfill and reported that both the incinerators and the burn pit improved their burn efficiency as a result. In addition, DOD officials reported that larger bases also diverted the overflow of solid waste—initially sent to incinerators—to a landfill because the amount of solid waste generated at larger bases exceeded the incinerators’ capacity. However, challenges with landfills include the availability of land, high water tables, scavenging, and the need for proper lining to prevent waste materials from seeping into surrounding soil and possibly contaminating ground water, according to DOD officials.
Three of the four bases we visited in Iraq used a landfill to dispose of solid waste, although only Al Asad used a lined landfill. In April 2010, as part of its requirements under the National Defense Authorization Act for Fiscal Year 2010, DOD reported to Congress that, during military operations, open air burning will be the safest, most effective, and most expedient manner of solid waste reduction until current research and development efforts produce better alternatives. DOD officials added that burn pits are the most cost-effective waste management practice and that incinerators are the best combustive alternative. However, DOD has not evaluated the benefits and costs of the waste management alternatives and compared them with the benefits and costs of its existing practices, or taken into account all the relevant cost variables, including the environmental and long-term health impacts that burn pits could have on servicemembers, civilians, and host country nationals. We discussed the costs of burn pits and solid waste incinerators with DOD contract management officials, military officers in both countries, and other DOD officials to determine the extent to which DOD has analyzed these costs. We determined that DOD does not have complete information on the costs to procure, install, operate, and maintain incinerators during contingency operations. In addition, DOD has not comprehensively analyzed alternative waste management practices, including the short- and long-term costs associated with their use. For example, overall cost figures are not readily apparent in the information submitted by LOGCAP contractors because burn pit and incinerator costs are combined with other waste management costs, by site, and because these data are not centrally managed or tracked. Although the military can request that contractors break out burn pit and incinerator cost data to facilitate cost analysis, no such analyses have been completed. 
Without comprehensive cost data and analysis, DOD does not have a sufficient basis to conclude that burn pits are the most cost-effective waste management practice or that incinerators are the best alternative to the use of burn pits. DOD officials said that, during wartime, environmental planning, including the management of waste, is not always a high priority because of the operational and logistical pressures, safety and security risks, and the overall lack of resources available initially to manage waste. Furthermore, DOD officials reported that base planning and resource investment decisions, including decisions about resources to manage waste, are difficult because bases are in constant flux during wartime operations. USFOR-A and USF-I have not established systems to sample or monitor burn pit emissions, as directed by CENTCOM’s 2009 regulation. While systems to monitor burn pits have not been established, preventive medicine and other personnel collected ambient air samples on many bases, some of which have active burn pits. However, in part because DOD and VA lack information on burn pit emissions and individuals’ exposure to burn pits, the potential health impacts of burn pit emissions on individuals are not well understood. Neither USFOR-A nor USF-I systematically samples burn pit air pollutants, as directed by CENTCOM’s 2009 regulation. Among other things, this regulation directs the establishment of systems to sample or monitor pollutants emitted from burn pits and incinerators and the documentation of potential exposures. Further, when burn pit sampling shows high levels of certain pollutants, the regulation directs relevant officials to determine the cause and identify solutions. Additionally, the regulation identifies substances that should be considered for sampling from burn pits at least yearly. 
These substances, and the health risks they pose as described by EPA or the Agency for Toxic Substances and Disease Registry, include the following:
Carbon monoxide—an odorless gas produced from burning various fuels that can cause dizziness, confusion, nausea, fainting, and death if exposed to high levels for long periods of time, according to EPA.
Dioxins—a class of chemicals that result from combustion and have been characterized by EPA as likely to cause cancer.
Particulate matter 10 and 2.5—coarse and fine particle pollution described earlier.
Polycyclic aromatic hydrocarbons—a group of chemicals that result from incomplete burning and can cause cancer in humans from long-term exposure through breathing or skin contact, according to the Agency for Toxic Substances and Disease Registry.
Hexachlorobenzene—a chemical by-product classified by EPA as a probable human carcinogen that may also damage the liver and cause skin lesions.
Volatile organic compounds (VOC)—gases emitted from paints, solvents, fuels, and other products that, according to EPA, may cause eye, nose, and throat irritation; headaches, loss of coordination, and nausea; and damage to the liver, kidneys, and central nervous system. Some VOCs are also suspected or known to cause cancer in humans, according to EPA.
Since 1978, DOD has recognized that burning waste in open pits is not environmentally acceptable. Some DOD guidance, such as DOD Instruction 6490.03 (2006) and the Joint Staff Memorandum MCM 0028-07 (2007), established provisions to identify and assess all potential occupational and environmental hazards, including documenting and characterizing the risks associated with potential environmental exposures. However, these documents preceded CENTCOM’s 2009 regulation and do not specifically direct U.S. forces to establish systems to sample or monitor burn pit pollutants. 
Regarding monitoring, officials with CENTCOM and the Army Public Health Command (APHC)—one of three service health surveillance centers that provide support and technical guidance to USFOR-A and USF-I on environmental sampling—said that, from a technical standpoint, monitoring burn pit emissions during contingency operations may not be possible, practical, or generally warranted for the purpose of characterizing health risks. They noted that the health risk assessment process requires ambient monitoring data at the locations where people are exposed to all hazards, regardless of source, and that sampling only at locations proximate to burn pits would not meet this need. Nevertheless, the CENTCOM regulation specifically directs the establishment of a system to sample or monitor pollutants emitted from burn pits and to document potential exposures. In describing the status of monitoring efforts and related challenges, a senior DOD official said that historic and current policy and guidance did not provide adequate details to ensure that U.S. forces systematically collect burn pit emissions data in either country. APHC officials also said the regulation’s monitoring provisions parallel U.S. domestic environmental regulations, which focus on monitoring and ensuring compliance with specific thresholds for various pollutants. However, the military does not approach emissions monitoring from that perspective. Rather, the military conducts exposure-based monitoring; that is, monitoring at locations where personnel may be exposed. To assess the potential health risk due to such exposures, the military uses Military Exposure Guidelines (MEG), which do not provide absolute limits on servicemembers’ exposure to specific substances. MEGs are chemical concentrations representing estimates of the level above which certain types of health effects may begin to occur in some individuals after continuous exposure for the duration reflected by the MEG. 
Thus, MEGs provide guidelines for various exposure time frames and health effect severity levels based on the concentration of chemical substances detected during ambient, or outdoor, air monitoring. According to DOD technical guidance, MEGs are an important tool to assist preventive medicine or other trained personnel in evaluating estimated levels of risk to servicemembers from chemical exposures during deployments. APHC officials said that instead of establishing systems to monitor burn pit emissions, ambient air monitoring should have been required. Such monitoring, according to the officials, could provide information on the overall air quality to which servicemembers are exposed, including emissions from burn pits. APHC officials said that when CENTCOM’s 2009 regulation was being drafted, they advised CENTCOM officials that compliance monitoring of burn pits would be difficult to implement, but that their feedback was not incorporated in the final version of the regulation. Given the disconnect between the sampling methodology proposed by APHC and the requirements included in the CENTCOM regulation, it is unclear whether the appropriate sampling will be done to collect data needed to understand servicemembers’ potential exposure to burn pit emissions and to identify and minimize potential health risks to servicemembers. While systems to monitor burn pit pollutants directly have not been established, preventive medicine and other personnel in Afghanistan and Iraq collected thousands of ambient air samples from at least 293 locations to conduct occupational and environmental health assessments, among other things. APHC officials said ambient air samples were collected from areas where routine servicemember exposure was anticipated. APHC officials said that, in some instances, samples were collected near burn pits if servicemembers were continually located in the area. 
Although samples may be taken near the burn pit, APHC officials said it was difficult to determine whether the pollutants collected originated from burn pits or another source, such as windblown soil, auto exhaust, or nearby industrial sources. As a result, ambient air monitoring alone cannot establish burn pits’ contribution to air quality problems. After ambient air samples are collected, they are sent to APHC for laboratory analysis and inclusion in the Defense Occupational and Environmental Health Readiness System (DOEHRS), an information system that stores environmental monitoring data, among other things. According to APHC officials, the specific substances and siting of the air samples collected vary by location, depending on factors such as the size of the base, potential environmental hazards, the personnel available to collect samples, and the professional judgment of the personnel involved in the sampling. If the concentrations of certain substances cause concern, preventive medicine personnel may recommend additional monitoring. Further, if a known environmental hazard, such as a burn pit, is present, APHC officials said that sampling may be adjusted to reflect the type of emissions expected from the potential hazard. For example, we reviewed air sampling data from Taji and Warhorse that the Army Center for Health Promotion and Preventive Medicine (now called APHC) collected in 2008 to help gauge the occupational and environmental health risk associated with deployments at these bases. The substances sampled at these bases differ substantially from one another. In our analysis of DOEHRS data provided in July 2010, we determined that since 2002, 2,285 ambient air samples were collected in Afghanistan, and since 2003, 5,723 ambient air samples were collected in Iraq. Figures 6 and 7 provide information on the number of ambient air samples collected in each country by year. 
In both countries, the largest number of ambient air samples was collected in 2009. In Afghanistan, the number of ambient air samples collected in 2009 was nearly twice the number collected in 2008. In Iraq, more ambient air samples were collected in 2009 than in any other year, although the difference between 2008 and 2009 was only 19 percent. Each ambient air sample may include various numbers and types of substances. The substances collected include volatile organic compounds, metals, and particulate matter. Other substances, such as polycyclic aromatic hydrocarbons and pesticides, were also collected. At the bases we visited in Iraq, the collected substances included metals and particulate matter. These substances partially correspond to the list of potentially harmful substances that CENTCOM’s 2009 regulation suggests sampling. Our analysis of the DOEHRS data also determined that several substances listed in CENTCOM’s 2009 regulation were infrequently collected, or not collected at all. For example, we determined that dioxins were collected at only two locations in Afghanistan and only one location in Iraq. APHC officials cited several reasons for sampling dioxins infrequently: specially trained personnel are needed to collect those samples; the equipment used to collect the samples requires continuous power, and meeting those power needs in contingency areas is difficult; and laboratory analysis of dioxin samples can cost several thousand dollars per sample. Additionally, APHC officials said that the results of a health risk assessment conducted at Joint Base Balad did not show levels of dioxins that would suggest further sampling was needed at other locations. We also determined that carbon monoxide—another substance the CENTCOM regulation states should be considered for monitoring around burn pits—was not sampled in either Afghanistan or Iraq. 
According to an APHC official, the instrument needed to collect ambient carbon monoxide samples is sophisticated and expensive and requires specially trained personnel to operate. Additionally, the only instrument in CENTCOM’s area of responsibility was in Kuwait, although DOD said it was procuring additional carbon monoxide monitors for use in Afghanistan and Iraq. The results of ambient air sampling by APHC showed that approximately 6.6 percent of the 30,516 tests for substances from the samples collected in Afghanistan exceeded relevant 1-year MEGs. In Iraq, approximately 3.9 percent of the 111,647 such tests showed exceedances of relevant 1-year MEGs. According to APHC officials, exceeding a 1-year MEG in one sample or periodically over time does not necessarily imply that the servicemembers at that location will suffer negative health impacts, because the MEGs were designed to protect against continuous exposures of up to 1 year in duration. Tables 5 and 6 provide the number of MEG exceedances by country and the substances sampled, and show that levels of fine and coarse particles almost always exceeded 1-year MEGs. Importantly, fine particles—which can become deeply embedded in lung tissue and are associated with the numerous health conditions described above—were the substance that most often exceeded the MEG. Figures 8 and 9 illustrate the distribution of fine particle test results relative to the MEG, and show that many test results from sampling in each country exceeded the MEG by a substantial margin. DOD does not systematically collect detailed information regarding individual servicemembers’ burn pit exposure. Similarly, VA does not focus on collecting or tracking health outcomes associated with exposure to burn pits. In the absence of data and information on burn pit emissions and individuals’ burn pit exposure, the potential health impacts of burn pit emissions on individuals are not well understood. 
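The MEG exceedance percentages above are simple proportions of exceeding tests to total tests. A minimal sketch of that arithmetic follows; the exceedance counts shown are assumptions back-calculated from the rounded percentages reported above, not figures taken from the report.

```python
def meg_exceedance_rate(exceedances: int, total_tests: int) -> float:
    """Percentage of substance tests that exceeded the relevant 1-year MEG."""
    return 100.0 * exceedances / total_tests

# Test totals (30,516 and 111,647) come from the sampling results above;
# the exceedance counts are hypothetical values back-calculated from the
# rounded percentages (approximately 6.6 and 3.9 percent).
afghanistan_rate = meg_exceedance_rate(2014, 30516)
iraq_rate = meg_exceedance_rate(4354, 111647)
print(round(afghanistan_rate, 1))  # ~6.6
print(round(iraq_rate, 1))         # ~3.9
```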
According to DOD guidance, it is the military’s responsibility to document and evaluate occupational and environmental health hazards during deployments, which includes accomplishing specific health surveillance activities before, during, and after deployments. Such surveillance includes identifying the population at risk through questionnaires and blood and other samples and recognizing and assessing potentially hazardous health exposures and conditions, among other things. Table 7 provides examples of the military’s health surveillance activities. Servicemembers may document exposure to burn pit emissions in several ways. For example, their responses to questions in post-deployment health questionnaires, which include a question related to environmental exposures, can establish a possible exposure to such emissions. In addition to health surveys, servicemembers may report any health issue they think resulted from an environmental exposure, including burn pits, to their military medical provider for documentation in the servicemember’s medical record. However, these surveillance efforts do not collect data on specific individuals’ level of exposure to burn pit emissions. Senior DOD officials said that systematically collecting data on individual-level exposures would require servicemembers to wear a collection device—which they said is beyond current technological capability. Senior VA officials said that VA’s efforts to properly care for veterans and handle their claims would be enhanced if DOD collected more individual- or population-level data on exposure to burn pits. According to senior VA officials, such data are needed to understand the link between environmental exposures and health outcomes. According to VA officials, there are no VA health surveillance activities that focus on collecting or tracking health outcomes associated with veterans’ potential exposure to burn pits. 
According to a senior VA official, VA’s surveillance of emerging health issues is driven by concerns veterans report at its healthcare centers. Veterans’ potential exposure to burn pits may be documented through encounters with the VA health care system when veterans receive acute or routine medical care. However, enrollment in VA health care is optional, and not all veterans choose to participate. Additionally, veterans who served in Iraq or at locations that support Operation Iraqi Freedom may report concerns regarding environmental exposure, including to burn pits, through the Gulf War Registry. The registry is a data system established after the first Gulf War to identify possible diseases resulting from military service in areas of Southwest Asia. Participation in the registry is voluntary, and not all Gulf War veterans choose to participate. Additionally, VA officials said they were developing a survey, to be administered to about 60,000 randomly selected veterans in 2010, that seeks to identify health concerns among Operation Enduring Freedom and Operation Iraqi Freedom veterans and will provide them with an opportunity to report any concerns they have regarding environmental exposures, including burn pits. VA officials said they expect the survey’s results to be available in 2011. The U.S. Army Center for Health Promotion and Preventive Medicine (now the Army Public Health Command) and the Air Force Institute for Operational Health (now the U.S. Air Force School of Aerospace Medicine) jointly conducted the studies of Joint Base Balad, described earlier, in response to concerns expressed by servicemembers about the possible health impacts of their exposures to burn pit emissions and to gain a better understanding of the situation at Balad. As noted above, we express no view in this report on the Balad studies because of ongoing litigation. 
Other studies have been initiated in response to concerns over servicemembers’ exposure to burn pit emissions expressed by Congress, the VA, and DOD leadership. For instance, in October 2009, the Acting Deputy Assistant Secretary of Defense for Force Health Protection and Readiness directed the Armed Forces Health Surveillance Center (AFHSC) to assist in efforts to understand the health effects associated with exposure to burn pit smoke by conducting additional epidemiological studies. In response to this directive, AFHSC expects to release a report in fall 2010 that presents the findings of several studies on burn pit exposure. One of these studies will compare acute and long-term health care utilization among servicemembers deployed to Korea, servicemembers deployed to one of four locations within CENTCOM, and never-deployed servicemembers based in the continental United States. The outcomes the study will examine include: post-deployment visits with medical staff for respiratory, circulatory, and cardiovascular disease, ill-defined conditions, and sleep apnea; self-reported responses on post-deployment health assessment forms; and visits with medical staff for respiratory conditions while deployed in the CENTCOM area of responsibility. AFHSC is using data from DOD’s Defense Medical Surveillance System and the Theater Medical Data Store, a medical information system that provides access to servicemembers’ battlefield medical treatment records, among other things. As another part of AFHSC’s fall 2010 report, the Naval Health Research Center (NHRC) will compare health outcomes in servicemembers who were exposed to burn pits at Joint Base Balad, Contingency Operating Base Speicher, and Camp Taji with those in servicemembers who had not been exposed to burn pits. 
The health outcomes this study will examine include: birth outcomes in offspring of military personnel; chronic and newly reported respiratory symptoms and conditions; chronic multisymptom illness; and the incidence of newly reported lupus and rheumatoid arthritis. Regarding the first health outcome, NHRC will use data from DOD’s Birth and Infant Health Registry, which collects data to establish the prevalence of birth defects and evaluate the associations of various birth outcomes with specific exposures, such as deployment, among infants born to military families. NHRC will also rely on data from the Millennium Cohort Study to examine the three other health outcomes. The Millennium Cohort Study is an ongoing DOD evaluation of the long-term health impacts of military service and has 140,000 participants who are active duty, Reserve, or Guard servicemembers. In addition, officials from APHC, the U.S. Air Force School of Aerospace Medicine, the Navy and Marine Corps Public Health Center, and the Naval Health Research Center are collaborating on an environmental health air surveillance plan to better understand the health risks of burn pits to servicemembers at specific locations in Afghanistan and Iraq. According to APHC officials, the purpose of the environmental health surveillance plan is to help quantify health risks from the air quality at particular locations with burn pits; the plan is not intended to provide a definitive determination of the burn pit-specific contribution to the overall health risk or to generate data to predict the future health of individual servicemembers. In July 2010, DOD officials said that prospective locations in Afghanistan had been selected for the environmental health surveillance plan. APHC officials said they anticipate implementing the plan at the selected locations in early 2011. 
APHC officials said that, after the environmental health surveillance plan is implemented and adjusted based on lessons learned, it could be adapted to other locations. Finally, according to senior VA officials, VA commissioned the Institute of Medicine to study and issue a report by spring 2011 on the potential health impacts of burn pit exposure. As of June 2010, the scope of the Institute of Medicine study had not been defined. However, in its charge to the Institute of Medicine, VA encouraged the Institute to examine the impacts of burn pits throughout Afghanistan and Iraq. The Department of Defense and its forces in Afghanistan and Iraq have increased their attention to solid waste management and disposal in both conflicts in recent years, including issuing comprehensive guidance on burn pit operations and pursuing some alternatives, such as installing incinerators at some bases. However, burn pits remain a significant waste disposal method in each conflict, largely because of their expedience, a lack of awareness of current guidance, and the fact that some contracts for burn pit operators do not reflect the most recent guidance. Moreover, the overall incidence of exposure of service personnel, contractors, and host country nationals to burn pits, and any related health outcomes, remains unclear. Furthermore, the fact that DOD and its forces in Afghanistan and Iraq have not implemented a more comprehensive air sampling and monitoring plan leaves DOD and other affected stakeholders without the benefit of potentially useful information on emissions that could help in characterizing risks from burn pit emissions and possibly determining whether pollutants detected in ambient monitoring stem from burn pits or other sources. 
Progress in implementing this plan and better understanding any health risks from burn pits has been hindered by unresolved concerns among Army public health officials about the feasibility of adhering to CENTCOM’s provisions for burn pit sampling and monitoring. In addition, by not characterizing its waste stream to identify its contents and opportunities for decreasing its toxicity and volume, DOD lacks information necessary to better incorporate waste minimization alternatives such as source reduction and recycling. Finally, while DOD has made limited progress in implementing alternatives to open pit burning, such as the installation of incinerators and new recycling contracts, it has not analyzed the feasibility or benefits and costs of alternatives relative to those of its current practices. As a result, DOD lacks the information it needs to make informed decisions about waste management practices that efficiently and effectively achieve public health objectives. To help DOD decrease environmental health risks to service personnel, contractors, and host country nationals, GAO is making six recommendations to the Secretary of Defense. Specifically, GAO recommends that the Secretary of Defense direct U.S. forces in Afghanistan and Iraq to:
Comprehensively implement relevant guidance related to burn pit management and operations.
Review and update contracts for burn pit operations to ensure that they reflect the most recent guidance.
Monitor burn pits in accordance with current guidance or, if current guidance needs revision or is insufficient, direct CENTCOM to consult with the Office of the Secretary of Defense and other relevant parties to revise or develop the necessary guidance.
Analyze the waste stream generated by U.S. forces in each conflict and seek to identify opportunities for using materials that are less hazardous when burned and strategies for minimizing waste. 
Improve their adherence to guidance on solid waste management practices and further pursue waste prevention through the re-use and recycling of materials.
Analyze the relative merits—including the benefits and costs—of alternatives to open pit burning, taking into account important considerations such as feasibility and the potential health effects of open pit burning.
We provided a draft of this report to the Department of Defense and the Department of Veterans Affairs. In its written response, included as appendix II, DOD said that it concurred with five of the six recommendations and partially concurred with the recommendation that the Secretary of Defense direct U.S. forces in Afghanistan and Iraq to monitor burn pits in accordance with current guidance. In commenting on the report, DOD said that guidance for burn pit operations affects all combatant commands—not just U.S. Central Command—and that Central Command and the Army Public Health Command should consult with the Office of the Secretary of Defense if current guidance for monitoring burn pits requires revision. We agree with involving the Secretary of Defense in any such changes to guidance for monitoring burn pits and revised the recommendation accordingly. DOD also provided technical comments, which we addressed as appropriate. The Department of Veterans Affairs said it appreciated the opportunity to comment on the draft and had no comments. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense and Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix III. This report addresses the following objectives: (1) determine the extent to which U.S. military installations in Afghanistan and Iraq have used open pit burning and adhered to guidance governing its use; (2) identify alternatives to open pit burning and the extent to which the Department of Defense (DOD) evaluated these alternatives; and (3) determine the extent to which U.S. forces have monitored the air quality, exposures, and potential health impacts of burn pit emissions in accordance with relevant guidance. To address the first objective, we reviewed relevant DOD guidance and U.S. military base records from 2001 to April 2010. From January to March 2010, we also visited four burn pit sites in Iraq—Al Asad, Marez, Taji, and Warhorse—to determine the degree to which burn pit operators adhered to guidance governing the use of burn pits at those sites. We observed burn pit operations and interviewed military officials, preventive medicine personnel, and contractors at each site visited. In addition, we reviewed inspection reports prepared by the Defense Contract Management Agency for each of the four sites. We considered several factors when selecting the locations of our site visits, such as the number of personnel at each installation, whether the burn pit was managed by the military or a contractor, whether an incinerator was present, and our ability to safely access the location. Our findings from the site visits are not generalizable to the bases we did not visit. We also attempted to observe burn pit operations in Afghanistan, using U.S. Central Command’s most recent list of active burn pits to select several potential sites, including Bagram Air Base, among others. In December 2009, when we arrived at Bagram to conduct observations, U.S. military personnel told us the burn pit was closed. 
However, we later learned this information was incorrect, as the Bagram burn pit remained operational until February 2010. Because of this and because of security and logistical issues, we were unable to observe burn pit operations in Afghanistan. To address the second objective, we reviewed DOD guidance and planning documents on current and future uses of alternatives to open pit burning, DOD waste disposal studies, and relevant literature. We also observed burn pit alternatives during our site visits in Iraq and discussed these alternatives and their potential for future use with DOD officials and contractors. In addition, we interviewed DOD officials in the United States regarding alternatives to burn pits in Afghanistan and Iraq, locations where the U.S. military uses such alternatives, and the trade-offs of using alternatives. To address the third objective, we analyzed data from the Defense Occupational and Environmental Health Readiness System on ambient air sampling in Afghanistan and Iraq conducted from 2002 through 2010. We assessed the reliability of these data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. In addition, we analyzed DOD air sampling, health risk characterization, and health surveillance documents, as well as documents on health surveillance efforts from the Department of Veterans Affairs (VA), which provides healthcare and other benefits to veterans and their families. We also interviewed DOD officials regarding air sampling efforts and officials from VA and DOD regarding efforts to study the potential health impacts of burn pit emissions. 
Lawsuits have been filed in federal court in at least 43 states in which current and former servicemembers have alleged, among other things, that a contractor’s negligent management of burn pit operations, contrary to applicable contract provisions, exposed them to air pollutants that subsequently caused serious health problems. The contractor has moved to dismiss the suits, arguing, among other things, that it cannot be held liable for any injuries that may have occurred to service personnel because all its burn pit activities occurred at the direction of the military. We express no view in this report on any issue in this pending litigation involving burn pits. Moreover, because of the pending litigation, we did not evaluate whether the contractor has complied with the terms of its contract with respect to burn pit operations. We conducted this performance audit from September 2009 to October 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Michael Hix (Assistant Director), Johana Ayers, John Bumgarner, Seth Carlson, Carole Coffey, Timothy Di Napoli, Phillip Farah, Quindi Franco, Cindy Gilbert, Melissa Hermes, Justin Jaynes, Richard Johnson, Joy Myers, Alison O’Neill, Mark Pross, Minette Richardson, Kiki Theodoropoulos, and Eugene Wisnoski made key contributions to this report.
|
From the start of military operations in Afghanistan and Iraq, the U.S. military and its contractors have burned solid waste in open burn pits on or near military bases. According to the Department of Defense (DOD), burn pit emissions can potentially harm human health. U.S. Central Command (CENTCOM) guidance directs the military's use of burn pits, and the Department of Veterans Affairs (VA) provides healthcare and other benefits to veterans and their families. GAO was asked to report on the (1) extent of open pit burning in Afghanistan and Iraq, and whether the military has followed its guidance; (2) alternatives to burn pits, and whether the military has examined them; and (3) extent of efforts to monitor air quality and potential health impacts. GAO visited four burn pits in Iraq, reviewed DOD data on burn pits, and consulted DOD and VA officials and other experts. GAO was unable to visit burn pits in Afghanistan. The military has relied heavily on open pit burning in both conflicts, and operators of burn pits have not always followed relevant guidance to protect servicemembers from exposure to harmful emissions. According to DOD, U.S. military operations in Afghanistan and Iraq generate about 10 pounds of solid waste per soldier each day. The military has relied on open pit burning to dispose of this waste mainly because of its expedience. In August 2010, CENTCOM estimated there were 251 burn pits in Afghanistan and 22 in Iraq. CENTCOM officials said the number of burn pits is increasing in Afghanistan and decreasing in Iraq, which reflects U.S. troop reallocations and efforts to install waste incinerators. Despite its reliance on burn pits, CENTCOM did not issue comprehensive burn pit guidance until 2009. Furthermore, to varying degrees, operators of burn pits at four bases GAO visited in Iraq were not complying with key elements of this guidance, such as restrictions on the burning of items, including plastic, that produce harmful emissions. 
DOD officials also said that, from the start of each conflict, operators routinely burned items that are now prohibited. The continued burning of prohibited items has resulted from a number of factors, including the constraints of combat operations, resource limitations, and contracts with burn pit operators that do not reflect current guidance. Waste management alternatives could decrease the reliance on and exposure to burn pits, but DOD has been slow to implement alternatives or fully evaluate their benefits and costs, such as avoided future costs of potential health effects. Various DOD guidance documents discourage long-term use of burn pits, encourage the use of incinerators and landfills, or encourage waste minimization such as source reduction. DOD has installed 39 solid waste incinerators in Iraq and 20 in Afghanistan, and plans to install additional incinerators in Afghanistan. To date, source reduction practices have not been widely implemented in either country and recycling consists primarily of large scrap metals. DOD plans to increase recycling at its bases in Iraq, but recycling at bases in Afghanistan has been limited. Further, DOD has not fully analyzed its waste stream in either country and lacks the information to decrease the toxicity of its waste stream and enhance waste minimization. U.S. Forces in Afghanistan and Iraq do not sample or monitor burn pit emissions as provided by a key CENTCOM regulation, and the health impacts of burn pit exposure on individuals are not well understood, partly because the military does not collect required data on emissions or exposures from burn pits. Army public health officials have, however, sampled the ambient air at bases in each conflict and found high levels of particle pollution that causes health problems but is not unique to burn pits. These officials identified logistical and other challenges in monitoring burn pit emissions, and U.S. Forces have yet to establish pollutant monitoring systems. 
DOD and VA have commissioned studies to enhance their understanding of burn pit emissions, but the lack of data on emissions specific to burn pits and related exposures limits efforts to characterize potential health impacts on service personnel, contractors, and host-country nationals. Among other things, GAO recommends that the Secretary of Defense improve DOD's adherence to relevant guidance on burn pit operations and waste management, and analyze alternatives to its current practices. In commenting on a draft of this report, DOD said that it concurred with five of the six recommendations and partially concurred with the sixth. GAO addressed a DOD suggestion to clarify the sixth recommendation. VA reviewed the draft report and had no comments.
|
air, each operating under the terms of a license granted by the FCC. These stations are owned and operated by 176 entities that, under FCC rules, must either be: (1) a nonprofit educational institution, such as a university or a local school board (shown separately below as “university” and “local authority”); (2) a governmental entity other than a school, such as a state agency; or (3) another type of nonprofit educational entity, such as a “community” organization. Among these 176 licensees, some operate a single station, such as the Detroit Educational Television Foundation, which operates WTVS public television; others operate multiple stations, such as the Kentucky Authority for Educational Television, which has 16 stations on the air throughout the state. Figure 1 provides a breakdown of the number of licensees and stations they operate (by type of licensee). Licensees also differ by the size of their budgets, ranging from the smallest licensees, with total revenues below $3.5 million, to the largest, with total revenues exceeding $20 million. A few of the largest licensees are also among the most prominent producers of public television programming, such as WGBH in Boston, producer of Masterpiece Theatre, Arthur, and other notable series. Other licensees also produce programming for national distribution, such as KUHT in Houston, producer of The American Woodshop and the children’s program Mary Lou’s Flip Flop Shop. Programs intended for local and regional audiences are produced by many licensees, such as KDIN public television in Johnston, Iowa, producer of Iowa Press and Living in Iowa. Finally, public television licensees provide numerous services to their communities, such as programming-related outreach, formal educational services, literacy services, Amber Alerts for the abduction of children, and emergency weather information. Public television is characterized as a decentralized system, with all licensees owned and controlled at the local level. 
Stations exercise substantial discretion over programming decisions. This structure is due, in part, to the institutional and financial factors that motivated the founding of each individual public television station. Unlike the founding of a commercial television station, which typically reflects a business-related investment decision, establishing a public television station entails a local-level commitment to the education and cultural enrichment of viewers. Further, whereas advertising revenues finance commercial television, public television has always been financed by both public and private sources. For fiscal year 2002 (the most recent data available), public television generated $1.63 billion in revenues, which came from a variety of sources: federal, state, and local government; private foundations; corporations; and subscribers (individual memberships) (see fig. 2). The Corporation’s funding of $263 million provided about 16 percent of this total.

Figure 2: Sources of Public Television Revenues, Fiscal Year 2002 ($1.63 billion)

The Educational Television Facilities Act of 1962 authorized the first form of federal funding support for public television, establishing a program in the former Department of Health, Education, and Welfare to provide grants to public broadcasting licensees for equipment and facilities. Soon thereafter, the Carnegie Commission on Educational Television, a national commission formed in 1965 with the sponsorship of the Carnegie Corporation, studied educational television’s financial needs. Based on recommendations in the Carnegie Commission’s 1967 report, President Lyndon Johnson proposed and the Congress enacted the Public Broadcasting Act of 1967, amending the Communications Act of 1934 to reauthorize funding for facilities and equipment grants and, among other provisions, to authorize funding for public television programming through a new entity—the Corporation for Public Broadcasting. 
The Corporation is organized under the act as a nongovernmental, nonprofit corporation to facilitate the growth and development of public television and radio broadcasting and the use of public television and radio broadcasting for instructional, educational, and cultural programming. In passing the 1967 act, the 90th Congress did not intend that annual authorizations and appropriations for the Corporation would serve as a permanent process for funding support of public broadcasting. Rather, they were seen as temporary measures pending the development and adoption of a long-term financing plan for public broadcasting. Although various financing proposals for public broadcasting have since been suggested, the Corporation continues to receive nearly all of its budget in the form of an annual federal appropriation. Figure 3 illustrates the history of annual federal appropriations made to the Corporation in current dollars. The Corporation is governed by a board of directors that is appointed by the President and confirmed by the Senate. The Corporation’s most recent mission statement, adopted by the board in July 1999, states that the Corporation is to facilitate the development of, and ensure universal access to, noncommercial high-quality programming and telecommunications services in conjunction with licensees. Reflecting the local and national characteristics of public television, the Corporation’s current goals include: (1) strengthening the value and viability of local stations as essential community institutions by improving their operational effectiveness and fiscal stability and increasing their capacity to invest in and create services and content to advance their mission and (2) developing economically sustainable, high-quality noncommercial programming that inspires, enlightens, and entertains. 
The most important work the Corporation has underway, according to a July 2003 memorandum to the board, is a systemwide planning study that addresses three facets of public television. First, to improve the financial sustainability of public television, the Corporation has determined that improvements in public television’s net revenues can occur by attracting increased financial support for stations from major donors and by developing new practices to improve the efficiency of stations’ operations. Second, through a strategic assessment of the local services provided by public broadcasting stations, the Corporation seeks to “help stations chart the course ahead” and aid in efforts to improve the financial sustainability of public television, provide direction for efficiencies in station operations, and inform decisions on national programming. Systemwide efforts related to national programming, the third area of focus, will address the “wide disconnect between audience research, national commissioning and scheduling decisions, and local service strategy.” According to the Corporation, this will involve strategic analysis and reengineering of national programming. Public television also faces the challenge of transitioning its broadcast operations from analog to digital technology. Unlike analog broadcasting, which converts moving pictures and sound into a “wave form” electrical signal, digital technology converts pictures and sound into a stream of digits consisting of zeros and ones that are transmitted over the air. Digital technology has the potential to significantly enhance the capabilities and services of all television broadcasters and is viewed as critical to the broadcast television industry’s ability to enhance its provision of communications services. The Telecommunications Act of 1996 established the framework for licensing digital television spectrum to existing broadcasters. 
Under FCC rules implementing this framework, public television licensees are required to complete the construction of digital station facilities by May 1, 2003; broadcast in digital a minimum of 50 percent of the programming that they broadcast in analog—known as “simulcasting”—as of November 1, 2003, simulcast 75 percent by April 2004, and simulcast 100 percent by April 2005; and by December 31, 2006, return their analog (or digital) spectrum to FCC for reallocation. In response to the difficulties faced by public television licensees in financing expenses related to the digital transition, the regulatory deadline for the construction of digital public television stations was set for May 1, 2003, a year later than the deadline for commercial stations. Further, eligible licensees were allowed to request extensions of time to meet the construction requirement if they had good cause for failing to meet the requirement. Provisions of the Communications Act, as amended, specify the allocation of federal funds appropriated to the Corporation for Public Broadcasting. Of the federal funds provided for public television, the Corporation is directed to distribute 75 percent of such funds among licensees of public television stations and 25 percent for support of national public television programming. Based on responses to our survey, more than three-fifths of licensees indicated that these statutory allocations for funding support of public television should stay the same, compared to about one-third that favored a change. Federal funds appropriated to the Corporation must be allocated in accordance with provisions of the Communications Act, as amended. 
As shown in figure 4, the act directs the Corporation to allocate 6 percent of its federal appropriation for various expenses incurred by public broadcasting, an account the Corporation identifies as “System Support”; not more than 5 percent is to be allocated for the Corporation’s administrative expenses; and of the remaining funds (about 89 percent), the act specifies that 75 percent is to be allocated for public television and 25 percent for public radio. Of the funds allocated for public television, 75 percent is to be made available for distribution among licensees of stations and 25 percent for national public television programming. For example, with a federal appropriation of $380 million for fiscal year 2004, the Corporation made the following allocations to its budget: $24 million (6 percent) for System Support; $17.8 million (5 percent) for administrative expenses; and of the remaining $338.2 million, $253.7 million (75 percent) for public television. Of these funds, $190.2 million (75 percent) is allocated for distribution among station licensees, and $63.4 million (25 percent) is allocated for support of national public television programming. We asked public television licensees in our survey whether the statutory allocations for federal funding support of public television by the Corporation—the 75 percent allocation for distribution among licensees and the 25 percent allocation for national programming—should remain the same or be changed. Overall, 62 percent of licensees responded that these statutory allocations should stay the same, and 34 percent responded that the allocations should be changed (see fig. 5). We further analyzed responses to this question factoring in the type (e.g., state, university, community, and local authority) and size (based on total revenues) of licensees, to determine whether the views of licensees on the statutory allocations vary on the basis of these characteristics. 
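The fiscal year 2004 allocation arithmetic can be sketched as follows. This is a minimal Python illustration, not the Corporation's budgeting process; note that the System Support and administrative amounts are the Corporation's actual budget figures, which round rather than apply exact statutory percentages, so the top-line deductions are passed in directly.

```python
def allocate(appropriation, system_support, admin):
    """Sketch of the Communications Act allocation waterfall described above.

    system_support and admin are the amounts the Corporation actually set
    aside (capped by statute at 6 percent and 5 percent of the appropriation,
    respectively); the percentage splits below are the statutory ones.
    """
    remaining = appropriation - system_support - admin
    television = 0.75 * remaining             # 75% of remaining funds to public TV
    radio = 0.25 * remaining                  # 25% to public radio
    station_grants = 0.75 * television        # distributed among station licensees
    national_programming = 0.25 * television  # national public TV programming
    return television, station_grants, national_programming

# Fiscal year 2004 figures from the report, in millions of dollars.
tv, grants, programming = allocate(380.0, 24.0, 17.8)
# tv ~= 253.65, grants ~= 190.24, programming ~= 63.41, consistent with the
# report's rounded $253.7 million, $190.2 million, and $63.4 million.
```

The small differences from the report's figures reflect the Corporation's own rounding of the published amounts.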
Our analysis indicates that the current allocations were favored by a majority of licensees of each type, with the exception of local authority licensees (see fig. 6) and by each size, based on total revenues (see fig. 7). Among the various types and sizes of licensees, those that most favored the current allocations were university licensees (71 percent of the 51 university licensees responding) and large licensees by total revenues (80 percent of the 20 large licensees responding). One licensee that favored the current allocations commented that even though additional federal funding for station operations would be useful, quality national programming is also important to support the station’s fundraising efforts. Of the respondents favoring a change in the allocations, most proposed that the allocation for support among licensees increase above the current level of 75 percent and the allocation for national programming decrease below 25 percent. In fact, several of these respondents suggested that all of the public television funds should be allocated among licensees, with no funds for national programming. Among the reasons cited for an increase in the allocation for licensees was the view that providing more of these funds to licensees, rather than to national programming entities, would advance the “local” quality of public television. Another reason given was that distributors of national programming would be more accountable and responsive to licensees’ local needs if more funds were allocated to licensees. In addition, one licensee noted that by placing the funds in the hands of licensees, a greater degree of insulation from political influence over national programming would be likely. However, a couple of licensees suggested that the 25 percent allocation for national programming should be increased and the 75 percent allocation for licensees decreased. 
One licensee suggested, for example, that despite the need for national programming, licensees would likely not pool funding necessary to produce national programming if all funds were distributed to licensees. Another licensee noted that funding for costly, high-quality, national programming should occur at the national level, and that local stations should obtain most of their financial support from their local communities. Community Service Grants, the principal mechanism by which the Corporation provides federal funding among licensees of public television stations, are to be awarded in accordance with applicable statutory provisions. Among these provisions is a requirement that the Corporation periodically review, in consultation with licensees, the eligibility criteria established by the Corporation for distribution of funds among public television stations. More than three-fourths of the licensees responding to our survey expressed overall satisfaction with the most recent consultation process. Another grant program, the Television Future Fund, was created by the Corporation to support projects to help public television achieve greater economic self-sufficiency. However, over 40 percent of licensees in our survey responded that the projects have not resulted in practical methods for reducing costs or enhancing revenues in their own operations. Moreover, our legal review of this program determined that the Corporation’s approach of supporting these projects, in part, with funds designated for distribution among licensees is not consistent with the statutory authority under which the Corporation operates. In September 2002, the Corporation temporarily suspended the awarding of further Television Future Fund grants pending the outcome of a review. The program has recently been reactivated under different procedures but continues to be funded, in part, with funds that the Congress has made available for distribution among licensees of public television stations. 
The Community Service Grant program is the principal mechanism by which the Corporation currently distributes federal funding among licensees of public television stations. Although not expressly established by the act, the Community Service Grant program is administered by the Corporation under the provisions of the act that provide for the allocation of funds for distribution among public television licensees. Statutory provisions requiring that the Corporation distribute funds directly among licensees were first enacted in 1975. Of the $190.2 million allocated for distribution among licensees in fiscal year 2004, the Corporation’s budget for the Community Service Grant program is $181.2 million. The Corporation currently administers the program by providing each licensee that operates an on-air public television station with a “basic” grant, as specifically required by the act. The $10,000 in funds awarded to each eligible licensee currently as the basic grant component of a Community Service Grant predates the establishment of the program and began soon after establishment of the Corporation. In addition to the basic grant, eligible licensees also receive two additional component grants in their Community Service Grant—a “base” grant and an “incentive” grant. Base grant funds are determined on the basis of the statutory allocations, the Corporation’s total annual appropriation, the number of licensees eligible for grants, and a fixed grant funding level set by the Corporation’s board of directors. Incentive grant funds depend largely on each individual licensee’s share of the combined amount of revenues generated from nonfederal sources. (See app. II for detailed information on the grant components of Community Service Grants.) 
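Read together, the three components might combine as in the hypothetical Python sketch below. The proportional incentive formula is an assumption based on the report's statement that incentive funds "depend largely" on each licensee's share of combined nonfederal revenues, and every dollar amount other than the $10,000 basic grant and the $800,000 nonfederal-support minimum is invented for illustration; the Corporation's actual formulas (detailed in appendix II of the report) are more elaborate.

```python
def community_service_grant(basic, base, incentive_pool,
                            licensee_nfs, total_nfs,
                            nfs_minimum=800_000):
    """Hypothetical sketch of the three Community Service Grant components.

    basic          -- flat grant to every eligible licensee ($10,000 currently)
    base           -- fixed base-grant level set by the Corporation's board
    incentive_pool -- total funds available for incentive grants (assumed)
    licensee_nfs   -- this licensee's nonfederal financial support
    total_nfs      -- combined nonfederal support of all eligible licensees
    nfs_minimum    -- minimum nonfederal support required to receive the
                      incentive portion ($800,000 after the 2001 review)
    """
    incentive = 0.0
    if licensee_nfs >= nfs_minimum:
        # Assumed: incentive funds in proportion to the licensee's share
        # of combined nonfederal revenues.
        incentive = incentive_pool * (licensee_nfs / total_nfs)
    return basic + base + incentive

# Illustrative numbers only (not from the report):
grant = community_service_grant(
    basic=10_000, base=250_000,
    incentive_pool=100_000_000,
    licensee_nfs=5_000_000, total_nfs=1_000_000_000,
)
# roughly 10,000 + 250,000 + 500,000 = 760,000
```

A licensee below the nonfederal-support minimum would receive only the basic and base components under this sketch.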
The act specifies that the funds distributed through the 75 percent allocation may be used at the discretion of the recipient for purposes related primarily to the production or acquisition of programming. According to officials of the Corporation, this provision is generally understood to provide licensees with discretion to use such funds for any expenses incurred. In tandem with the act’s requirements setting forth the basis for distributing funds, the Corporation is required to review periodically the eligibility criteria for distributing these funds in consultation with licensees or their designated representatives. In practice, the Corporation has undertaken a review and consultation of the Community Service Grant program every 2 to 3 years. According to Corporation officials, a review and consultation consists of polling licensees and other public broadcasters to identify issues of concern regarding the distribution of funds and convening an advisory panel that broadly represents licensees to facilitate the review. Further, the Corporation develops and analyzes numerical models to assess likely impacts of recommended policy changes in the distribution of funds and disseminates information to licensees for further advice and consultation. Ultimately, the advisory panel’s recommendations are presented first to licensees and the Corporation’s management and then to the Corporation’s board, with any exceptions or refinements proposed by management for its vote of approval. In our survey, we asked licensees several questions about the Corporation’s most recent consultation on the eligibility criteria for distributing Community Service Grants, conducted in 2001. Over 80 percent of licensees responding said that they were aware of the 2001 consultation process. Slightly more than half of the respondents said the Corporation solicited input from them to a great or moderate extent. 
Half of the licensees said they provided input to the Corporation to a great or moderate extent. Overall, more than three-fourths of all licensees said they were either basically satisfied with the consultation process, or that only minor changes were needed (see fig. 8). The remaining licensees suggested more substantial changes. For example, several licensees indicated their belief that the Corporation predetermines the desired outcome of modifications to the Community Service Grant eligibility criteria and is not responsive to licensees. With regard to the make-up of the review panel, suggestions were made to rotate panel members, involve licensee officials that have not previously served on a review panel, and make the review panel more representative of the licensee community. One perspective highlighted by a few licensees was that small stations do not have adequate representation on the Corporation’s review panels. For example, one licensee said that small rural station licensees only have “token” representation on the Corporation’s review panels, and another noted the difficulty for officials of small station licensees to participate in review panels given the costs and time commitments for participating in the panel meetings. In both our survey and in interviews we conducted with licensees and officials from the Corporation, PBS, and the Association of Public Television Stations, specific factors in the eligibility criteria for grant award determinations were noted as causing some licensees to perceive disparities in the distribution of funds through the Corporation’s Community Service Grants. 
Among such factors were the policy which specifies that licensees operating stations in the same market (known as an “overlap” market) share a single base grant component of their Community Service Grants, the provision of supplemental funds in the incentive grant portion of the Community Service Grant for licensees that operate multiple public television stations, and an insufficient level of Community Service Grant funds provided to licensees to cover PBS membership assessment for access to PBS’s national programming. However, we were told that while modifying the eligibility criteria for establishing the base and incentive grant portions of Community Service Grants may result in an increase in the grant funds awarded to some licensees, it would also likely reduce the grant amounts awarded to others. Further, we were told that the Corporation makes every attempt to ensure that these grant funds are distributed fairly among public television licensees. For example, as a result of the 2001 review, the Corporation revised a policy previously adopted to increase the minimum level of nonfederal financial support that licensees must raise to $1 million beginning in fiscal year 2003 in order to receive the incentive portion of the Community Service Grant. As revised, the minimum level was set at $800,000. Concerned in the mid-1990s over the prospect of declining revenues from public television funding sources, including federal funding, the Corporation created the Television Future Fund in 1995 as a means of helping public television licensees achieve greater economic self sufficiency. The program provided grants to projects aimed at reducing stations’ operating cost and enhancing their revenues. Prior to the end of fiscal year 2003, the Television Future Fund awarded grants to licensees, consortia of licensees, and non-licensee entities (e.g., consultants) on the basis of project-specific criteria. 
Grant proposals were to show clear evidence that the project would meet a demonstrated need; actively involve a number of stations, have benefits beyond one individual station, offer economic returns that could be widely shared, and/or act as a model that could be widely replicated; prove, through feasibility studies, that concepts could be widely implemented, thus demonstrating that the effort can lead to economies of scale; be envisioned as long-term efforts, sustainable after the Corporation’s funding for the project concluded; and reflect a shared risk through funds provided by the applicant, thereby demonstrating an institutional commitment. In addition, all proposals were to demonstrate an awareness of systemwide efforts already under way and make use of existing resources, whether from public television or the private sector. To provide funding for Television Future Fund projects, the Corporation annually pooled funds from two separate sources: funds from its System Support account and funds from the 75 percent allocation for distribution among licensees. 
Between 1996 and 2004, $30.5 million came from the System Support account and $28.5 million from the licensee allocation. Based on recommendations of advisory panels comprised of station and system representatives, the Corporation awarded 204 Television Future Fund grants through September 2002 for a broad range of projects, including development projects aimed at improving fundraising through local, regional, and national underwriting efforts, strengthening pledge practices, and studying financial contributions given via the Internet; technology projects designed to increase the public television community’s knowledge of its digital capabilities, including developing interactive television programming; new service and business models projects aimed at forging links between the public television community and other entities, such as licensee and university partnerships; management information projects to improve efforts to manage and disseminate relevant data, such as a database used by licensees to compare their programming and fundraising activities with other licensees and a section of the Association of Public Television Stations’ Web site that contains information for both licensees and the public about the digital transition; collaboration and consolidation projects designed to support the development of back office operations that could be used by more than one station; and research projects aimed at improving the public television community’s understanding of viewers and the public television industry, such as updating the handbook for television programmers and a viewer panel study. Figure 9 illustrates the distribution of the types of Television Future Fund projects. According to Corporation officials, some licensees raised issues regarding how the program is funded and what benefits are being derived from it. We heard similar concerns while interviewing several licensees. 
To evaluate these concerns, we asked licensees in our survey to indicate the extent to which they knew about the findings and outcomes of Television Future Fund projects, whether any such projects resulted in practical methods for enhancing revenues or reducing costs in licensees’ own operations, and whether they supported the way in which the Television Future Fund is funded. The extent of the licensees’ knowledge of Television Future Fund projects varied significantly. Of licensees responding to our survey, 58 percent stated that they knew about Television Future Fund projects to a great or moderate extent, but the other 42 percent indicated that they knew about the findings and outcomes of Television Future Fund projects to little or no extent (see fig. 10). Several licensees noted that they did not know about the findings and outcomes of Television Future Fund projects because of inadequate efforts by the Corporation to distribute information about the projects. For example, the Corporation did not compile and distribute to licensees, or release publicly, a list of the findings and outcomes of Television Future Fund projects until November 2001, 5 years after the first grants were awarded. One licensee stated that although there has always been sufficient information about the awarding of Television Future Fund grants, there has been little information on the outcomes of the projects supported by those grants. In cross-tabulating these responses, we determined that, overall, only 41 percent of licensees responded that Television Future Fund projects had provided them with practical methods for reducing costs and/or enhancing revenues (see fig. 11). Over one-fifth of our survey respondents, however, indicated that the Corporation should cease all funding for the Television Future Fund. 
In September 2002, the Corporation suspended the award of further Television Future Fund grants pending a review of the program to (1) assess the consistency between the planning and execution of the program in relation to the Corporation's goals and (2) determine how the program could address concerns that the public broadcast mission and business models were no longer adequate in the digital era. In the course of its review, the Corporation's Future Fund Advisory Panel concluded that while a majority of the projects had yielded the results anticipated, some were not successful for reasons that included an inability to achieve appropriate scale or significant economic benefit, inadequately defined objectives and poor execution, and inadequate marketing of results to stations. The panel solicited comments from the public television community on how the Future Fund could best be used to help stations maximize their financial resources and invest these resources in new and strengthened service to their local communities. Based on input from public television stakeholders and its own deliberations, the panel developed four new criteria to guide the investment of funds. Specifically, Television Future Fund initiatives should have

- the potential to change systemwide decision making and transform current approaches to achieving system and station goals,
- measurable and sustainable outcomes,
- strong and verifiable support of key advocates and participants, and
- consistency with the Corporation's legislative mandate.

In addition, the panel recommended changes in how Television Future Fund initiatives are developed and supported. Rather than continuing to invite proposals on a broad array of themes, as had been done in the past, the panel recommended that the solicitations more directly define the initiatives' intended outcomes for participants, the station community, and the system overall.
The panel also recommended that funding commitments be made over longer time frames at higher monetary levels in order to focus on fewer initiatives that have greater impact. The panel called for improved project management, with clearly defined expectations and performance measures and a clear definition of success. To evaluate and monitor the progress of the initiatives, the panel recommended that the membership of the Future Fund Advisory Panel include greater representation from across the station community. According to Corporation officials, it was anticipated that the Future Fund program would help support some of the new initiatives and projects stemming from the systemwide planning study discussed earlier. The Television Future Fund was reactivated in November 2003 with the advisory panel endorsing several funding grants. In a December 2003 memorandum to station managers, the Corporation outlined the new Television Future Fund review and selection process and described three Future Fund projects that were in progress: (1) the Major Giving Initiative, aimed at helping stations attract financial support from major donors—an area of opportunity identified in the Corporation's systemwide planning study; (2) the Education Leadership Academy, a pilot effort to identify opportunities for improved community partnership in elementary and secondary school education; and (3) an online knowledge base to improve public television's fundraising potential, strategies, and practices. Corporation officials noted that 90 of the 176 licensees have signed up to participate in the first Major Giving Initiative workshop, and they expect licensees to participate in another workshop to be held later this year. In addition, they noted that the Future Fund was used to cover the participation of about 110 station personnel in a 2-day concentrated track of sessions for the Education Leadership Academy.
As part of our review, we examined the Corporation's statutory authority to use funds allocated for distribution among public television licensees to support the Television Future Fund. Although our legal review focused on the program as it was constituted prior to its recent revisions, the recent changes do not appear to have resolved the legal deficiencies that we identified. As reconstituted, the Future Fund program is still funded, in part, with funds designated by the Congress for distribution among public television licensees. According to the Corporation, its authority to establish "eligibility criteria," and the formula under which the funds are disbursed, is broad enough to allow the Corporation to take a portion of the funds allocated for distribution among licensees, pool them with System Support funds, and use this aggregated pool of money to make selective grants only to applicants submitting project proposals acceptable to the Corporation after being reviewed and recommended by a review panel. We disagree. The difference between our view and that of the Corporation rests on whether the eligibility criteria the Corporation may adopt include project-focused criteria that govern the selective award of funds for a particular project (as the Corporation maintains) or only station-based criteria that distinguish among public television licensees on the basis of such factors as financial needs, audience satisfaction, or fundraising effectiveness. It is our view that the phrase "eligibility criteria" should be read in the context of the distribution mechanism to mean criteria focusing on the eligibility of the licensees, rather than the eligibility of the projects.
Although we often defer to an agency's interpretation of a statute it is charged to administer, we cannot do so here because the Corporation's interpretation of its authority is consistent neither with the statutory language nor with the Congress's policy choice favoring local, rather than Corporation, control of the expenditure of the funds allocated for licensees. Fundamentally, we believe that the Corporation's interpretation of the statutory language changes the basic nature of, and control over, the expenditure of the funds allocated for licensees. First, the language of the distribution provision makes no reference to funding specific projects. By contrast, the Congress has provided the Corporation with specific authority to fund projects using System Support funds. Second, the statute and its legislative history reflect a clear division of roles between the Corporation and the licensees and permittees of public television stations. Under the statutory scheme, it is the Corporation that is responsible for distributing funds to the licensees, and it is the recipients of these funds that are granted the discretion over how they are to be used. Thus, in the context of the entire statutory scheme, these funds would not be available for project-specific systemwide grants. Moreover, as implemented by the Corporation, Television Future Fund grants are available to nonstation entities. We believe this is inconsistent with the statute's direction that the funds are to be distributed among licensees of public television stations. For example, an award was given to a consultant to conduct studies to identify skills that will be needed by chief executive officers of public television stations in the next decade. Another award was given to a consultant group to study the perception of public television by its current and potential financial supporters.
In our view, the funds allocated by statute for distribution among licensees are not available to nonstation entities. Appendix III presents our legal opinion in detail. Corporation officials told us that fiscal year 2005 monies designated for distribution among licensees would no longer be used to support the Future Fund. The officials said that they would be developing a proposal for the board's vote before the end of fiscal year 2004. Meanwhile, the ongoing and planned projects will continue to be supported from the balance in the Television Future Fund account, which amounted to about $18.3 million as of December 31, 2003. According to the Corporation, $10.1 million of this balance came from funds designated for distribution among licensees from fiscal year 2004 and previous fiscal years; the remaining $8.2 million came from System Support funds. Approximately $8.4 million of the $18.3 million in the account balance has been committed for ongoing projects, mostly for the Major Giving Initiative ($6.6 million). The remaining $9.9 million has been "earmarked" by the Future Fund Advisory Panel for several other major initiatives that are under development. Provisions of the Communications Act govern the Corporation's support for the production and distribution of national programming. The Corporation provides PBS with an annual grant to help support its National Program Service, a package of children's and prime-time series that are broadcast by most public television stations. In response to our survey, most licensees expressed support for continuation of the Corporation's annual grant to PBS for the National Program Service and held the view that the Service's programming enables them to meet their mission and build underwriting and membership support. Many licensees also emphasized the importance of producing their own programs to meet the needs of their local communities, suggesting that federal funds should be made available for the production of local programming.
Expressly prohibited from producing or distributing public television programming, the Corporation is authorized by provisions of the Communications Act to provide federal funding for national public television programming. Under the act, the Corporation is directed to distribute a substantial amount of available programming funds to independent producers and production entities, producers of national children's educational programming, and producers of programming addressing the needs and interests of minorities. In fulfillment of this mandate, the Corporation provides programming support through three mechanisms—the General Program Fund, the Program Challenge Fund, and an annual grant to PBS for the production and distribution of some of public television's best known or "signature" series, a package known as the "National Program Service" (see fig. 13). Some of the productions supported through the Program Challenge Fund and the General Program Fund are broadcast as part of the PBS National Program Service. (The Independent Television Service was founded in 1988; see 47 U.S.C. § 396(k)(3)(B)(iii). The Minority Consortia consist of the following organizations: National Black Programming Consortium, Native American Public Telecommunications, Latino Public Broadcasting, National Asian American Telecommunications Association, and Pacific Islanders in Communications.) The Corporation's programming funds, however, amounted to only 14 percent of the $450 million in total funds used for such programming in fiscal year 2003. Many of the best-known programs associated with public television are part of PBS's National Program Service. The Service currently includes miniseries, specials, and children's and prime-time series—including Sesame Street, NOVA, The NewsHour with Jim Lehrer, and American Experience—providing PBS member-stations with approximately 2,100 hours of programming in 2003.
The Corporation's annual grant of $22.5 million to PBS makes up only a small portion of the funds that finance the National Program Service; a much larger share comes from public television station licensees, which collectively paid $126 million in 2003 membership assessments to PBS for programming and related broadcast rights to the Service's programs. In 2003, 171 of the 176 public television licensees were PBS members. The National Program Service is distributed to PBS member-stations for broadcast either at the time of delivery or at a time of the licensees' choosing. Member-stations are free to choose which of the Service's programs to broadcast, although PBS officials stated that licensees receive no reduction or rebate in their assessment for programming that is not broadcast. Our survey asked a series of questions about the National Program Service. In response to our question on whether the Corporation should continue to provide direct funding for the Service at its current level, 72 percent of the responding licensees answered "yes." Some licensees stated that the quality of the programs included in the Service would suffer without continued funding from the Corporation. Of the 19 percent of the licensees who indicated that a change was needed, most suggested that the funding be reduced or eliminated and given instead to the licensees. Concerns with the process that PBS uses to choose the programs selected for the National Program Service were also noted in some of our interviews with officials of public television licensees. The Corporation's annual grant to PBS for the National Program Service was instituted as a result of a statutory provision enacted in 1988 requiring that the Corporation study and submit a plan to the Congress for funding support of national public television programming.
Prior to the establishment of the National Program Service, grants were awarded by the Corporation directly to several of the producers of programming included on the PBS national schedule. Other programs were made part of the national schedule through a mechanism known as the “Station Program Cooperative.” Through the Cooperative, officials from public television stations would vote on which individual programs to include on the national schedule and participate in a “group buy”—combining their funds for the purchase of programming for distribution by PBS. However, concerns arose that the Station Program Cooperative model was not effective in the establishment of programming priorities, the production of minority programming, or the ability of producers to effectively attract underwriters. In 1989, a National Program Funding Task Force—comprised of representatives from the Corporation, public television stations, PBS, independent producers, and other stakeholders—was formed to review the method of funding national programming. This review led to the replacement of the Station Program Cooperative with a new model for selecting PBS programming. Under this new model, PBS created the position of chief programming executive to make programming decisions. Currently, two chief programming executives located on the East and West coasts, respectively, select programs for the National Program Service with input from licensees, internal PBS programming staff, and PBS management. This approach was designed to facilitate the centralized development and purchasing of programming for the National Program Service and for other programming distributed nationally under the PBS logo—including children’s, prime time, and syndicated programs. Our survey of licensees found that only a small percentage expressed a desire to reinstate the former Station Program Cooperative or a similar model to select programming for PBS’s National Program Service. 
However, a majority of the survey respondents, 58 percent, indicated that changes were needed in the process for selecting programs for the Service. Specifically, respondents suggested that PBS solicit more input from licensees in making the selections. Some licensees we interviewed commented that the strong relationship between PBS and producers has created an entrenched system that limits the ability of new producers to get their programs on the National Program Service. Figure 14 highlights the licensees' views on the Corporation's funding of the PBS National Program Service and the process used to select the programs that are included in the Service. While our survey shows that over half of the licensees indicated that changes are needed in the selection process for the PBS National Program Service, most respondents nevertheless indicated satisfaction with the extent to which the Service's programming helps them meet the missions of their stations. A few licensees also noted that PBS provides a "safe harbor" of children's programs that are distinct from their commercial counterparts. Children's programming was viewed as more important to licensees' missions than to building underwriting and membership support: only 23 percent of the licensees responding to our survey indicated that they rely to a great extent on children's programming to build such support, and 39 percent said they rely on such programming to a moderate extent. Many licensees stated that they do not rely on children's programming for underwriting support because of content restrictions and because underwriters do not see a strong market in the viewers of such programs. However, a few licensees stated that some underwriters support children's programs because of their high quality and their educational and social value.
Several licensees stated that they do not rely on children's programming to generate membership support because families with young children often do not have the economic means to contribute financially. Licensees also indicated that they value the prime-time programs on the National Program Service, with 96 percent of the respondents indicating that prime-time programs help them meet their mission to a great or moderate extent. As noted above, some licensees criticized the programs for having become less unique, less innovative, and less willing to explore controversial issues in recent years. However, most licensees stated that they rely on the prime-time programs included in the National Program Service to meet their mission of providing quality life-long educational content for adults of all ages. Many licensees added that the prime-time programs allow them to compete with commercial stations, attract new audiences, and retain existing viewers. Our survey also showed that 91 percent of the licensees believe that prime-time programs help them build local underwriting and membership support to a great or moderate extent. According to the licensees, some of the reasons that the prime-time programs are helpful in attracting local underwriters are that audience numbers are higher, the program titles are familiar, and the programs themselves are of high quality and are well promoted. Figures 15 and 16 summarize the responses of licensees to questions regarding the National Program Service's children's and prime-time programming. Some licensees also produce local programs tied to a national program. This method allows individual licensees the possibility of extending the value of national promotions to such local programs. For example, local stations in several cities took advantage of the popularity of the PBS series Jazz: A Film by Ken Burns, broadcast in 2001, by producing local programs that featured local and regional jazz musicians and cultural influences.
Previously, the Corporation funded regional organizations that provided licensees with specialized content for their areas. In 1961, the Eastern Educational Network began as a collaboration of public television stations in the northeastern United States that produced regional programs for its member stations. Other regional collaborations were formed to provide similar functions, such as the Southern Educational Communications Association, the Central Educational Network, and the Pacific Mountain Network. However, over the last decade, almost all of these regional organizations have changed their focus to provide quality national programming to members nationwide. In 1997, the Southern Educational Communications Association and the Pacific Mountain Network joined to form the National Educational Telecommunications Association, a membership organization that offers a library of national programs to licensees. Rather than paying for or obtaining the rights to programs, public television producers give to the National Educational Telecommunications Association the rights to distribute the programs; in return, the association provides producers with basic promotion of programming on its Web site and a forum for licensees to exchange products. In 1998, the Eastern Educational Network became American Public Television, which acquires finished programs and develops and coproduces original programming in a variety of genres, including documentaries, biographies, and instructional programs, among others. In our survey, some licensees indicated that public television stations are rapidly becoming the only locally owned and operated television broadcast medium. They stated that the consolidation of local media outlets and expanding national cable and satellite networks have resulted in less local programming on commercial television, creating a void in their communities. 
They believe that their locally produced programs set them apart from commercial television and allow them to provide their communities with a unique product that contributes to the civic and cultural lives of their viewers. However, 79 percent of the licensees responding to our survey indicated that the amount of local programming they currently produce is not sufficient to meet local community needs (see fig. 17). Moreover, of the 139 licensees that provided narrative comments regarding this issue, 85 stated that they do not have adequate funds for local programming or that they would produce more local programming if they could obtain additional sources of funding. Several licensees stated that they have had to ignore local issues and turn away programming opportunities because they lacked the financial resources to produce them. Some licensees added that local programming is more relevant to viewers than national television and has more of a direct impact on the community. However, among the licensees who expressed a willingness to sacrifice funding for national programming to fund local productions, some warned that taking too much from national programming would be harmful to the entire system. Digital technology offers public television licensees opportunities to provide innovative services to their communities. The Corporation received additional funding of $93.4 million for the digital transition for fiscal years 2001 through 2003. After consultation with representatives of the public television community, the Corporation directed these funds toward providing grants to licensees for acquiring digital transmission equipment. However, some licensees did not receive their grants in a timely manner and cited this as contributing to their failure to meet FCC's initial May 2003 deadline for constructing digital transmission facilities. At the systemwide level, the Corporation is seeking funding for infrastructure improvements to fully leverage the potential benefits of the digital transition.
In addition, the Corporation, licensees, and other public television stakeholders have emphasized the importance of support for the production of digital content as part of the transition. Various mechanisms, including additional federal funding, have been suggested to address these needs. The Corporation, licensees, and other public television stakeholders have emphasized that the future of public television depends on the successful rollout of digital services. Such services would, in the view of public television stakeholders, help public television realize the full potential of digital technology, solidify existing audiences, and reach new viewers in an era of increased competition from cable and satellite television providers. Nearly all of the licensees in our survey reported that they either now have, or plan to have, key digital capabilities to produce sharper television pictures and CD-quality sound (high-definition), offer multiple channels for programming and data services ("multicasting"), and transmit text and other data in a digital format ("datacasting") (see fig. 19). About 85 percent of the licensees responding to our survey indicated that successful completion of the digital transition would improve their ability to serve their communities to a great or moderate extent. Many of the digital-based services mentioned by licensees involve supporting educational, governmental, and cultural activities. Educational services include the delivery of on-demand instructional content to teachers and students in K-12 classrooms, higher education institutions, and libraries. Local and state governmental services include emergency response services and alerts, such as Amber Alerts for child abductions. In addition, licensees noted that multicasting would allow for an increased range of cultural content, such as programs that highlight local arts or serve minority populations.
Many licensees also indicated their intention to use digital technology to provide "ancillary and supplementary" services. These are nonbroadcast services, such as subscription-based video services, paging services, and computer software distribution, offered by stations to generate revenue. Fifty-one percent of the licensees indicated they are offering or would offer these services to nonprofit entities, while slightly more than one-third of licensees indicated they would offer these services to for-profit entities. The Corporation, licensees, and other public television stakeholders have identified the importance of federal and nonfederal support for the digital transition that enables public broadcasters to provide a full range of digital services to their communities. In 1997, the Corporation and other public television stakeholders estimated the costs of the digital transition for public television stations to be approximately $1.7 billion, largely for transmission equipment. At that time, the Corporation, PBS, and other stakeholders proposed a plan under which the majority of this cost would be funded by nonfederal sources, such as state governments, foundations, and corporations, and about $771 million (45 percent) would be funded through federal funds. In the plan, the Corporation also requested an increase of $100 million in its regular fiscal year 2000 appropriation for the acquisition, enrichment, and production of digital programming and services. For fiscal years 2000 and 2001, the Clinton administration proposed a funding approach whereby the National Telecommunications and Information Administration's (NTIA) Public Telecommunications Facilities Program, a source of financial support for public television infrastructure, would provide federal funding for licensees to acquire digital equipment. The Corporation, for its part, would provide federal funding to support digital programming production, development, and distribution.
Although this initial funding approach included federal funding for both digital equipment and digital programming, most of the federal funds that have been awarded through fiscal year 2003 have been for digital equipment. NTIA began awarding grants to public television licensees for digital transmission equipment in fiscal year 1998. Although specific appropriations for the digital transition were made for the Corporation in fiscal years 1999 and 2000—at $15 million and $10 million, respectively—both were contingent on the enactment of an authorization, which did not occur. The Corporation received its first specific digital appropriation ($20 million) in August of fiscal year 2001 after the enactment of both an appropriation and an authorizing provision. A second digital appropriation ($25 million) was received in February 2002. The Corporation, relying on report language accompanying its fiscal year 2002 appropriation and considering the limited funds available to licensees from NTIA, determined that the highest priority for its digital funds was to assist as many licensees as possible in meeting FCC's May 2003 deadline for constructing digital transmission facilities. Accordingly, the Corporation developed two grant programs to help licensees acquire basic digital transmission equipment—the Digital Distribution Fund and the Digital Universal Service Fund. The Digital Distribution Fund, established in January 2002, offers grants to both individual stations and collaborations of multiple stations for digital transmission equipment; the Corporation provides 50 percent matching funds to the nonfederal funds raised by grantees. The Digital Universal Service Fund was established in June 2002 to take advantage of FCC's 2001 decision permitting licensees to satisfy the May 2003 construction deadline by initially constructing digital facilities that use power levels that are lower than what is needed to fully cover their service areas.
Stations can then increase their power levels over time to full-power operation. This program is designed to provide grant recipients with a standard package of equipment for use in constructing a low-power digital facility. The Corporation funds up to 75 percent of the cost of the equipment packages, with the remaining cost covered by grant recipients with nonfederal funds. Both Corporation and NTIA officials told us they coordinate their grant programs to ensure that there is no duplication in the types of transmission equipment purchased by licensees with funds from their respective programs. Figure 20 provides a time line of the Corporation's activities up to November 2003 for awarding funds through these two digital grant programs. The Corporation used its fiscal year 2001 and 2002 digital appropriations to award grants to 96 stations for digital transmission equipment prior to FCC's May 2003 construction deadline. However, the Corporation was not always timely in getting the awarded equipment packages or funds to the grantees. Specifically, 30 stations did not receive their equipment packages or funds by the deadline. Most of these stations were recipients of equipment package grants from the Digital Universal Service Fund. Public television stations that did not expect to meet the construction deadline had to apply to FCC for a 6-month extension. In requests to FCC for extensions, 28 of the 30 stations cited the delay in receiving their digital grant from the Corporation as a contributing factor, among others, as to why they filed for an extension. We identified two reasons for the Corporation's lack of timeliness in distributing its fiscal year 2001 and 2002 digital appropriations.
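The two grant programs use different cost-sharing rules: the Digital Distribution Fund matches 50 percent of the nonfederal funds a grantee raises, while the Digital Universal Service Fund covers up to 75 percent of the cost of a standard equipment package, with the grantee covering the remainder. A minimal sketch of that arithmetic follows; the function names and dollar amounts are hypothetical illustrations, not figures from this report.

```python
def ddf_grant(nonfederal_raised: float) -> float:
    """Digital Distribution Fund rule: the Corporation provides
    matching funds equal to 50 percent of the nonfederal funds
    the grantee has raised."""
    return 0.50 * nonfederal_raised


def dusf_shares(package_cost: float, federal_share: float = 0.75) -> tuple:
    """Digital Universal Service Fund rule: the Corporation funds up
    to 75 percent of the equipment package cost; the grantee covers
    the remaining cost with nonfederal funds."""
    corporation = federal_share * package_cost
    grantee = package_cost - corporation
    return corporation, grantee


# Hypothetical example amounts:
# a station raising $200,000 in nonfederal funds under the DDF,
# and a $400,000 equipment package under the DUSF.
print(ddf_grant(200_000))    # Corporation's 50 percent match
print(dusf_shares(400_000))  # (Corporation share, grantee share)
```

Under these rules, the Corporation's exposure scales with the grantee's own fundraising in the first program, but with the fixed equipment package cost in the second.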
First, the Corporation took several months after receiving its digital funds to (1) convene consultation panels composed of licensees (or their designated representatives) to develop recommendations for the use of those funds and (2) obtain approval of the panels' recommendations by the Corporation's board. Second, the Corporation had to devise grant programs for the distribution of its digital appropriations. When the Corporation's board initially approved the use of the funds for transmission equipment in November 2001, the Corporation did not have any equipment-related grant programs in place. Due to its inexperience in this area, the Corporation contracted with PBS (which had staff with expertise in transmission technology) for assistance in developing and administering these programs. As a result, the first Digital Distribution Fund grants were not awarded until 9 months after the first digital appropriation was received by the Corporation in August 2001. With regard to the Digital Universal Service Fund, the administration contract between the Corporation and PBS and the equipment contracts negotiated between PBS and two manufacturers for low-power transmission equipment were not finalized until 2 months before the May 2003 construction deadline. Only 15 of the 43 stations that were awarded a Digital Universal Service Fund grant received their equipment package by the May deadline. The Corporation also had difficulties distributing its fiscal year 2003 digital appropriation of $48.4 million, of which $37.4 million was allocated for public television. Although the Corporation received all of its fiscal year 2003 funds by March 2003, the consultation panel process again took several months to develop recommendations for the use of these funds and obtain the approval of the Corporation's board. In July 2003, the panel recommended two phases of grant awards for these fiscal year 2003 funds, the first of which was to continue funding for licensees' digital transmission equipment.
The application period for this first phase extended from August to October 2003. Although 201 stations had filed for a 6-month extension to FCC’s May 2003 construction deadline, only 26 stations applied to the Corporation for a digital grant during this first phase. Of these 26 applicants, 23 stations received grants from the Corporation, totaling $7 million. None of the new grantees, however, received its funds or equipment package prior to the end of the 6-month extension period in November 2003. As of December 2003, $24 million of the Corporation’s fiscal year 2003 digital appropriation—more than two-thirds of the total fiscal year 2003 amount for television—remained unobligated, with 126 stations operating under a second 6-month extension for meeting FCC’s digital construction requirement. In a survey commissioned by the Corporation and PBS of licensees with stations that had not met the May 2003 deadline or had not previously applied for a Corporation grant, the most common reason given for not applying, or not planning to apply, for this phase of funding was that the stations had been able to secure funding from other sources. Survey respondents suggested that they would consider applying for future grant rounds of the Digital Distribution Fund if it awarded funding for transmission equipment upgrades from low to full power, digital master control facilities that control broadcast management, and studio and production equipment to create digital content. Because many of these licensees were able to secure funding from other sources, funding priorities for these licensees and for those that met the May 2003 deadline had shifted from transmission equipment to other digital transition needs not included in the scope of the grant programs. In our survey, we, too, found that licensees’ priorities for additional federal funding of the digital transition were in areas other than transmission equipment.
Only 14 percent of the respondents indicated that digital transmission equipment was their top priority for additional federal funding, and over half indicated that it was their lowest (see fig. 21). Digital master control, digital content, digital production equipment, and digital operating costs were all named more frequently as the highest priority. At the time we concluded our audit work in February 2004, Corporation officials indicated that applications were due in March and that a digital review panel was scheduled to meet at the end of that month to review the applications. Corporation officials also indicated that the digital consultation panel would meet in early March to provide guidance on allocating the $49.7 million made available to the Corporation in fiscal year 2004 appropriations for the digital transition. In addition to supporting licensees in constructing their digital transmission facilities, the Corporation and PBS have identified systemwide infrastructure improvements as important in maximizing the benefits of the digital transition. The development of digital content and production is also becoming more important as more public television stations become digital ready. Under the Communications Act, the Corporation is to assist in the establishment and development of an interconnection system to facilitate the distribution of public television service. The current interconnection system, which is managed by PBS under agreement with the Corporation, uses satellites to distribute PBS and other programming to stations and is scheduled for replacement by the time the current leases for satellite capacity expire in 2006. As proposed by the Corporation and PBS, a new system, called the “Next Generation Interconnection System,” would replace the current system with a digital one that distributes programming in real time and non-real time to licensees.
Licensees can then store these programs for later broadcast, which in turn allows PBS to become more efficient by broadcasting these programs to licensees once instead of multiple times. The Corporation and PBS have estimated that it will cost $177 million to replace the interconnection system. The Corporation has requested that the cost be covered by federal appropriations during fiscal years 2004 through 2006. The Corporation received an initial $10 million appropriation for fiscal year 2004 for this purpose. In addition, PBS is separately seeking funds from the Corporation for a project to provide enhancements to the new interconnection system. This effort, known as the Enhanced Interconnection Optimization Project, is designed to allow licensees and PBS to schedule and manage the digital broadcasting of public television programs through the use of automated channel operations and monitoring. According to PBS, this system will cost approximately $12 million to $15 million to implement at its facilities. PBS told us that approximately $8 million is still needed, half of which it is seeking from the Corporation. The individual stations will also need to implement the interconnection project at their ends. PBS has estimated that a typical station-side installation costs between $1 million and $1.2 million. The Corporation’s consultation panel for digital funds recommended in July 2003 that PBS receive $4.1 million for the project from the Corporation’s fiscal year 2003 digital transition funds. While some licensees noted that this project has potential to bring about substantial savings and improved operations for licensees, others expressed concerns about increased maintenance costs, stranded investments in digital master control equipment bought before the project was announced, and a lack of detailed information to assess the costs and usefulness of the project. 
For example, in our survey, about 25 percent of the licensees responding said that they have already acquired some types of digital equipment (master control, production, or storage) that are not fully compatible with the project, which may limit the capabilities and usefulness of the project to them. New equipment may need to be acquired in order to obtain the full benefits of the project. In response to concerns about the potential incompatibility of some licensees’ existing digital equipment with the project, the Corporation has conditioned the award of its $4.1 million grant to PBS on an independent review of the project. A working group—funded by the Corporation and composed of Corporation and PBS officials, as well as public television licensees—highlighted the need for digital content in a 2003 report, which stated that the digital transition provides public television with an opportunity to reposition itself to carry out its mission if it is willing to create digital services that are “more responsive to the needs of our constituents and cheaper, simpler, smaller, and more convenient to use.” Noting that 2 years’ advance time may be needed to plan, develop, and launch digital services, and that digital production costs are generally higher than the costs of creating analog programming, the Corporation has characterized the need for digital content and research as “even more pressing” due to the limited availability of past federal funding for the digital transition. Corporation officials told us that licensees and other national public television organizations, including the Corporation, are developing a systemwide strategic plan on the future of public television that includes the creation of digital content. As part of this planning, the Corporation is in discussions with PBS on the need to develop a new national programming plan to support digital content needs.
Many public television stakeholders have indicated a need for additional federal funds to support the digital transition and fully utilize the potential of digital television. Several licensees in our survey, however, suggested changes to some of the Corporation’s existing funding mechanisms to help manage such needs. Among the suggested changes were limiting the Corporation’s digital grants to one licensee in a market served by multiple licensees; offering grants to support shared operations, such as digital master control equipment, among public television stations in the same market; and eliminating duplication of public television stations in markets served by multiple licensees. However, several licensees in markets with multiple stations believe that they provide valuable services and unique programming to their communities. In addition, some public television stakeholders have observed that Corporation funds should be repositioned in order to achieve the benefits of the digital transition. According to these stakeholders, the Corporation should foster new collaborative services by supporting the provision of digital content, favoring alternative distribution platforms such as the Internet over the traditional medium of over-air broadcasting. These services include interactive Web sites that provide audio and video content on subjects such as history, science, and literature. Unlike over-air broadcasting, interactive Web sites would allow people to access this content regardless of their location. Stakeholders have noted that such services would encourage collaboration among licensees without diminishing their local presence and that this approach may help public television strengthen its mission to provide high-quality noncommercial programming and services. A long-standing issue for the public television community is how best to distribute the Corporation’s funds among local station operations, national programming, and infrastructure support.
Most licensees responding to our survey supported the existing statutory allocation of the Corporation’s television funds between licensees and national programming and were generally satisfied with the Corporation’s process for periodically reviewing the eligibility criteria for distributing funds through Community Service Grants. In addition, most licensees expressed their support for the Corporation’s continued funding for PBS’s National Program Service, which nearly all see as helping them meet their missions for providing quality children’s and prime-time programming. As for local programming, most licensees indicated that the amount of local programming they produced was not sufficient to meet their communities’ needs, largely due to their limited financial resources. The Corporation’s approach for funding its Television Future Fund program is a concern for many licensees. As our survey shows, only 30 percent of respondents agreed with the Corporation’s current approach of using funds designated for distribution among licensees to support Television Future Fund projects. The Corporation, as informed by counsel, contends that it has the authority to use these funds to support the Television Future Fund program. It is our view that the Corporation may not take a portion of the funds designated by the Congress for distribution among public television licensees, pool them with System Support funds, and use them to make competitive grants only to applicants submitting project proposals acceptable to the Corporation after review and recommendation by an advisory panel. Although our legal analysis focused on the Television Future Fund program as it existed prior to the end of fiscal year 2003, we note that under the revised program, the Corporation is still aggregating the funds and using them for projects that benefit the entire system rather than giving the monies directly to the individual licensees. 
Moreover, it appears that the majority of the funds will be going to vendors rather than the stations. Accordingly, we continue to question whether the Corporation has the authority to utilize in this fashion the $10.1 million of the $18.3 million currently in the Television Future Fund account that came from funds designated for distribution among licensees. The Corporation’s support for the digital transition is another area of concern. As shown by our survey, the priorities of most licensees in 2003 shifted beyond the digital transmission equipment supported by Corporation grants. This contributed to a low application rate for the Corporation’s digital grants in the latter half of that year and a carryover of $24 million in digital transition funds into calendar year 2004. While the Corporation is broadening the scope of its digital transition grants in 2004, the licensees’ priorities for digital production equipment and digital content still are not included in the Corporation’s digital transition funding. We recommend that the Corporation for Public Broadcasting take the following two actions regarding the Television Future Fund and its digital transition funds: Before making further Television Future Fund awards or expending any funds in the Television Future Fund account, the Corporation should request specific statutory authority to do so, if it intends to continue using funds that were designated for distribution among licensees. Should this specific authority not be obtained, the Corporation should return to the licensees such funds remaining in the Television Future Fund account that came from the funds designated for distribution among licensees. The Corporation should broaden the scope of its digital transition funding support to include digital production equipment and digital content. We provided a draft of this report to the Corporation for Public Broadcasting and to the Public Broadcasting Service for their review and comments. 
The Corporation agreed with our recommendation to broaden the scope of its digital funding to include production equipment and content, consistent with congressional directives and station needs after consultation with licensees or their designated representatives. The Corporation stated that it recognizes that stations are at various stages in the conversion process and that all station needs are being given careful consideration in consultations on the distribution of fiscal year 2004 digital transition funds. The Corporation did not agree with our recommendation that the Corporation should request specific statutory authority before making further Television Future Fund awards or expending any funds in the Television Future Fund account. The Corporation’s comments include a legal memorandum from its outside counsel, which concludes that the Television Future Fund is fully consistent with the Communications Act of 1934, as amended. For the most part, the legal memorandum raises the same arguments that we have addressed in our opinion. However, one argument raised for the first time involves the “doctrine of ratification.” The Corporation cites cases holding that when the Congress reenacts, without change, statutory terms that have been given a consistent judicial or administrative interpretation, the Congress has expressed an intention to adopt that interpretation. The Corporation uses this doctrine to support its contention that the Congress has consistently replenished funds designated for distribution among licensees knowing that a portion of these funds is being used for Future Fund projects. Thus, the Corporation contends that the Congress has, in essence, ratified by appropriation the Corporation’s interpretation of the statute.
However, as recognized by GAO opinions summarizing the test that courts have used to find ratification by appropriation, three factors generally must be present to conclude that the Congress, through the appropriations process, has ratified agency action. First, the agency takes the action pursuant to at least arguable authority; second, the Congress has specific knowledge of the facts; and third, the appropriation of funds clearly bestows the claimed authority. None of these factors is present here. The Corporation’s comments and our response to points raised by its attached legal memorandum are included in appendix VII. In its comments, PBS described its processes for generating program input from member stations and said that it will seek counsel from the Content Policy Committee of its board on how best to improve its systems for securing member input. PBS also provided additional information to clarify the respective funding needs of the Enhanced Interconnection Optimization Project and the Next Generation Interconnection System. We also provided a draft of the report to FCC and have incorporated FCC’s technical comments where appropriate. If the Congress supports the concept of using funds that were designated for distribution among licensees to finance the Television Future Fund program, it should provide the Corporation with the authority to use the funds for this purpose. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the appropriate congressional committees, the President and Chief Executive Officer of the Corporation for Public Broadcasting, the President and Chief Executive Officer of the Public Broadcasting Service, the Chairman of the Federal Communications Commission, and others who are interested. We also will make copies available to others who request them.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions concerning this report, please contact me on (202) 512-2834 or at [email protected]. Key contacts and major contributors to this report are listed in appendix IX. Our objectives were to review the Corporation’s activities and obtain the views of public television station officials regarding: (1) the statutory allocations for federal funding of public television; (2) the distribution of funds by the Corporation through its Community Service Grant and Television Future Fund programs, including a legal analysis of whether the funding of the Television Future Fund program is consistent with the Corporation’s underlying statutory authority; (3) the distribution of funds by the Corporation for PBS’s National Program Service and for local programming; and (4) Corporation funding to assist public television stations in their transition to digital technologies and services. We also reviewed the statutory and regulatory requirements, system policies and guidance, and licensees’ views on underwriting acknowledgments. To respond to these objectives, we gathered information from a variety of sources, including a survey of all public television licensees receiving funds from the Corporation for Public Broadcasting. To respond to the first and second objectives, we reviewed provisions of the Communications Act, as well as documents and records used by the Corporation to implement and administer programs supporting public television stations. We also interviewed officials of the Corporation, PBS, and the Association of Public Television Stations, a nonprofit organization whose members include nearly all of the licensees of public television stations. 
To respond to the third objective on Corporation funding for national programming, we reviewed provisions of the Communications Act and documentation on funding for national programming obtained from the Corporation, PBS, and the Independent Television Service, a nonprofit corporation that receives federal support from the Corporation for distribution to independent public television producers. We also interviewed officials from all of these organizations, the Association of Public Television Stations, and two additional distributors of national programming that do not receive funding from the Corporation—American Public Television and the National Educational Telecommunications Association. To further our understanding of public television’s progress in the digital transition, we requested and received data from the FCC on its June 2003 survey of public television licensees that are PBS-affiliates in the top 100 television markets. For our objective on underwriting acknowledgments, we reviewed statutory and regulatory documents and interviewed officials of FCC, which enforces acknowledgment requirements, and obtained guidance provided by and interviewed officials of the primary national programming distributors—PBS, American Public Television, and the National Educational Telecommunications Association. We also reviewed the legal opinions of the Corporation’s outside counsels as part of our legal analysis to determine whether the Corporation’s approach to funding the Television Future Fund is consistent with the governing statute. Our legal review focused on the program as it existed prior to the end of fiscal year 2003. We responded to all of these objectives by conducting interviews with 16 licensees of public television stations and deploying a Web-based survey of public television licensees.
As the scope of our work was limited to an evaluation of the Corporation’s activities, we only surveyed entities licensed by FCC to operate one or more public television stations that received funds from the Corporation as of mid-August 2003. We identified the population of public television licensees from the Public Broadcasting Directory published by the Corporation and verified this information with a database provided by the Association of Public Television Stations, as well as with FCC’s database of public television licensees. This information included names, addresses, and other contact information of public television licensees, as well as licensee type and size. We acquired data on public television licensee market size and station revenues from an online Station Activities and Benchmarking Survey and Station Grant Making System, both developed by the Corporation and to which all recipients of Corporation grants contribute data. To assess the reliability of this licensee data, we reviewed these documents and discussed the data with knowledgeable agency officials. As a result, we determined that the data were sufficiently reliable for the purposes of this report. We surveyed 178 licensees and subsequently excluded the surveys of two licensees: (1) one licensee who did not meet the aforementioned criteria and (2) another licensee who holds two licenses, but who completed only one survey rather than two. Our resulting population consisted of 176 licensees. To develop our survey, we interviewed officials at the Corporation, PBS, the Association of Public Television Stations, the Independent Television Service, American Public Television, the National Educational Telecommunications Association, and several licensees of public television stations. We also conducted an interview with an official of and obtained documents from Citizens for Independent Public Broadcasting, a national membership organization dedicated to addressing public broadcasting issues. 
We then conducted pretests with seven public television licensees to help further refine our questions, develop new questions, clarify any ambiguous portions of the survey, and identify any potentially biased questions. These pretests were conducted in person and by telephone with licensees of various types, sizes, and regional locations across the country. We began our Web-based survey on August 21, 2003, and included all usable responses received as of September 22, 2003. Log-in information to the Web survey was e-mailed to officials of public television licensees, which included general managers and presidents. We sent two follow-up e-mails, and after the survey was online for 3 weeks, we attempted to contact all those who had not logged into the survey. The Corporation and the Association of Public Television Stations coordinated with us to encourage station licensees to complete the survey. Of the population of 176 public television licensees, we received 149 complete surveys, for an overall response rate of 85 percent. However, the number of responses to individual questions may be fewer than 149, depending upon how many licensees were eligible to respond or chose to respond to a particular question. We compared the characteristics of respondents with those of nonrespondents to provide a basis for adjusting survey responses. Distributions by type of licensee (community, local authority, state, university) and numbers of stations operated by licensees were not significantly different. Licensees operating large stations were somewhat more likely to respond and those operating smaller stations were somewhat less likely to respond, but the differences were not significant.
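The response-rate arithmetic described above can be reproduced with a short calculation. This is an illustrative sketch only; the counts come from the survey description in this methodology discussion, and the variable names are our own rather than anything used in the actual tabulation:

```python
# Survey counts as reported in the methodology discussion.
surveyed = 178            # licensees initially surveyed
excluded = 2              # one ineligible licensee; one duplicate-license case
population = surveyed - excluded
complete_responses = 149

response_rate = complete_responses / population
print(population)               # 176 eligible licensees
print(f"{response_rate:.0%}")   # 85% overall response rate
```

Note that item-level counts can fall below 149, since not every licensee was eligible for, or chose to answer, every question; per-question percentages are therefore computed against each question's own respondent base.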
The following data are used only as background information in the report and therefore were not verified for reliability: (1) the digital television cost estimate developed by the Corporation and PBS; (2) the sources and percentages of public television revenue provided by the Corporation; (3) the number of Television Future Fund and digital television grants awarded by category; and (4) the distribution of funds by the Corporation for programming. Our review was performed from April 2003 through February 2004 in accordance with generally accepted government auditing standards. On the basis of statutory provisions and the receipt of an annual federal appropriation from the Congress, the Corporation for Public Broadcasting makes an annual Community Service Grant award to each eligible licensee of one or more noncommercial educational public television station(s). Figure 22 summarizes the factors upon which funds are awarded through each of the three component grants of a Community Service Grant. Nine other eligibility criteria for the base grant are specified by the Corporation, including licensees’ compliance with regulations on equal opportunity employment, Internal Revenue Service requirements, provisions of the Communications Act, and regulations on the use and control of donor names and lists. The Corporation for Public Broadcasting (the Corporation) established the Television Future Fund in 1995 for the purpose of investing in projects that would reduce costs, facilitate collaboration, and increase revenue across the public television system. The Television Future Fund is funded, in part, by monies designated by the Congress to be distributed among public television licensees. As part of our review of the Corporation, we were asked to determine the legality of this funding practice.
Specifically, the issue is whether the Corporation may use funds designated by the Congress for distribution among public television licensees to support a competitive grant program, the Television Future Fund program. As explained more fully below, the Corporation’s funding and distribution of grants under the Television Future Fund program are not in accord with the underlying statutory authority under which the Corporation operates. The Congress established the Corporation in 1967 as a nonprofit corporation to facilitate the development of public radio and television broadcasting. 47 U.S.C. §396. To ensure insulation from government control or influence over the expenditure of federal funds, the Congress provides funds directly to the Corporation. Although not a federal agency, the Corporation receives an annual appropriation from the Congress, which is its primary source of funding and is deposited into the Public Broadcasting Fund. The fiscal year 2004 appropriation was $380 million. In turn, the Corporation supports local television and radio stations, programming, and improvements to the public broadcasting system as a whole. According to the Corporation, its support represents approximately 15 percent of public broadcasting’s revenues. Other support for the public broadcasting system comes from such sources as memberships, businesses, colleges and universities, and state and local governments. The Corporation funds more than 350 locally operated public television stations across the country. Prior to the establishment of the Television Future Fund, the Corporation distributed available monies among licensees of public television stations through the Community Service Grant mechanism. Community Service Grants (CSG) are unrestricted general operating grants provided by the Corporation directly to qualified public television stations according to a mathematical formula. As required by 47 U.S.C.
§396(k)(6)(B), the Corporation established eligibility criteria and a formula for distributing these funds and has periodically reviewed them in consultation with the public television station community. All qualified licensees receive a CSG, although the amount varies. A full-power station operating under a noncommercial, educational Federal Communications Commission (FCC) license qualifies for a CSG if it meets minimum requirements including a minimum level of nonfederal financial support, a minimum broadcast schedule, and bookkeeping and programming standards. The Corporation established the Television Future Fund in 1995. At that time, the Corporation had growing concerns about declining federal support, as well as diminished revenues from other sources. The Corporation saw a need to establish and maintain a pool of money, aggregating funds from two different sources, to fund projects to address systemwide concerns. According to the 1995 Public Television Issues and Policies Task Force, the Future Fund was established to provide seed capital or short-term financing for projects that can significantly reduce costs, increase efficiency, provide economies of scale, or generate incremental gains in membership, underwriting, or other sources of income; fund station proposals to explore opportunities to achieve new operating efficiencies through collaborative efforts, partnerships, joint operating agreements, consolidations, and other arrangements resulting in significant annual savings; and fund extraordinary efforts and new initiatives to raise nonfederal income, in anticipation of reduced federal funding, with a goal of stimulating an increase in annual nonfederal revenue. Accordingly, the Corporation’s board, after what it terms extensive consultation with the public television station community, approved the funding of the Television Future Fund using monies from the system support and the CSG pools.
The Corporation views Television Future Fund awards as a special category of grant that is neither exclusively a CSG grant nor a System Support expenditure. The Corporation notes that while CSGs typically are utilized only as determined by an individual station recipient for its own benefit, Television Future Fund grants can be used as determined or directed by more than one station for the benefit of multiple stations and, potentially, for the benefit of public television as a whole. Under procedures in place prior to the end of fiscal year 2003, the Corporation solicited interest in Future Fund grants by issuing a Request for Proposal (RFP) and would evaluate applicants for grants on the basis of RFP funding criteria. Not all applicants received funding. Through the Television Future Fund, the Corporation has made 204 grant awards, of which 39 percent have gone to stations, 30 percent have gone to stations paired with consultants, and 31 percent have gone to third-party awardees. The nature of the projects funded with Television Future Fund grants has varied greatly and included Web site experiments and marketing projects. The grant amounts have varied from a few thousand dollars to hundreds of thousands of dollars. From its inception, the Corporation always envisioned that monies from two sources—the System Support and CSG pools—would support the Television Future Fund program. We are not aware of any concerns that have been raised about the Corporation’s use of System Support funds to support the Television Future Fund. Because the statute provides that System Support monies may be used, if available funding levels permit, for projects and activities that enhance public broadcasting, the Corporation is clearly permitted to so use such funds. 47 U.S.C. §396(k)(3)(A)(i)(II). The primary question concerning the legality of the Television Future Fund program involves the use of CSG funds.
Specifically, the issue is whether the Corporation may use CSG funds to support the Television Future Fund, a competitive grant program that awards grants on the basis of selective, project-specific criteria. As explained more fully below, we have determined that the statute does not authorize the Corporation to use these funds in this manner. “The balance of the portion reserved for television stations . . . shall be distributed to licensees and permittees of such stations in accordance with eligibility criteria (which the Corporation shall review periodically in consultation with public . . . television licensees or permittees, or their designated representatives) that promote the public interest in public broadcasting, and on the basis of a formula designed to— i. provide for the financial needs and requirements of stations in relation to the communities and audiences such stations undertake to serve; ii. maintain existing, and stimulate new, sources of nonfederal financial support for stations by providing incentives for increases in such support . . . .” 47 U.S.C. §396(k)(6)(B). (Emphasis added.) The next paragraph of the statute further provides that funds distributed through the above mechanism “may be used at the discretion of the recipient for purposes related primarily to the production or acquisition of programming.” 47 U.S.C. §396(k)(7). (Emphasis added.) In our view, subsection 396(k)(6)(B) does not authorize the Corporation to establish a competitive grant program using project-focused criteria funded in part with CSG funds. Although we often defer to an agency’s interpretation of a statute it is charged to administer, in this instance, the Corporation’s interpretation of its authority under the statute is consistent neither with the statute’s language nor with the Congress’s policy choice favoring local, not Corporation, control of the expenditure of CSG funds.
Moreover, as implemented by the Corporation, some Television Future Fund grants have been awarded to nonstation entities. This is in direct contravention of paragraph (k)(6)(B)’s direction that these funds be distributed to eligible licensees and permittees of public television stations. The difference between our view and that of the Corporation centers on whether the “eligibility criteria” the Corporation may adopt include project-focused criteria that would govern the competitive award of funds for a particular project or only station-based criteria that distinguish among public television licensees on the basis of such factors as financial needs, audience satisfaction, or fundraising effectiveness. According to Corporation officials, the term “eligibility criteria” is broad enough to allow them, in consultation with the station community, to adopt not only station “qualification” criteria but also “selective” project criteria. We disagree. There are, in our view, several reasons why the Congress did not intend the Corporation’s authority to establish “eligibility criteria,” and the formula under which CSG funds are disbursed, to mean that the Corporation may take a portion of CSG funds, pool them with System Support funds, and use them to make competitive grants only to applicants submitting project proposals acceptable to the Corporation after review and recommendation by an advisory panel. First, the language of subsection 396(k)(6)(B) does not readily support such a reading. Second, the statutory construct governing the Corporation’s distribution of funds indicates that the Congress specifically identified a limited source of funding for Corporation-approved project-specific grants, which by necessary implication is the exclusive source of funding for such grants.
And third, the Television Future Fund program runs contrary to the Congress’s expressed policy favoring local, not Corporation, control of the expenditure of these discretionary funds. These reasons for our conclusions are discussed more fully below. Paragraph (6)(B) first provides for a basic grant to each licensee and permittee of a public television station that is on the air. 47 U.S.C. §396(k)(6)(B). Second, paragraph (6)(B) directs the balance of the portion reserved for public television stations to “be distributed to licensees and permittees of stations in accordance with eligibility criteria . . . that promote the public interest in public broadcasting.” Id. In addition, the distribution of such balance shall be “on the basis of a formula designed to” honor station-focused considerations such as their “financial needs and requirements . . . in relation to the communities and audiences they serve” or the level of, and increases in, nonfederal financial support received by the stations. The point of paragraph (6)(B) is to direct the Corporation’s distribution of CSG funds to the licensees and permittees of public television stations. While paragraph (6)(B) provides only that the “eligibility criteria” are to “promote the public interest in public broadcasting,” the Congress nonetheless directed the distribution of such funds on the basis of a formula with a pronounced focus on station-based considerations. Hence, in the context of paragraph (6)(B)’s distribution mechanism, we believe the phrase “eligibility criteria . . . that promote the public interest in public broadcasting” can best be read to mean criteria focusing on the eligibility of licensees and permittees of public television stations, not project eligibility criteria. In the Public Telecommunications Act of 1988, the Congress identified System Support funds as a source for “projects and activities that will enhance public broadcasting.” Public Telecommunications Act of 1988, Pub. L. No. 100-626, 102 Stat. 3207 (1988).
As stated above, by identifying a specific source of funds to be used for project-based grants, the legislative language suggests that other funds would not be used for the same purpose. The legislative history supports the view that the Congress anticipated that these funds would be used for systemwide projects that benefit the public broadcasting community. The statute divides roles between the Corporation, which distributes CSG funds to licensees, and the licensees, which have “discretion” over the use of the funds. The Corporation’s creation of a competitive grant program where it decides not only who receives a grant but also, more importantly, the specific purposes for which the grant funds can be used alters the fundamental balance of discretion over the use of the funds. Under the Corporation’s process, in effect prior to the end of fiscal year 2003, the Request for Proposal Submission Guidelines and Application (RFP) establishes the funding initiatives that guide awards for project support. However, the Corporation reserves the right to fund “otherwise outstanding proposals based on their individual merits, though they may not necessarily respond to these priorities but demonstrate a clear response to Fund objectives.” Fiscal Year 2002 RFP. By setting forth what recipients could spend funds on, the Corporation transferred discretionary authority from each individual licensee to itself. Faced with the statute’s clear division of roles, the Corporation’s outside counsel attempts to justify, first, the Corporation’s use of CSG funds to make project-specific grants and, second, the position that Television Future Fund grants need not be designated primarily for programming. In our view, the outside counsel’s conclusion that the Corporation, in consultation with the stations, “may” spend funds on projects that will be financially beneficial to the stations and that will stimulate nonfederal funding is based on two unsupported assumptions. First, the outside counsel reads paragraph (k)(6)(B) as providing the Corporation with authority to “spend” CSG funds.
Second, the outside counsel contends that the goals of the formula design are in essence mandates on how the CSG funds are to be used. We see no support for either proposition. Paragraph (k)(6)(B) directs the Corporation on how CSG funds are to be distributed, not on how they are to be spent. The goals of the formula design also provide guidance on what criteria the Corporation should consider in distributing funds, but they do not constrain a recipient’s use of CSG funds. Moreover, although the Corporation’s outside counsel reads paragraph (k)(7) in terms of its permissive direction, this reading does not recognize that the paragraph emphasizes the discretion of the recipient to use CSG funds for purposes related primarily to programming, i.e., for purposes chosen by the recipient. (Emphasis added.) Under the Corporation’s guidelines, Television Future Fund grants may be awarded to “any person, foundation, institution, partnership, corporation, or other business whose project is expressly intended to benefit public television.” Fiscal Year 2002 RFP. Thus, some CSG funds have been awarded to entities other than licensees or permittees of public television stations. According to the Corporation, so long as the purpose of the grants is to benefit public television stations, the award of grants to consultants or other third-party entities is consistent with the statute. Since consultants and stations often work together to generate project proposals that are reviewed by a panel representing a diverse group of stations, the Corporation’s outside counsel concludes that the statutory purposes are being fulfilled, regardless of whose name appears as payee on the Corporation check. Letter from Stephen A. Weiswasser, July 9, 2003. The difficulty with this approach is that paragraph (6)(B) directs the Corporation to distribute the balance of funds reserved for television stations, after deduction of the basic grant, “to licensees of such stations.” 47 U.S.C.
§396(k)(3)(A)(ii)(I) (Seventy-five percent of the 75 percent remaining after deduction of administrative and system support funds “shall be available for distribution among the licensees and permittees of public television stations pursuant to paragraph (6)(B).”) Accordingly, in our view, the Corporation may not distribute CSG funds to a nonstation entity (other than one acting as the agent for a station or group of stations). For the reasons noted above, we find that the Corporation’s funding and distribution of the Television Future Fund program is not consistent with the underlying statutory authority under which the Corporation operates. Licensees and others in the public television community maintain that the ability of licensees to provide a full range of digital services depends, in part, on regulatory issues related to digital carriage by cable and satellite system providers. Many in the public television community believe that how mandatory carriage obligations are applied to their digital signal is at the heart of public television’s future. Cable systems are required to carry local noncommercial educational television stations based upon a cable system’s number of usable activated channels. Satellite carriers are required to carry all nonduplicative noncommercial educational television stations in markets where they provide local-into-local service. These mandatory carriage requirements are often referred to as “must carry” obligations. Two key issues concerning how the mandatory carriage obligations apply in the digital arena are of importance to the public television community. The first is whether the “must carry” requirements apply to both the digital and analog signal during the transition period. In other words, would a cable provider be required to carry both the analog and digital signals until the analog spectrum is returned?
In a January 2001 Order concerning the carriage of digital television broadcast signals by cable operators, FCC tentatively concluded, based on the existing record evidence, that during the transition, a dual must-carry requirement would burden the cable operator’s First Amendment interests more than is necessary to further the government’s interests. In this regard, the record was found insufficient to demonstrate the degree of harm that broadcasters, including public television stations, would suffer without carriage of both signals. In order to ensure that it had sufficient evidence to fully evaluate this issue, FCC issued a Further Notice of Proposed Rulemaking. The second issue concerns multicasting: unlike analog stations, digital television stations can operate on a much more flexible basis that could allow for multiple streams, or “multicasting,” of standard definition digital television programs. Under the statute, a cable operator is required to carry in its entirety the “primary video” of the commercial broadcast station. According to FCC, largely parallel provisions are contained in the statute relating to carriage of noncommercial stations. Although FCC recognized that the term “primary video” was susceptible to different interpretations, FCC concluded that, based on the available record, the term “primary video” means a single programming stream and other program-related content. In its Further Notice of Proposed Rulemaking, FCC sought comment on the appropriate parameters for “program-related” in the digital context. FCC also raised questions concerning the applicability of the rules and policies it adopted in the above cited Order to satellite carriers. Public television stations and other broadcasters have asked FCC to reconsider its ruling, and a decision on this request is pending.
As our own survey of licensees shows, there is a very strong consensus among licensees that the lack of dual carriage of analog and digital signals by cable companies, as well as the lack of cable carriage of the entire digital over-the-air stream, such as multicast offerings, are seen as factors impeding public television’s digital transition. Additionally, there is a strong consensus that lack of carriage of local stations’ digital signals by direct broadcast satellite (e.g., DISH Network, DIRECTV) would produce similarly negative results (see fig. 23). Opponents of expanded carriage obligations, in contrast, have characterized them as freedom of speech restrictions and a governmental limitation on cable television providers’ right to decide what services they provide. Absent changes to FCC’s ruling on these issues, some in the public television community have taken the position that they “must convince” cable and satellite providers that the digital services offered by public television are valuable additions for their customers and, therefore, should be carried by them. One of the distinguishing features of public television is, by definition, its noncommercial character. Unlike commercial television stations, public television stations are prohibited from airing advertisements. However, public television stations are permitted to acknowledge station support and, without interrupting regular programming, may acknowledge underwriters on the air. Dating back to the initial decision to reserve spectrum for noncommercial educational broadcast television, FCC rejected proposals to allow noncommercial educational licensees the ability to generate revenues through advertising sales and frequency sharing with commercial broadcasters. In 1981, as part of a “major” reevaluation of the noncommercial educational broadcast service, FCC reaffirmed its rejection of advertising on public television, concluding that advertiser-supported programming of any kind could harm the service.
FCC’s 1981 policy statement on the nature of public broadcasting states that the Commission’s interest in creating a noncommercial service in 1951 was to remove the programming decisions of public broadcasters from the normal kinds of market pressures faced by commercial broadcasters. FCC noted, however, that acknowledgments of funders are “proper” and possibly necessary to ensure continued funding from such sources. The Congress also established the Temporary Commission on Alternative Financing for Public Telecommunications to examine options for financing public broadcasting, and to conduct demonstrations of limited advertising for the purpose of “reduc[ing] the uncertainty about the advantages and disadvantages accompanying a public broadcast station’s use of limited commercial advertising or expanded underwriting credits.” In its 1983 Report to the Congress, the Temporary Commission concluded that potential revenues from advertising were limited in scope and that the avoidance of significant risks to public broadcasting could not be ensured. In addition, it recommended that the Congress continue to provide federal funding for public broadcasting until or unless adequate alternative financing becomes available. Under current law, the Communications Act defines a “noncommercial educational broadcast station” and “public broadcast station” as a television or radio broadcast station that, under the rules and regulations of the Commission in effect on November 2, 1978, is eligible to be licensed as a station that is “owned and operated by a public agency or nonprofit private foundation, corporation or association” or “is owned and operated by a municipality and which transmits only noncommercial programs for educational purposes.” For our purposes here, the act defines “advertisements” as any message or other programming material that is broadcast or otherwise transmitted “in exchange for any remuneration” and is intended to “promote any service, facility, or product” of for-profit entities.
As noted above, the act permits public broadcasting stations to provide facilities and services for remuneration so long as those uses do not interfere with stations’ provision of public telecommunications services; the act also prohibits stations from making their facilities “available to any person for the broadcasting of any advertisement.” Under FCC rules, stations may acknowledge contributors on the air for identification purposes only. Such acknowledgments may not promote the contributors’ products, services, or business, and may not contain comparative or qualitative descriptions, price information, calls to action, or inducements to buy, sell, rent, or lease. No limitation, however, was adopted on the length of acknowledgments. Recognizing that it may be difficult to distinguish between language that “promotes” and language that merely “identifies” an underwriter, broadcasters must make “reasonable good faith judgments” to exclude language or visual elements in their acknowledgments that promote the contributors’ products, services, or business. Consistent with the identification of underwriters, FCC has determined that acknowledgments may include, in addition to the underwriter’s name, the following identifying information: logo-grams or slogans which identify and do not promote, location information and telephone numbers, value neutral descriptions of a product line or service, and/or brand and trade names and product or service listings. According to FCC, enforcement primarily occurs through self-policing by licensees of public television stations and also by the Commission’s response to complaints. For the period from January 2000 through early February 2004, FCC had 43 complaint cases. Thirteen of the complaints were denied or dismissed, 17 complaints resulted in admonishments or cautions, and 2 resulted in notices of apparent liability. Eleven others were under investigation. PBS has also adopted underwriting guidelines that govern how funders of PBS-distributed programs may be identified on air.
The guidelines governing the acceptance of program funding from third parties are intended, PBS states, to ensure that editorial control of programming remains in the hands of program producers, that funding arrangements will not create the perception that editorial control has been exercised by someone other than the producer, and that the noncommercial character of public broadcasting is protected and preserved. PBS guidelines also specify that the maximum duration for all underwriter acknowledgments may not exceed 60 seconds and generally that the maximum duration for a single underwriter not exceed 15 seconds. Other national distributors of public television programming, such as American Public Television and the National Educational Telecommunications Association, also have guidelines with similar acknowledgment length limitations. The PBS Board adopted an exception to its guidelines in February 2003. As modified, the maximum duration for one underwriter may not exceed 30 seconds within a 60-second maximum interval for all acknowledgments. This exception applies only to underwriters that contribute $2.5 million or more per year for the production of PBS’s prime time programming and the NewsHour with Jim Lehrer. In our survey of licensees, we asked several questions related to the airing of 30-second underwriting acknowledgments by licensees themselves, not those aired as part of PBS programming. The percentage of licensees that said they are currently airing 30-second acknowledgments (41 percent) was equal to the percentage of licensees that said they neither air, nor plan to air, 30-second underwriting acknowledgments. An additional 9 percent of the licensees responded that they intend to air 30-second acknowledgments in the future. Figure 24 illustrates the responses of licensees to this question. Of the respondents who told us that they are currently airing 30-second acknowledgments, the earliest date provided for the first airing of such acknowledgments was 1982.
We also asked licensees who currently air or plan to air 30-second acknowledgments to prioritize the reasons for such decisions. For both groups of licensees, the highest priority identified was to attract new underwriters: 56 percent of those that already air 30-second acknowledgments and 69 percent of those that plan to air 30-second acknowledgments. For both groups, maintaining revenues from existing underwriters was the second most frequently identified top priority. Only 5 percent of those that currently air such acknowledgments and 8 percent of those that plan to air such acknowledgments identified increasing revenues from existing underwriters as their highest priority. These responses are illustrated in figure 25. In response to our question as to whether licensees would favor or oppose a federal requirement that limits the length of underwriting acknowledgments, 71 percent said they oppose a requirement, and 22 percent said they favor a federal requirement (see fig. 26). Licensees opposing such a requirement frequently cited the need for flexibility as to the length of acknowledgments in order to attract underwriting support and to further the mission of public television. Our survey of public television licensees consisted of objective questions and the option to include narrative comments in each section of the survey. The aggregate results of objective questions are presented below. We received completed surveys from 149 of 176 licensees, an overall response rate of 85 percent. The number of respondents answering individual questions may be lower, however, depending on the number of licensees who were eligible to answer a particular question or who chose to do so. Each question indicates the number of licensees responding to it.
Q1. Do you think the current 75% / 25% allocation of the federal funds supporting public television should remain the same or be changed? (Response options: allocation should remain the same; allocation should be changed; don't know.)
Q2.
Please provide the reasons for your answer and, if you think the allocation should be changed, describe what the allocation should be.
Q3. Were you aware of the consultation process that was conducted in 2001 to review the eligibility criteria for Community Service Grants? (Response options included: I was not associated with a station during the 2001 consultation process; don't know.)
Q4. During the 2001 consultation process, to what extent did CPB solicit input from your station(s) on the Community Service Grant eligibility criteria? (Response options included: to a little extent; not at all; don't know.)
Q6. To what extent do you think CPB considered input from your station(s) on the Community Service Grant eligibility criteria? (Response options included: to a little extent; not at all; don't know.)
Q7. Overall, are you basically satisfied with the process used by CPB to periodically review the eligibility criteria for Community Service Grants or do you think changes are needed? (Response options included: substantial changes are needed; don't know.)
Q8. Please explain what changes you think are needed.
Q9. To what extent do you know about the outcomes or findings of CPB Television Future Fund projects? (Response options included: to a little extent; not at all; don't know.)
Q10a. Please describe other ways, if any, you have learned about outcomes or findings of CPB Television Future Fund projects.
Q11. Have the outcomes or findings of any CPB Television Future Fund project provided your station(s) with practical methods for either reducing costs or enhancing revenues? (Response options included: no; don't know.)
Q11a. If you answered yes to either above, please provide examples or the name(s) of one or more project(s).
I prefer using only the System Support account as an alternate approach of funding the Television Future Fund.
I prefer using other sources of funds as an alternate approach of funding the Television Future Fund (please describe below). CPB should not fund the Television Future Fund. Don't know.
Q13. Please provide the reasons for your answer to Question 12.
Q14. To what extent do the children's programs offered by PBS's National Program Service help you to meet the mission of your station(s)? (Response options included: to a little extent; not at all; don't know; my station is not a member of PBS.)
Q15. Please provide the reasons for your answer to Question 14.
Q16. To what extent do the prime-time programs offered by PBS's National Program Service help you to meet the mission of your station(s)? (Response options included: to a little extent; not at all; don't know.)
Q18. To what extent do the children's programs offered by PBS's National Program Service help you to build local underwriting and membership support? (Response options included: to a little extent; not at all; don't know.)
Q19. Please provide the reasons for your answer to Question 18.
Q20. To what extent do the prime-time programs offered by PBS's National Program Service help you to build local underwriting and membership support? (Response options included: to a little extent; not at all; don't know.)
Q21. Please provide the reasons for your answer to Question 20.
Q22. Do you believe that changes are needed to the processes involved in selecting programming for PBS's National Program Service? (Response options included: don't know.)
Q24. Should CPB continue to provide direct funding to support the PBS National Program Service (as it exists today)? (Response options included: don't know.)
Q25. Please provide the reasons for your answer to Question 24.
Q26. Is the amount of local programming that you produce sufficient to meet the needs of your community? (Response options: yes, the amount of local programming is sufficient to meet the needs of our community; no, the amount of local programming is not sufficient to meet the needs of our community.)
Q27. Please provide the reasons for your answer to Question 26.
Q28. In addition to CPB's current statutory authority to support the production of national programming, should CPB have explicit statutory authority to award station grants for the production of local programming? (Response options included: no; don't know.)
Q29. Assuming CPB's statutory authority to award station grants for local programming would require the use of funds that currently support national programming, would you still favor this authority? (Response options included: no; don't know.)
Q31. In addition to or in conjunction with television broadcasting, do you currently provide each of the following local services to your community? (Response options: yes; no; don't know.)
a. Services to support pre-school through 12th grade education
c. Services to support workforce training, professional development, and/or continuing education
d. Television program-related outreach (e.g., additional program-related material on station's own website, sponsoring workshops and discussion groups about programs, community partnerships, PBS toolkits)
e. Services to support local, state, and/or federal government agencies (e.g., National Weather Service, Homeland Security)
Q31a. Please describe other services, if any, you provide to your community in addition to or in conjunction with television broadcasting.
Q32. What types of services does (at least one of) your station(s) currently provide, or plan to provide after transitioning to digital? (Response options: currently provide; plan to provide; don't provide and don't plan to provide; don't know.)
Q33. Do you currently provide, or are you likely to provide after transitioning to digital, revenue-generating ancillary and supplementary non-broadcast services to nonprofit entities? (Response options included: don't know.)
Q34.
Do you currently provide, or are you likely to provide after transitioning to digital, revenue-generating ancillary and supplementary non-broadcast services to for-profit entities? (Response options included: don't know.)
Q35. Were you aware of the consultation process conducted by CPB on the allocation of fiscal year 2003 digital television funding? (Response options included: don't know.)
Q36. To what extent did CPB solicit input from your station(s) on the allocation of fiscal year 2003 digital television funding? (Response options included: to a little extent; not at all; don't know.)
Q37. To what extent did your station(s) provide CPB with input on the allocation of fiscal year 2003 digital television funding? (Response options included: to a little extent; not at all; don't know.)
Q39. Overall, are you basically satisfied with the consultation process used by CPB to allocate fiscal year 2003 digital television funding? (Response options included: substantial changes are needed.)
Q40. Please explain what changes you think are needed.
Q41. How would you currently prioritize the use of any additional federal funding to support your station(s) during the digital transition? (Respondents ranked their priorities 1 through 5.)
Q42. Is your digital master control equipment fully compatible with the EIOP (for all of your stations)? (Response options included: no, not fully compatible, and our capabilities will be materially affected; don't have digital master control equipment; don't know.)
Q43. Is your digital production equipment fully compatible with the EIOP (for all of your stations)? (Response options included: no, not fully compatible, and our capabilities will be materially affected; don't have digital production equipment; don't know.)
Q44. Is your digital storage equipment fully compatible with the EIOP (for all of your stations)? (Response options included: no, not fully compatible, and our capabilities will be materially affected; don't have digital storage equipment; don't know.)
Q45.
Please use the box below to describe any other comments on the Next Generation Interconnection System or the Enhanced Interconnection Optimization Project.
Q46. To what extent will completion of the digital transition improve the ability of your station(s) to provide local services to your community? (Response options included: to a little extent; not at all; don't know.)
Q47. Please describe how the ability of your station(s) to provide local services will or will not improve with the digital transition.
Q48a. Please list other digital carriage issues, if any, that will impede your station's future if not resolved during the digital transition.
Q49. Aside from acknowledgements included as part of PBS's National Program Service, do you currently run or plan to run 30-second underwriter acknowledgements on your station(s)? (Response options included: yes, I currently run 30-second underwriter acknowledgements; yes, I plan to run 30-second underwriter acknowledgements; no, I do not run and do not plan to run 30-second underwriter acknowledgments; don't know.)
Q50. In what year did you begin to run 30-second underwriter acknowledgements? (Enter a 4 digit number only. Letters and symbols will be deleted.)
Q51. How would you prioritize your reasons for currently running 30-second underwriter acknowledgements? (Respondents ranked their reasons 1 through 3.)
Q52. How would you prioritize your reasons for your plans to run 30-second underwriter acknowledgements? (Respondents ranked their reasons 1 through 3.)
Q53. Would you favor or oppose a federal requirement that limits the length of underwriter acknowledgements? (Response options included: favor; oppose a federal requirement that limits the length of underwriter acknowledgements; don't know.)
Q54. Please provide the reasons for your answer to Question 53.
The following are GAO’s comments on the Corporation for Public Broadcasting’s letter dated March 12, 2004.
1. Our legal opinion on this issue remains unchanged.
See our comments below on the attached legal memorandum from Covington and Burling. The Corporation notes that its ability to support projects designed to improve the system as a whole could decrease if it had to depend only on system support funds. We recognize the Corporation’s concern. However, we continue to believe that this is a matter that should be addressed to the Congress. 2. The point of the cited paragraph of our report is limited to historical background and is not a characterization of congressional commitment to public television. To restate, when the Public Broadcasting Act of 1967 was passed, annual congressional appropriations were seen as a temporary measure pending the development and adoption of a long-term financing plan for public broadcasting. Absent the development of such a plan, the Congress has in fact continued to support public broadcasting with annual appropriations at the levels indicated in figure 3. We agree with the Corporation that when the Congress deferred the development of a long-term financing plan at the time the 1967 act was passed, it did not intend that federal funding for the Corporation would be discontinued. Congressional committee reports accompanying the 1967 legislation and subsequent reauthorization legislation suggest the need for ongoing federal funding to enable the Corporation to fulfill its mission. 3. We do not agree with the Corporation that our report implies that its policy decisions should be made on the basis of our survey of licensees. Although we recognize that the views of licensees are, by statute and in practice, central to the making of policy decisions by the Corporation, the survey served as only one source of evidence for our review. We determined that it was important to ascertain the views of licensees because we believe they are integral to the discussion of the statutory framework for federal support of public television and the Corporation’s funding programs and processes. 
The findings, conclusions, and recommendations in this report are based on several methodologies we employed to review the Corporation’s activities in support of public television (as described in app. I), including, but not limited to, the survey of public television licensees. [Truncated citation: … President, General Counsel and Corporate Secretary of the Corporation for Public Broadcasting to Mindi Weisenbloom, Senior Attorney, General Accounting Office, dated August 11, 2003.] 6. Our review did not examine whether the make-up of the Television Future Fund advisory panels has adequately represented a cross section of the public broadcasting community. We note that the Corporation intends to change the composition of the advisory panel to ensure a greater representation from across the station community. It also appears that the Corporation envisions that the panel will operate more as an investment board than as a consultation panel. Although the Corporation contends that the Future Fund plan has been regularly placed before the constituent elements of public broadcasting, our survey of public television licensees indicates a number of concerns about the program. For example, 42 percent of the respondents to our survey indicated that they had little or no knowledge about the findings and outcomes of Television Future Fund projects. Overall, only 41 percent of licensees responding to our survey indicated that the projects had provided them with practical methods for reducing costs and/or enhancing revenues. The Corporation’s approach for funding the Television Future Fund program was another area identified in our survey as a concern for licensees. Only 30 percent of the responding licensees indicated that they favored the current funding approach, and one-fifth of our survey respondents indicated that the Corporation should cease all funding for the program. 7. We agree that nothing in the statute suggests that the Corporation’s role is passive. 
Section 396(k)(6)(B) provides the Corporation with discretion to establish eligibility criteria and a formula for the distribution of funds reserved by the Congress for public television to the licensees. However, this discretion must be exercised within the constraints of the provision. The Corporation must periodically review its eligibility criteria with the station community, and the formula must be designed to provide for the financial needs and requirements of stations and to maintain existing, and stimulate new, sources of nonfederal financial support. More importantly, the provision provides that the funds are to be distributed to licensees. Thus, under the plain meaning of the provision, these funds are not available for the Corporation’s use or for the Corporation to decide how the licensees may use the funds. Nor are the funds available for distribution to entities other than the licensees themselves. 8. The statute specifies that it is the recipients of the funds, in other words the public television licensees, who have discretion over the use of these funds. Specifically, section 396(k)(7) provides that these funds “may be used at the discretion of the recipient for purposes related primarily to the production or acquisition of programming.” 9. GAO is not suggesting that the Corporation “pick and choose” stations for grants. Rather, under the plain meaning of section 396(k)(6)(B), the Corporation is to distribute the funds reserved to television stations on the basis of eligibility criteria and a formula. And under the plain meaning of section 396(k)(7), it is the licensees who have the discretion over the use of these funds within the constraints of the statute. The Congress has directed that the 396(k)(6)(B) funds be used “for purposes related primarily to the production or acquisition of programming.” 10. 
We disagree that the Congress has ratified the Corporation’s use of section 396(k)(6)(B) funds for the purposes of the Future Fund by continuing to make funds available for distribution under section 396(k)(6)(B). “Ratification by appropriation” is the doctrine by which the Congress can, by the appropriation of funds, confer legitimacy on any agency action that was questionable when it was taken. However, this doctrine is not favored and will not be accepted where prior knowledge of the specific disputed action cannot be demonstrated clearly. GAO summarized the test courts have used to find ratification by appropriation in B-285725, September 29, 2000: “To conclude that Congress through the appropriations process has ratified agency action, three factors generally must be present. First, the agency takes the action pursuant to at least arguable authority; second, the Congress has specific knowledge of the facts; and third, the appropriation of funds clearly bestows the claimed authority.” All three elements are missing here. The Corporation does not have the authority to use funds designated for distribution to public television licensees to support the Future Fund. The Congress has not clearly been informed that the Future Fund is supported in part with section 396(k)(6)(B) funds. Finally, the Congress has not in any way indicated that the funds it has provided to the Corporation for public television licensees may be used to support the Television Future Fund. Accordingly, “ratification by appropriation” is not applicable in this instance. […] exercise their discretion over the use of their funds to contribute to such efforts. 14. 
As stated in the report, under the plain meaning of the statute, section 396(k)(6)(B) directs the Corporation to distribute the balance of funds reserved for television stations, after deduction of the basic grant, “to licensees of such stations.” Thus, the Corporation does not have the discretion to distribute these funds to entities other than public television licensees, even if the purpose of the grant is to ultimately benefit public television stations. The following are GAO’s comments on the Public Broadcasting Service’s letter dated March 15, 2004. We have edited language in the report to clarify that the $12 million to $15 million needed to complete the Enhanced Interconnection Optimization Project is separate from the funds needed to purchase the Next Generation Interconnection System. In addition to those named above, Dennis Amari, Alan Belkin, Edda Emmanuelli-Perez, Colin Fallon, Michele Fejfar, Kevin Heinz, Logan Kleier, Randall Lennox, Omari Norman, Tina Sherman, Mindi Weisenbloom, and Alwynne Wilbur made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. 
You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
|
For fiscal year 2002 (the most recent data), the Corporation for Public Broadcasting provided about 16 percent of public television's revenues of $1.63 billion. GAO agreed to review the statutory allocations for federal funding of public television, the Corporation's distribution of funds through its Community Service Grant and Television Future Fund programs, its distribution of funds for the Public Broadcasting Service's National Program Service and for local programming, and its grant programs for assisting public television's transition to digital technologies and services. By statute, 75 percent of the Corporation's annual federal funding for public television is to be distributed among licensees of public television stations, and 25 percent is to be available to the Corporation for the support of national public television programming. In our survey of all 176 licensees, of which 85 percent responded, more than three-fifths favored maintaining the current allocations. Of those favoring a change, most proposed an increase in the allocation for distribution among licensees. The Corporation uses Community Service Grants as the primary means of distributing funding to licensees. Most licensees were generally satisfied with the recent consultation process for reviewing the eligibility criteria for these grants. Another program, the Television Future Fund, awarded grants to projects designed to reduce licensees' operational costs and enhance revenues. Only about 40 percent of the licensees indicated that these projects had resulted in practical methods to help their stations, and only about 30 percent agreed with the Corporation's approach of using funds designated for distribution among licensees to partly support these projects. In our legal view, the use of such funds for this purpose is not consistent with the statutory authority under which the Corporation operates. 
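The statutory 75/25 split described above can be expressed as a small arithmetic sketch. This is a hypothetical illustration only: the appropriation amount is invented for the example, and the function is not the Corporation's actual allocation computation.

```python
# Hypothetical sketch of the statutory split of CPB's annual federal
# funding for public television, as described in the report:
#   - 75 percent is to be distributed among licensees of public
#     television stations
#   - 25 percent is to be available to the Corporation for the support
#     of national public television programming
# The appropriation figure used below is invented for illustration.

def split_television_funds(tv_appropriation: float) -> dict:
    """Return the statutory allocation of public television funds."""
    return {
        # 75% distributed among station licensees
        "licensee_distribution": 0.75 * tv_appropriation,
        # 25% available for national programming support
        "national_programming": 0.25 * tv_appropriation,
    }

shares = split_television_funds(100_000_000.0)  # hypothetical $100 million
print(shares["licensee_distribution"])  # 75000000.0
print(shares["national_programming"])   # 25000000.0
```

A change of the kind most survey respondents favoring revision proposed would simply raise the first fraction (and lower the second) in this computation.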
The Corporation provides an annual grant to the Public Broadcasting Service to help fund a package of children's and prime-time programming that make up the National Program Service. Most licensees favored continuation of the Corporation's funding, noting that this national programming helps them meet their educational and cultural missions and build community support for their stations. Licensees also indicated that local programming is important in serving their communities. However, most responded that they do not produce enough local programs to meet their communities' needs, and many cited a lack of funds as the reason. About 85 percent of the licensees responding to our survey indicated that the congressionally mandated transition from analog to digital broadcasting will improve their ability to provide local services to their communities. The Corporation has received appropriations to help support this transition since fiscal year 2001. In consultation with licensees, the Corporation has used these funds to provide licensees with grants for acquiring digital transmission equipment. Some grantees, however, did not receive their awards in time to meet FCC deadlines for the construction of digital transmission facilities. In addition, the Corporation received only a few grant applications during the latter part of 2003. Our survey indicates that most licensees' priorities now involve other aspects of the transition, some of which (including digital production equipment and development of digital content) were not included in the scope of the grant programs. The Corporation is also seeking funds for digitally based infrastructure improvements for distributing public television programming to stations and is working with public television stakeholders to develop a strategic plan that includes the creation of digital content.
|
In September 2010, allegations surfaced that several servicers’ documents accompanying judicial foreclosures may have been inappropriately signed or notarized. In response to this and other servicing issues, federal banking regulators conducted a coordinated on-site review of 14 of the largest mortgage servicers to evaluate the adequacy of the controls over servicers’ foreclosure processes and assess servicers’ policies and procedures for compliance with applicable federal and state laws. On the basis of their findings, the regulators issued the April 2011 consent orders against these servicers that required the servicers to conduct the foreclosure review, among other things. In January 2013, OCC and the Federal Reserve reached agreements with 11 of the 14 mortgage servicing companies subject to the April 2011 consent orders to discontinue the foreclosure reviews and to provide approximately $3.4 billion in direct payments to eligible borrowers. These agreements were formalized in amended consent orders that the regulators released in late February 2013. As shown in table 1, with this change from the foreclosure review to an agreed-upon payment process, regulators and servicers shifted from identifying the types and extent of harm borrowers may have experienced to instead focus on assigning all eligible borrowers into categories based on objective criteria. In addition, under the amended consent orders, the servicers also will provide approximately $5.4 billion in foreclosure prevention assistance to borrowers, such as loan modifications. Consultants for the servicers that did not reach agreements with the regulators continue their foreclosure review activities. Complexity of the file reviews, overly broad guidance, and limited monitoring for consistency may have impeded the ability of OCC and the Federal Reserve to achieve the goals of the foreclosure review. 
These goals were to ensure similar results for similarly situated borrowers, identify as many harmed borrowers as possible, and restore public confidence in the mortgage market. According to regulator staff and third-party consultants, coordinating the foreclosure review process was challenging because of the large number of actors and borrowers eligible for review, the size of the loan files, and the scope of the file reviews. In addition, each servicer had a unique process for recording and storing information on borrowers’ loan files, which made it challenging to define review parameters and develop a uniform review structure that was appropriate for all consultants. Regulators took a number of oversight steps to address the complexities and challenges, including issuing nearly identical sections of the consent orders outlining the purpose of the foreclosure reviews, providing third-party consultants with guidance to help frame the file review process, and implementing regular communication mechanisms among the key actors to help foster consistency in the reviews. However, broad guidance and limited monitoring for consistency reduced the potential usefulness of information being collected and increased risks of inconsistency. According to third-party consultants, regulators’ guidance did not address certain aspects of the foreclosure review, and consultants had to use additional judgment and interpretation when applying certain guidance, increasing the risk of inconsistency among review results. Third-party consultants and their respective law firms we interviewed said that they each developed their own test questions based on analyses of state foreclosure laws, loan modification guidelines, and bank policies, among other references. According to OCC staff, the state law references were fairly straightforward and they had confidence that the consultants and law firms would provide fairly consistent interpretations. 
However, according to third-party consultants and law firms we interviewed, compiling these references and using them to develop review questions was challenging and time-consuming and, in some cases, required judgment or interpretation of the laws or guidelines. (See GAO/GGD/AIMD-99-69 and GAO/GGD-96-118.) […] needing to re-do file reviews, which would have led to delays in remediation. Other guidance issued by regulators did not specify key sampling parameters for the file reviews, and regulators lacked objective monitoring measures, resulting in difficulty assessing the extent of borrower harm. For example, our analysis of the May 2011 guidance on sampling found that the guidance was ambiguous about a key sampling parameter that resulted in variations in sample sizes used by the consultants and led consultants to use different triggers to determine when to conduct additional analysis. This ambiguity could have produced inconsistent results for similarly situated borrowers. According to OCC staff, they recognized that some consultants had not fully implemented the sampling approach as expected, and OCC is taking steps to address these differences for one of the servicers continuing the foreclosure review. In addition, our analysis found that the May 2011 guidance did not include a discussion of regulators’ expectations for reporting on sampling, and variations among the sampling plans would have limited the types of information that regulators could report. Finally, the regulators’ sampling approach did not include key oversight mechanisms to facilitate assessment of whether consultants’ reviews were sufficient to realize the goal of identifying as many harmed borrowers as possible, except in those cases where there were few or no errors. The OMB standards for statistical surveys state that where sampling is used, it should include protocols to monitor activities and provide information on the quality of the analyzed data. 
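The kind of sampling trigger at issue can be sketched in a few lines. This is a minimal, hypothetical illustration: the sample size, error threshold, and decision rule are invented assumptions for the example, not the parameters in the regulators' May 2011 guidance (whose ambiguity on exactly these points is the problem described above).

```python
import random

# Hypothetical file-review sampling trigger: draw a random sample of loan
# files from a segment and, if the observed error rate exceeds a preset
# threshold, flag the segment for expanded review. The sample size and
# threshold below are invented for illustration only.

def needs_expanded_review(files, sample_size=100, error_threshold=0.05, seed=0):
    """Return (observed_error_rate, expand) for one segment of loan files.

    `files` is a sequence of booleans: True means the file contains a
    servicing error (as a reviewer would determine).
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    sample = rng.sample(list(files), min(sample_size, len(files)))
    error_rate = sum(sample) / len(sample)
    return error_rate, error_rate > error_threshold

# Segment with a 10 percent underlying error rate: with these invented
# parameters, the trigger should usually fire and prompt a wider review.
segment = [i % 10 == 0 for i in range(5_000)]
rate, expand = needs_expanded_review(segment)
print(rate, expand)
```

Because each consultant chose its own equivalents of `sample_size` and `error_threshold`, two consultants applying this logic to identical segments could reach different "expand or not" decisions, which is the consistency risk the report identifies.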
Good planning and objective data collection provide a basis for making sound conclusions. In the absence of objective measures to compare review methods among consultants or assess sampling, regulators did not have an early warning mechanism to help identify problem areas that may have hindered achievement of the foreclosure review goals. OCC and the Federal Reserve acknowledged the importance of transparency in the foreclosure review process and publicly released more information than is typically disclosed in connection with a consent order. For example, regulators released redacted engagement letters between servicers and third-party consultants and the remediation framework for consultants to use that provided examples of situations in which compensation or other remediation is required for financial injury due to servicer errors, misrepresentations, or other deficiencies. However, the absence of useful and timely communications at certain stages of the process—for the general public as well as individual borrowers—hindered transparency and public confidence in the processes and results. Some stakeholders perceived gaps in key information about how the file reviews were being conducted. Regulators did not release any additional guidance documents, nor did they publicly disclose consultants’ test questions. To increase the transparency and credibility of the foreclosure review, consumer groups recommended that regulators release such information. According to consumer groups, without such information, the public would have questions and doubts about how the reviews were being executed. OCC and the Federal Reserve staff said that they considered releasing additional guidance to the public, but both expressed concerns that releasing detailed information risked disclosure of confidential or proprietary information. 
Moreover, test questions developed by consultants were numerous and complex, and Federal Reserve staff stated that review processes were too dissimilar to provide a comprehensive summary. Borrowers who requested reviews under the foreclosure review process initially received limited information about the status of their individual file review. Borrowers received a letter acknowledging their request was received, but some did not receive updates until almost a year after the outreach program was first launched, when they received a letter informing them of the continuing nature of the review. In letters to OCC and the Federal Reserve, consumer groups indicated that these borrowers were frustrated by the lack of information on their particular file review. Regulators indicated that additional status letters and information would be sent to borrowers with outstanding requests-for-review. However, regulators were still uncertain about specific information they would require servicers to share with both borrowers who would receive remediation and those who would not. Regulators have acknowledged the importance of transparency, but after announcing the agreements that led to the amended consent orders, they had not yet determined what information to convey beyond that which was included in their press releases and public websites and whether additional information would be provided to borrowers who submitted a request-for-review. During the foreclosure review process, OCC released two interim reports that provided the public with information on the organization and conduct of the file review process and preliminary results, such as the number of requests-for-review received, for institutions it supervises. These reports, according to OCC, were intended to build transparency into the process. The Federal Reserve did not issue interim reports on the foreclosure review process for institutions it supervised. 
According to Federal Reserve staff, they did not do so because their public release of servicers’ action plans provided sufficient information about how servicers were addressing the requirements of the consent orders and their public release of servicers’ engagement letters provided sufficient information about how the foreclosure review would be conducted. Prior to the announcement of the agreements that led to the amended consent orders and ended the foreclosure review for most servicers, OCC staff told us they had planned to release a final report on the results of the foreclosure review, and Federal Reserve staff indicated they expect to publish additional relevant information related to the foreclosure review and the agreements. However, as of February 2013, regulators had not decided what information on the work conducted under the foreclosure review prior to the agreements will be made available. The foreclosure review revealed three key lessons related to planning, monitoring, and communication that could help inform regulators’ implementation of the amended consent orders and the continuing foreclosure reviews. These key lessons could help contribute to an effective process for distributing direct payments and other assistance as prescribed by the amended consent orders. Based on the foreclosure review experience, we found that (1) designing project features during the process’s initial stages influences the efficiency of file reviews, (2) monitoring progress helps ensure achievement of goals, and (3) promoting transparency enhances public confidence. Our prior work shows that assessing and using lessons learned from previous experiences can provide a powerful method of ensuring that beneficial information is factored into the planning and work processes of future activities. Key practices of assessing lessons learned include collecting and analyzing information on prior activities and applying that information to future activities. 
Assessing lessons learned by using project critiques and discussions with key participants and stakeholders—such as local examination team staff, third-party consultants and law firms, and external groups—could identify the root causes of strengths and weaknesses of the foreclosure review that could apply to the amended consent order activities. The foreclosure review experience suggests that a planning process to determine key project features, such as guidance and necessary data elements, for activities conducted under the amended consent orders could lessen the risk of changes to planned activities, future delays, or rework. Our work on designing evaluations, including financial audits, has found that systematic and comprehensive planning enhances the quality, credibility, and usefulness of the results and contributes to a more effective use of time and resources. As regulators prepare to implement the amended consent orders, they risk having to make changes in the planned activities or publicly announced timelines if they miss opportunities to make key project planning decisions, including issuing clear guidance. The foreclosure review experience also suggests that using mechanisms to monitor the amended consent order activities and the continuing foreclosure reviews may help ensure achievement of goals. The regulators’ process for monitoring the activities of third-party consultants, servicers, and examination teams during the foreclosure review process could provide a useful model for monitoring activities under the amended consent orders. In addition, regulators’ experience with the foreclosure review suggests that identifying comparative oversight mechanisms to centrally promote consistency and monitor activities under the amended consent orders could help achieve consistent results for borrowers. 
GAO’s internal control standards state that agencies should take steps to comprehensively identify and analyze program operations to determine if risks exist to achieving goals—such as risks to the regulators’ goal of providing similar results for similarly situated borrowers. In our prior work, we found that using a horizontal review mechanism is an option to help mitigate risks of inconsistent results for activities conducted by multiple entities, such as multiple servicers. Using mechanisms to centrally monitor the consistency of servicers’ activities under the amended consent orders may lessen the risk of inconsistent results or delays in providing direct payments to borrowers. Similarly, monitoring potential inconsistencies for the servicers that are continuing the foreclosure reviews will provide regulators with information to assess whether there is a risk of those borrowers being treated inconsistently. Finally, lessons from the foreclosure review activities conducted to date suggest that developing and implementing an effective communication strategy that includes public reporting goals could enhance the transparency of the activities under the amended consent orders. GAO’s internal control standards emphasize the importance of relevant, reliable, and timely communications both within an organization and with external stakeholders. In addition, our work on the Troubled Asset Relief Program (TARP) has underscored the importance of a communication strategy to strengthen communication with external stakeholders and improve transparency and accountability. Experiences with current government initiatives that are aimed at assisting struggling homeowners and involve institutions and mortgage-related issues similar to those of the foreclosure review highlight the benefits of regular performance reporting. 
Specifically, periodic reports on the performance of and participation in TARP programs and scheduled reports on servicers’ compliance with requirements of the National Mortgage Settlement are intended to promote transparency and build public confidence. Like TARP and the National Mortgage Settlement, the foreclosure review and the subsequent activities under the amended consent orders are part of the larger governmental response to the housing and mortgage crises. As a result, a communication strategy which incorporates plans for periodic public reporting may enhance transparency in the distribution of direct payments and other assistance and help restore confidence in mortgage markets. Regulators announced the agreements that led to the amended consent orders without a clear communication strategy. As a result, what information will be provided to individual borrowers and the general public about processes, progress, and results of activities under the amended consent orders is unclear. OCC and the Federal Reserve have provided some information on the amended consent orders, and planned to release additional information, such as details on payment categories that were publicly released in April 2013. However, we found that as of March 2013, regulators had not made key decisions on communicating directly with individual borrowers and the extent to which they would report on activities related to the amended consent orders and continuing foreclosure reviews. While the amended consent orders terminate the foreclosure review for most of the servicers, transparency of past and current efforts continues to be important to stakeholders, including Congress and consumer groups. In the absence of a clear communication strategy to direct external communications, including public reporting and direct communication with individual borrowers, regulators face risks to transparency and public confidence similar to those experienced in the foreclosure review process. 
In our March 2013 report, we recommended that OCC and the Federal Reserve improve oversight of sampling and identify and apply lessons from the foreclosure review process, such as enhancing planning and monitoring activities, to better ensure that the goals of the foreclosure review and amended consent orders are realized. In addition, to better ensure transparency, we recommended that OCC and the Federal Reserve develop and implement a communication strategy to regularly inform borrowers and the public. In commenting on the report, OCC and the Federal Reserve both identified actions that they have taken or planned to take to implement the recommendations. Chairman Menendez, Ranking Member Moran, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact Lawrance L. Evans, Jr. at (202) 512-8678 or [email protected]. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this statement. Other staff who made key contributions to this testimony include: John Karikari; Jill Naamane; Anna Maria Ortiz; Karen Tremba (Assistant Directors); Bethany M. Benitez; Charlene J. Lindsay; Patricia MacWilliams; Marc Molino; Robert Rieke; Jennifer Schwartz; Andrew Stavisky; Sonya Vartivarian; James Vitarello; and Monique Williams. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the Independent Foreclosure Review process. In April 2011, the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Federal Reserve), and the Office of Thrift Supervision (OTS) issued consent orders against 14 mortgage servicers. These orders required the servicers to engage third-party consultants to review the servicers' loan files to identify borrowers who had suffered financial harm due to errors, misrepresentations, or other deficiencies in foreclosure processing and to recommend remediation for the harms these borrowers suffered. Roughly 4.3 million borrowers who were in some stage of foreclosure in 2009 and 2010 were eligible for the foreclosure review. As of December 2012, consultants had slated more than 800,000 loans for review. In January 2013, the regulators announced agreements that led to amended consent orders with 11 of the 14 servicers to discontinue foreclosure reviews and replace the reviews with a compensation framework that does not rely on determinations of whether borrowers suffered financial harm. The remaining 3 servicers, covering 450,000 borrowers (about 10 percent of those eligible), are continuing with the foreclosure review work. The remarks are based on our March 2013 report on the implementation of the foreclosure review and lessons learned that can be applied to the activities required by the amended consent orders and ongoing foreclosure reviews. The statement addresses (1) challenges to the achievement of the goals of the foreclosure review, (2) the extent of transparency in the foreclosure review process, and (3) lessons that could be useful for the activities under the amended consent orders and continuing reviews. As noted in our report, we were in the process of reviewing other aspects of the foreclosure review when OCC and the Federal Reserve announced the agreements. 
Neither our report nor this statement assesses the regulators' rationale for accepting the agreements or the trade-offs involved in the regulators' choice to amend the consent orders with the servicers. GAO found the following: Regulators' ability to achieve the goals of the foreclosure review was affected by the complexity of the reviews, as well as by overly broad regulator-issued guidance and limited monitoring for the consistency and sufficiency of consultants' review activities. For example, regulators' statistical sampling approach did not include mechanisms to allow them to monitor consultants' progress toward finding as many harmed borrowers as possible. Our prior work has identified practices, such as assessing progress toward goals and designing monitoring during the planning stage of a project, as effective management practices. In addition, the Office of Management and Budget (OMB) has found that in planning data analysis activities, such as sampling, agencies should take necessary steps to ensure that they have collected the appropriate data from which to draw conclusions. Without using objective measures to compare review methods or assess sampling among consultants, regulators' ability to monitor progress toward achievement of foreclosure review goals was hindered. Although regulators publicly released more information on the foreclosure review process than is typically disclosed in connection with a consent order, the absence of timely and useful communication to the general public and individual borrowers at certain stages of the process undermined transparency and public confidence. To promote transparency, OCC and the Federal Reserve released redacted engagement letters between servicers and consultants, among other documents. However, some stakeholders felt there were gaps in the publicly released information, including the lack of detailed information on how the reviews were to be carried out. 
In addition, although borrowers who requested reviews under the foreclosure review process received an acknowledgement letter, some borrowers did not receive updates on their request for almost a year after the program was launched. The foreclosure review experience revealed lessons related to planning, monitoring, and communication that could help inform regulators' implementation of the amended consent orders and the remaining foreclosure reviews. In our prior work, we found that assessing lessons learned from previous experiences, such as through discussions with key participants and stakeholders, and applying these lessons can help strengthen future activities. Without assessing and applying relevant lessons learned, regulators might not address similar challenges in activities under the amended consent orders or in the continuing reviews. In particular, regulators announced the agreements that led to the amended consent orders without a clear communication strategy, including determining what information to provide to borrowers. GAO's internal control standards and our work related to best practices indicate that an effective communication strategy and timely reporting can enhance transparency and public confidence. Absent a clear strategy to guide regular communications with individual borrowers and the general public, regulators face risks to transparency and public confidence similar to those experienced in the foreclosure review. Based on our findings, we recommended that OCC and the Federal Reserve improve oversight of sampling and consistency in the continuing reviews; apply lessons in planning and monitoring, as appropriate, to the activities of the amended consent orders and continuing reviews; and implement a communication strategy to keep stakeholders informed. The regulators agreed to take steps to implement these recommendations.
The public faces the risk that critical services could be severely disrupted by the Year 2000 computing crisis. Financial transactions could be delayed, airline flights grounded, and national defense affected. The many interdependencies that exist among the levels of governments and within key economic sectors of our nation could cause a single failure to have wide-ranging repercussions. While managers in the government and the private sector are acting to mitigate these risks, a significant amount of work remains. The federal government is extremely vulnerable to Year 2000 problems due to its widespread dependence on computer systems to process financial transactions, deliver vital public services, and carry out its operations. This challenge is made more difficult by the age and poor documentation of many of the government’s existing systems, and its lackluster track record in modernizing systems to deliver expected improvements and meet promised deadlines. Year 2000-related problems have already occurred. For example, an automated Defense Logistics Agency system erroneously deactivated 90,000 inventoried items as the result of an incorrect date calculation. According to the agency, if the problem had not been corrected (which took 400 work hours), the impact would have seriously hampered its mission to deliver materiel in a timely manner. Our reviews of federal agency Year 2000 programs have found uneven progress, and our reports contain numerous recommendations, which the agencies have almost universally agreed to implement. Among them are the need to establish priorities, solidify data exchange agreements, and develop contingency plans. One of the largest, and largely unknown, risks relates to the global nature of the problem. With the advent of electronic communication and international commerce, the United States and the rest of the world have become critically dependent on computers. 
However, with this electronic dependence and massive exchanging of data comes increasing risk that uncorrected Year 2000 problems in other countries will adversely affect the United States. And there are indications of Year 2000 readiness problems internationally. In September 1997 the Gartner Group, a private research firm acknowledged for its expertise in Year 2000 computing issues, surveyed 2,400 companies in 17 countries and concluded that “[t]hirty percent of all companies have not started dealing with the year 2000 problem.” As 2000 approaches, the scope of the risks that the century change could bring has become clearer, and the federal government’s actions have intensified. This past February, an executive order was issued establishing the President’s Council on Year 2000 Conversion. The Council Chair is to oversee federal agency Year 2000 efforts as well as be the spokesman in national and international forums, coordinate with state and local governments, promote appropriate federal roles with respect to private sector activities, and report to the President on a quarterly basis. As we testified in March, there are a number of actions we believe the Council must take to avert this crisis. In a report issued last month, we detailed specific recommendations. The following summarizes a few of the key areas in which we recommend action. Because departments and agencies have taken longer than we and others have recommended to assess the readiness of their systems, it is unlikely that they will be able to renovate and fully test all mission-critical systems by January 1, 2000. Consequently, setting priorities is essential, with the focus being on systems most critical to health and safety, financial well-being of individuals, national security, and the economy. Agencies must start business continuity and contingency planning now to safeguard their ability to deliver a minimum acceptable level of services in the event of Year 2000-induced failures. 
In March we issued an exposure draft of a guide providing information on business continuity and contingency planning issues common to most large enterprises; the Office of Management and Budget (OMB) recently adopted this guide as a model for federal agencies. Agencies developing such plans only for systems currently behind schedule, however, are not addressing the need to ensure business continuity in the event of unforeseen failures. Further, such plans should not be limited to the risks posed by the Year 2000-induced failures of internal information systems, but must include the potential Year 2000 failures of others, including business partners and infrastructure service providers (e.g., power, water, transportation, and voice and data telecommunications). OMB’s assessment of the current status of federal Year 2000 progress is predominantly based on agency reports that have not been consistently verified or independently reviewed. Without such independent reviews, OMB and the President’s Council on Year 2000 Conversion have little assurance that they are receiving accurate information. Accordingly, agencies must have independent verification strategies involving inspectors general or other independent organizations. As a nation, we do not know where we stand overall with regard to Year 2000 risks and readiness. No nationwide assessment—including the private and public sectors—has been undertaken to gauge this. In partnership with the private sector and state and local governments, the President’s Council could orchestrate such an assessment. If the systems that support USDA’s various programs cannot operate reliably into the next century, it would not take long for the effects to be felt. USDA’s systems support many vital public health and safety and economic activities and, if not properly fixed, tested, and implemented, severe consequences could result, such as the following. 
- Payments to schools, farmers, and others in rural communities could be delayed or incorrectly computed.
- The economy could be adversely affected if information critical to crop and livestock providers and investors is unreliable, late, or unavailable.
- The import and export of foodstuffs could be delayed, increasing the likelihood that they will not reach their intended destinations before their spoilage dates.
- Food distribution to schools and others could be stopped or delayed.
- Public health and safety could be at risk if equipment used in USDA’s many laboratories to detect bacteria, diseases, and unwholesome foods is not compliant.
USDA’s Chief Information Officer (CIO) is responsible for leading the department’s preparation for the Year 2000 date change and ensuring that all critical USDA information systems are Year 2000 compliant and operational. In October 1997 USDA’s CIO established the Year 2000 Program Office under the direction of a Year 2000 Program Executive. This office is responsible for providing oversight and guidance for the department’s Year 2000 program, and serves as USDA’s liaison with other government entities on the Year 2000 issue, such as the CIO Council. Direct accountability for assessing, renovating, validating, and implementing systems conversion, however, rests with USDA’s 31 component agencies, which include staff offices. The Secretary of Agriculture has required each component agency administrator to appoint an executive sponsor specifically accountable for Year 2000 issues, establish technical and program teams, ensure that an action plan is developed, and certify that critical agency systems are reflected in Year 2000 implementation plans. USDA’s component agencies have a great deal of work still to be accomplished in the next 19 months in making its mission-critical systems ready for the year 2000. 
As figure 1 indicates, for the 10 component agencies in our review, 250 mission-critical systems were initially assessed as compliant. As of this month, 132 have been reported as repaired or replaced, while work remains to be completed on 596 mission-critical systems. Looked at another way, about 80 percent of the work remains for these component agency systems. In addition, about 42 percent of the reported 596 mission-critical systems awaiting action are to be replaced. This is cause for some concern, as replacement systems are often high risk because federal agencies, and USDA in particular, have a long history of difficulty in delivering planned systems on time. Further, some USDA replacement systems are already scheduled to miss the March 1999 implementation deadline established by OMB and are at risk of not being compliant on January 1, 2000. For example:
- AMS’ planned replacement of its Marketing News Information System—which provides critical market information to producers, processors, and distributors of agricultural commodities throughout the United States—is currently not scheduled to be implemented until August 1999. Further adding to the risk of this tight schedule is the fact that AMS is currently not working on this and three other replacement systems (which are scheduled to be implemented in September 1999), pending approval by the CIO to do so.
- Although ARS plans to replace its existing Nutrient Data Bank System, it does not yet have a contract in place to develop it. Concerned that it may not meet USDA’s March 1999 deadline, ARS now plans to develop a contingency plan.
- In April 1998 Forest Service decided to delay agencywide implementation of the Foundation Financial Information System until October 1, 1999, because of significant unresolved issues related to its capabilities. Forest Service has not yet decided what to do about the more than 20 existing applications that are scheduled to be replaced by the Foundation Financial Information System. 
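As a rough cross-check, the reported percentages can be reproduced from the system counts in the testimony. The short sketch below (Python; the grouping of "work remaining" as a share of systems that were not initially compliant is our assumption, since the testimony does not state the denominator) arrives at the same figures:

```python
# Back-of-the-envelope check of the reported USDA system counts.
# All figures come from the testimony; treating "work remaining" as the
# share of systems not initially assessed compliant is an assumption.

initially_compliant = 250   # mission-critical systems assessed compliant at the outset
repaired_or_replaced = 132  # systems since reported repaired or replaced
awaiting_action = 596       # mission-critical systems still awaiting action

needing_work = repaired_or_replaced + awaiting_action  # 728 systems needed work
share_remaining = awaiting_action / needing_work       # ~0.82, i.e., "about 80 percent"
print(f"share of work remaining: {share_remaining:.0%}")

# About 42 percent of the 596 systems awaiting action were slated for replacement.
to_be_replaced = round(0.42 * awaiting_action)
print(f"systems slated for replacement: about {to_be_replaced}")
```

On these numbers, the remaining-work share comes to roughly 82 percent, consistent with the testimony's "about 80 percent," and the replacement share corresponds to roughly 250 systems.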
In addition to these risks, we identified two agencies that were inaccurately reporting the number of compliant systems. GIPSA and RMA reported 1 and 14 systems, respectively, as compliant, even though these systems were under development or were planned. The GIPSA Year 2000 Executive Sponsor stated that the GIPSA system was reported as compliant because the system is replacing a manual process. According to the RMA Year 2000 Program Manager, RMA systems were reported as compliant because they were being developed as compliant. We do not agree with GIPSA and RMA. It is misleading to list systems as compliant when work is still to be completed. USDA’s Year 2000 Program Executive stated that he agreed that these systems should not be listed as compliant. At the same time that USDA is facing an enormous challenge to replace, repair, and retire its mission-critical systems, component agencies are beginning to report losses of information technology staff. While USDA has not performed a departmentwide assessment of its Year 2000 technology staffing needs and losses, several component agencies have recently expressed concern that the loss of staff will affect their ability to complete their Year 2000 programs. For example, FSA stated that it had lost 28 of 403 (7 percent) of its information technology staff between October 1997 and April 1998, and Forest Service officials said that they lost 12 information technology staff in the past 5 months. Moreover, in its May 1998 report, Forest Service reported losing contractors to better paying positions. The CIO has taken some action, such as obtaining a waiver from the Office of Personnel Management that allows USDA to rehire former federal personnel without financial penalty. However, according to USDA, this rehiring authority does not cover USDA employees who left the agency under the department’s specific buyout authority. USDA will incur substantial costs to implement its Year 2000 program. 
It has estimated its Year 2000 costs at $118 million (as of February 1998). However, this estimate does not include all Year 2000-related costs, such as (1) FNS’ share of repairing or replacing the state systems that are used to implement its programs and (2) the cost to renovate or replace telecommunications or vulnerable systems (which USDA defines as embedded systems such as laboratory equipment and facility systems). At the request of the Year 2000 Program Office, some component agencies started reporting these cost estimates and USDA intends to incorporate the costs to renovate or replace telecommunications and vulnerable systems in its next quarterly report to OMB, due May 15, 1998. Although agencies should have completed the assessment phase of Year 2000 readiness last summer, critical assessment tasks for many USDA agencies remain unfinished. Even some basic tasks, such as inventorying systems, have not yet been completed. For example, while some of the component agencies in our review reported having completed inventories of telecommunications and vulnerable systems, most have not. USDA expects these inventories to be completed this July. Table 1 identifies key tasks that should be done during the assessment or renovation phases, yet remain incomplete in many cases. According to our Year 2000 readiness guide, agencies should track their renovation and replacement efforts and use project metrics to manage costs and schedules. Although all of the component agencies we reviewed performed some form of project tracking, many of the component agencies’ Year 2000 program offices did not track baseline to actual completion dates for project milestones, or track the percentage of milestone completion. Also, Forest Service currently performs detailed tracking for only its major applications but plans to perform such tracking for all of its applications in the future. 
Moreover, while three component agencies tracked actual costs, one did not, and others tracked some costs but not others, such as contractor costs but not staffing. As expressed in our Year 2000 readiness guide, the scope of a component agency’s testing and validation requires careful planning; accordingly, overall testing and validation strategies should initially be developed during the assessment phase. However, eight of the ten component agencies in our review lacked such strategies; only FNS and FSA had them. Moreover, in some agencies—such as NASS and FSIS—only the programmers who made the changes or developed the systems determined the scope of the tests to be completed. In addition, while FNS had a testing strategy, it planned to implement this strategy only for about half of its mission-critical systems; it lacks a testing strategy for the other mission-critical systems. According to an FNS official, the other systems will be tested through a combination of the responsible contractor or FNS staff who made the change and user acceptance testing. One of these systems is vital to ensuring that schools and other entities are reimbursed for providing food services to children and adults. In reviewing the test documentation of systems that were repaired or replaced at FNS and FSA to determine whether their testing strategies were followed for the three systems that these agencies reported as Year 2000 compliant, we found mixed results. The FNS system, called the National Integrated Quality Control System—used by state welfare agencies to perform federally-mandated quality control functions—was not one of the systems covered by FNS’ test strategy, and we were unable to verify whether the system was indeed Year 2000 compliant. The system was replaced by a contractor who conducted limited Year 2000 testing; neither FNS nor the contractor had developed test plans for the system. 
Further, while FNS utilized its regional offices and nine states for acceptance testing, it did not provide instructions on what to test, and had no documentation concerning exactly what was tested. As a result, FNS officials did not know whether the testing included any Year 2000 test scenarios. Two FSA mission-critical systems had more positive results. Written test plans existed, the testing was carried out by an independent organization, and test result documentation showed that sufficient testing had been performed to determine that the systems were Year 2000 compliant. Turning to business continuity and contingency planning, most of the component agencies intended to develop contingency plans only for specific systems or only if the systems were likely to miss the USDA March 1999 deadline for compliance. Agencies that develop contingency plans only for systems currently behind schedule, however, are not addressing the need to ensure the continuity of a minimal level of core business operations in the event of unforeseen failures. As a result, when unpredicted failures occur, agencies will not have well-defined responses and may not have enough time to develop and test effective contingency plans. Contingency plans should be formulated to respond to two types of failures: those that can be predicted (e.g., system renovations that are already far behind schedule) and those that are unforeseen (e.g., a system that fails despite having been certified as Year 2000 compliant or a system that cannot be corrected by January 1, 2000, despite appearing to be on schedule today). Moreover, contingency plans that focus only on agency systems are inadequate. Federal agencies depend on data provided by their business partners as well as on services provided by the public infrastructure (e.g., power, water, transportation, and voice and data telecommunications). One weak link anywhere in the chain of critical dependencies can cause major disruptions to business operations. 
Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes and supporting systems, regardless of whether these systems are owned by the agency. NASS was the only component agency in our review that had drafted a plan to address the agency’s options in the event that Year 2000-induced failures do not enable it to use its normal processes to develop and issue its January 2000 statistical reports. NASS intends to finalize this plan in the fall of 1998. Given the enormous potential risk, USDA has determined that the Year 2000 crisis is its top information technology priority. It has not, however, translated that sentiment into effective action. The department’s role has remained limited—a condition that cannot continue if sufficient progress is to be achieved. Just as federal departments and agencies establish their own priorities among mission-critical systems, we have recommended that the government as a whole determine national priorities. Similarly, it is important for the Secretary of Agriculture to know, as time dwindles, which mission-critical systems are USDA’s highest priorities. However, USDA’s CIO stated that the department has not set Year 2000 priorities. Priority setting has, rather, been left to the individual component agencies, which determined which systems are mission-critical. The component agencies judged systems to be mission-critical in an inconsistent manner. For example, while Forest Service tells us that it has 17 mission-critical systems, it has reported to the department that it has 423 mission-critical systems. This is because Forest Service reported applications and not systems. Forest Service reports applications rather than systems because it tracks its system migration and Year 2000 project at the application level. Further, not all of these applications are critical. 
For example, a January 1998 Forest Service analysis of the applications that it plans to repair indicated that only 48 of 137 are critical applications. Another example of USDA’s inconsistent reporting is provided by USDA’s two data centers, the National Information Technology Center (NITC) and the National Finance Center (NFC). NITC reported as mission-critical the systems that support its infrastructure (e.g., operating systems and utilities), while NFC reported its application systems but not the systems that support its infrastructure. We further found that the department’s Year 2000 Program Office and most of the component agencies lacked a key piece of information necessary for setting such priorities: the system’s failure date. This is the first date that a system will fail to recognize and process dates correctly. The oversight provided by the Year 2000 Program Office has been limited to monthly meetings with component agency executive sponsors, regularly scheduled meetings on topics such as telecommunications and reviews of monthly status reports, and written guidance on awareness and assessment. In lieu of developing additional written guidance, the Year 2000 Program Executive stated that he told the component agencies to use our readiness guide. Further, the program office maintains no up-to-date portfolio of components’ mission-critical systems, and has performed only limited analysis of what it does have. For example, in November 1997, the Program Office collected information on the (1) planned completion date of the awareness, assessment, renovation, validation, and implementation dates of systems to be renovated; (2) implementation dates of replacement systems; and (3) planned dates for systems to be retired. This information was updated in February 1998. However, the Year 2000 Program Office did not compare the November 1997 and February 1998 data to determine whether there were any changes that needed to be reviewed. 
Further, many of the dates in the February 1998 inventory were questionable. For example:
- In 39 cases, the validation date was before the renovation date.
- In 40 cases, there were no dates for renovation and/or validation.
- In 233 cases, the renovation date equaled the validation date.
To assist the Year 2000 Program Office in identifying and selecting appropriate courses of action, on April 29, 1998, the program office awarded a contract for a review of its plans, documentation, and products. Among other items, the contractor is to review whether mission-critical systems have been appropriately identified, Year 2000 time frames are realistic, appropriate test plans are being developed and implemented, and the Year 2000 program office is appropriately staffed. In addition, the contractor is to identify Year 2000 testing methodologies and risks, and risk mitigation strategies. These deliverables are expected in about a month. At your request, Mr. Chairman, we also reviewed the Year 2000 readiness of the Farm Credit Administration (FCA) and the Commodity Futures Trading Commission (CFTC), two independent agencies that regulate, respectively, the Farm Credit System and the futures and options industry. FCA and CFTC are concerned not only with the Year 2000 compliance of their internal systems, but also with those of the institutions they regulate. These organizations are heavily dependent on information technology, and Year 2000-induced failures on the part of the industries that FCA and CFTC regulate could have repercussions for the financial services industry and the national economy. FCA regulates, and performs periodic examinations of, the entities that make up the Farm Credit System. The Farm Credit System consists of a network of banks, associations, cooperatives, and other related entities that make short-, intermediate-, and long-term loans. 
In addition, FCA oversees the system’s fiscal arm, which markets its debt securities, and the Federal Agricultural Mortgage Corporation, which provides a secondary market for mortgage loans secured by agricultural real estate and rural housing. The system’s risks associated with the century change are similar to those of other financial institutions: errors in interest calculation and amortization schedules. In addition, the Year 2000 problem may expose the institutions and data centers to financial liability and loss of customer confidence. With respect to its internal systems, FCA identified 25 mission-critical systems, of which it considers 17 compliant. Of the 8 systems FCA considers noncompliant, 6 are being repaired, 1 is being replaced, and 1 is being retired. Two of the systems being repaired are the responsibility of other entities: USDA’s National Finance Center’s payroll processing system and the Department of the Treasury’s electronic payment system. FCA does not have a written test or validation strategy for any of its internal systems. At the conclusion of our review, FCA officials told us that they plan to develop a written test strategy by the end of this month. To address the Year 2000 readiness of its regulated institutions, FCA has (1) had its institutions provide responses to a Year 2000 questionnaire, (2) conducted reviews of institutions’ Year 2000 programs during its examinations, and (3) issued informational memoranda to the institutions. For example, in November 1997, December 1997, and March 1998, FCA asked its regulated institutions to complete Year 2000 questionnaires. Additionally, in November 1997, FCA issued Year 2000 examination procedures for its examiners. As of March 30, 1998, FCA reported completing 58 safety and soundness examinations that included a review of the institutions’ Year 2000 programs. In addition, 15 examinations were in process and 85 were planned through the end of fiscal year 1998. 
According to FCA, it will perform targeted Year 2000 examinations by December 30, 1998, for institutions that are not scheduled for a safety and soundness examination in fiscal year 1998. Both the questionnaire and the examination procedures were based on the guidelines developed by the Federal Financial Institutions Examination Council. We have previously reported that the Council’s guidance and procedures were not designed to collect all the data needed to determine where (i.e., in which phases) the institutions are in the Year 2000 correction process. FCA plans to issue this June a more detailed questionnaire requesting more specific information on renovation, testing, and validation. In addition, on March 31, 1998, FCA issued new examination procedures that superseded those of November 1997. FCA has assessed the risks that each of its institutions faces based on the responses to the questionnaire, as well as the knowledge of its examiners. Each institution was placed in one of three risk categories—low, moderate, or critical. As of March 31, 1998, 70 institutions were in the low-risk category, meaning that they met FCA’s guidelines; 71 were classified as moderate risk, where some key actions had not been completed or were not consistent with FCA guidelines; and 74 were classified as critical, where actions had not been taken in key areas and there was an increased risk that the institution would not be prepared for the year 2000. The informational memoranda that FCA has issued to the institutions that it regulates covered issues such as testing and establishing a due-diligence process to determine the Year 2000 readiness of service providers and software vendors. 
However, FCA has not called for the regulated institutions to develop business continuity and contingency plans unless certain deadlines are not met, service providers and software vendors have not provided adequate information about their Year 2000 readiness, or the provider or vendor solutions do not appear to be viable. As I stated earlier, business continuity and contingency plans should be formulated to respond to failures that can be predicted (e.g., system renovations that are already far behind schedule) as well as those that are unforeseen (e.g., a system that fails despite having been certified as Year 2000 compliant or a system that cannot be corrected by January 1, 2000, despite appearing to be on schedule today). In response to our review, FCA officials stated that they would issue an informational memorandum by the end of May requiring institutions to develop business continuity and contingency plans for all core business processes. CFTC’s mission is to protect market participants from manipulation, fraud, and abusive trade practices related to the sale of commodity futures and options and to foster open, competitive, and financially sound commodity futures and options markets. CFTC works in conjunction with self-regulatory organizations (SROs), such as the commodity exchanges and independent clearinghouses, to regulate these markets. All companies and individuals handling customer funds or providing trading advice must register with the Commission and be a member of at least one of these organizations. SROs audit their member institutions, and CFTC regularly reviews SROs’ audit activities. The SROs and member institutions are, not surprisingly, heavily reliant on information technology, with many interdependencies among them, including foreign firms and exchanges. Major Year 2000 failures could therefore have worldwide economic repercussions. 
CFTC reports having two mission-critical systems, which it states it repaired to be Year 2000 compliant in 1993 and 1994. It has also inventoried and assessed its external data exchanges, telecommunications, and personal computers; it plans to upgrade its personal computers and network servers next month and replace noncompliant equipment and a noncompliant network operating system by March 1999. Regarding CFTC’s oversight of SROs, on March 18, 1998, CFTC sent a letter to all exchanges and independent clearinghouses requesting information on the Year 2000 readiness status of each SRO, its member firms, and floor brokers and floor traders. In particular, CFTC requested information on (1) contingency plans, both with regard to processes that cannot be made compliant in the necessary time frame and for instances in which, despite the best plans, procedures developed to address the Year 2000 problem do not work; (2) whether and how each SRO can ensure full participation in the Year 2000 testing being planned by the Futures Industry Association; and (3) each SRO’s authority under its own rules to intervene and, if necessary, restrict or terminate a member’s business, and what procedures would apply. CFTC asked the SROs to provide the information by May 15, 1998. CFTC has a coordinator for external Year 2000 activities who will evaluate the SROs’ responses with assistance from CFTC’s Office of Information Resources Management, which is in charge of CFTC’s internal systems, and CFTC’s audit and evaluation group. Although CFTC has not yet reviewed the Year 2000 readiness of the SRO member institutions, it has worked with the SROs’ audit organization, the Joint Audit Committee. The members of this committee have requested that the registrants for which they are responsible fill out questionnaires on their Year 2000 progress. 
According to CFTC’s Chief Accountant, CFTC’s auditors will (1) confirm that the SRO auditors sent the questionnaires to their members, (2) determine whether the SRO auditors reviewed the questionnaires for completeness and unusual items, and (3) determine whether the SRO auditors followed up on any exceptions found. However, because CFTC does not have any electronic data processing auditors, it may have difficulty assessing the SROs’ Year 2000 audit activities. CFTC also issued advisory notices, in November 1997 and April 1998, and has participated in meetings with the Futures Industry Association. The advisory notices asked the SROs to report on their Year 2000 programs, asked the SRO auditors to include a Year 2000 readiness inquiry in their inspections, set disclosure requirements for institutions with Year 2000 problems, and strongly encouraged registrants to share information with SROs and membership organizations. While CFTC has taken some action to address the effect the year 2000 will have on the futures and options markets, the potential for major disruption that the year 2000 holds for these markets suggests that the Commission should take a strong leadership role in providing reasonable assurance that the futures and options markets will be Year 2000 compliant in time. In conclusion, the change of century will present many difficult challenges in information technology and in ensuring the continuity of business operations, and it has the potential to cause serious disruption to the nation and to government entities on which the public depends, including the Department of Agriculture. These risks can be mitigated and disruptions minimized with proper attention and management. However, much work remains at USDA and its agencies to address these risks and ensure continuity of mission-critical business operations. 
Continued congressional oversight through hearings such as this can help ensure that this attention is sustained and that appropriate actions are taken to address this crisis. Mr. Chairman, this completes my statement. I would be happy to respond to any questions that you or other members of the Committee may have at this time. Year 2000 Computing Crisis: Continuing Risks of Disruption to Social Security, Medicare, and Treasury Programs (GAO/T-AIMD-98-161, May 7, 1998). IRS’ Year 2000 Efforts: Status and Risks (GAO/T-GGD-98-123, May 7, 1998). Year 2000 Computing Crisis: Potential For Widespread Disruption Calls For Strong Leadership and Partnerships (GAO/AIMD-98-85, April 30, 1998). Defense Computers: Year 2000 Computer Problems Threaten DOD Operations (GAO/AIMD-98-72, April 30, 1998). Department of the Interior: Year 2000 Computing Crisis Presents Risk of Disruption to Key Operations (GAO/T-AIMD-98-149, April 22, 1998). Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, Exposure Draft, March 1998). Tax Administration: IRS’ Fiscal Year 1999 Budget Request and Fiscal Year 1998 Filing Season (GAO/T-GGD/AIMD-98-114, March 31, 1998). Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998). Year 2000 Computing Crisis: Office of Thrift Supervision’s Efforts to Ensure Thrift Systems Are Year 2000 Compliant (GAO/T-AIMD-98-102, March 18, 1998). Year 2000 Computing Crisis: Strong Leadership and Effective Public/Private Cooperation Needed to Avoid Major Disruptions (GAO/T-AIMD-98-101, March 18, 1998). Post-Hearing Questions on the Federal Deposit Insurance Corporation’s Year 2000 (Y2K) Preparedness (AIMD-98-108R, March 18, 1998). SEC Year 2000 Report: Future Reports Could Provide More Detailed Information (GAO/GGD/AIMD-98-51, March 6, 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). 
Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998). Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997). Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). 
Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computer Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997). 
|
Pursuant to a congressional request, GAO discussed its views on what additional actions must be taken to reduce the nation's year 2000 risks, focusing on: (1) an overview of the potential impact of the century change on the Department of Agriculture's (USDA) mission; (2) how the department is structured to address the crisis; (3) how much work remains to be completed; (4) the efforts of ten of USDA's component agencies and the department as a whole; and (5) the year 2000 status at the Farm Credit Administration (FCA) and the Commodity Futures Trading Commission (CFTC). GAO noted that: (1) the public faces the risk that critical services could be severely disrupted by the year 2000 computing crisis; (2) the federal government is extremely vulnerable to year 2000 problems due to its widespread dependence on computer systems to process financial transactions, deliver vital public services, and carry out its operations; (3) USDA's Chief Information Officer is responsible for leading USDA's preparation for the year 2000 date change and ensuring that all critical USDA information systems are year 2000 compliant and operational; (4) direct accountability for assessing, renovating, validating, and implementing systems conversion, however, rests with USDA's 31 component agencies, which include staff offices; (5) USDA's component agencies have a great deal of work still to be accomplished in the next 19 months in making their mission-critical systems ready for the year 2000; (6) although agencies should have completed the assessment phase of year 2000 readiness last summer, critical assessment tasks for many USDA agencies remain unfinished; (7) the component agencies judged systems to be mission-critical in an inconsistent manner; (8) the oversight provided by USDA's Year 2000 Program Office has been limited to monthly meetings with component agency executive sponsors, regularly scheduled meetings on topics such as telecommunications, reviews of monthly status reports, 
and written guidance on awareness and assessment; (9) FCA regulates, and performs periodic examinations of, the entities that make up the Farm Credit System; (10) FCA has not called for the regulated institutions to develop business continuity and contingency plans unless certain deadlines are not met or service providers and software vendors have not provided adequate information about their year 2000 readiness, or where the provider or vendor solutions do not appear viable; (11) although CFTC has not yet reviewed the year 2000 readiness of the self-regulatory organization (SRO) member institutions, it has worked with the SRO audit organization; and (12) while CFTC has taken some action to address the effect the year 2000 will have on the futures and options markets, the potential major disruption that the year 2000 could hold for these markets suggests that the commission should take a strong leadership role in providing reasonable assurance that the futures and options markets will be year 2000 compliant in time.
|
Under the IPPS, hospitals are not paid separately for each item or service they provide. Rather, payment is based on the DRG to which the entire inpatient stay is assigned. Each of the 538 DRGs has a classification, that is, an assigned combination of any of the approximately 17,000 diagnosis and procedure codes. When codes for these diagnoses and/or procedures appear together on a claim, the inpatient stay is assigned to the appropriate DRG and paid accordingly. In addition, CMS determines if the inpatient stay is eligible for an outlier payment beyond the DRG payment. Hospitals can receive outlier payments for extremely costly individual inpatient stays if they can demonstrate that the estimated cost of the stay exceeds a cost threshold established by CMS. Medicare law requires CMS to revise the DRG classifications and payment weights at least annually to reflect changes in treatment patterns, new medical services and technologies, and other factors that may change the relative costliness of an inpatient stay. To accomplish this, CMS assembles a MEDPAR file from inpatient claims for a fiscal year, so that the file contains one record for each inpatient stay provided during that year. A MEDPAR record includes the admission and discharge dates, patient and hospital identifiers, and codes that identify the diagnosis and the procedures delivered during the inpatient stay. The record also contains the hospital’s total charge for the inpatient stay. The total charge represents the charges for all services—including any new technology, drugs, or supplies—provided during the inpatient stay. The total payment to the hospital is also included in the MEDPAR record. MEDPAR records do not indicate the hospital’s actual cost for the inpatient stay or the cost of individual procedures, which are not recorded on claims by hospitals. CMS uses data from the MEDPAR file to revise the DRGs for the coming fiscal year. 
It revises the DRGs in a two-step process: reclassification of DRGs and calculation of DRG payment weights. First, CMS incorporates new codes into the IPPS that represent new diagnoses or procedures by assigning them to the same DRGs as existing codes for clinically similar diagnoses or procedures. Using data from the MEDPAR file, CMS may reclassify the DRG assignment of inpatient stays with a particular procedure or diagnosis code if it determines the inpatient stays are more similar in their clinical characteristics and costliness to a DRG other than the DRG to which those stays were previously assigned. CMS will create a new DRG if it determines that the inpatient stays involving newly identified diagnoses and procedures cannot be described by any of the existing DRGs. The classification of most DRGs does not change from year to year. The second step in revising the DRGs involves calculating weights across all DRGs, so that the DRGs reflect the expected relative differences in costliness of inpatient stays for the upcoming fiscal year. Prior to fiscal year 2007, CMS annually derived each DRG’s weight by dividing the average charge per inpatient stay for that DRG by the average charge per inpatient stay across all DRGs for a fiscal year. Effective fiscal year 2007, CMS uses charge data from the MEDPAR file and hospitals’ cost-to-charge ratios from Medicare cost reports to estimate the costs per inpatient stay. CMS then uses these average estimated costs to measure the relative costliness of inpatient stays that will be assigned to each DRG. In reclassifying and weighting DRGs, CMS generally requires that the data meet three criteria: (1) the data must be representative of the Medicare population; (2) the data must be timely—that is, they should be the most recent data available; and (3) the data must be complete—meaning that CMS needs total charges or other measure of costliness for all services provided during an inpatient stay. 
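The two weighting methods described above (charge-based before fiscal year 2007, cost-based thereafter) can be sketched roughly as follows. This is an illustrative simplification: the stay records, hospital names, charges, and cost-to-charge ratios are hypothetical, and CMS's actual computation involves standardization and adjustment steps not shown here.

```python
# Sketch of the two DRG weighting methods. Hypothetical data only;
# CMS's real calculation includes standardization steps omitted here.

def charge_based_weights(stays):
    """Pre-FY2007 method: each DRG's weight is its average charge per
    inpatient stay divided by the average charge per stay across all DRGs."""
    overall_avg = sum(s["charge"] for s in stays) / len(stays)
    by_drg = {}
    for s in stays:
        by_drg.setdefault(s["drg"], []).append(s["charge"])
    return {drg: (sum(c) / len(c)) / overall_avg for drg, c in by_drg.items()}

def cost_based_weights(stays, cost_to_charge):
    """FY2007+ method: charges are first converted to estimated costs
    using each hospital's cost-to-charge ratio, then averaged per DRG
    and divided by the overall average estimated cost."""
    costs = [{"drg": s["drg"],
              "cost": s["charge"] * cost_to_charge[s["hospital"]]}
             for s in stays]
    overall_avg = sum(c["cost"] for c in costs) / len(costs)
    by_drg = {}
    for c in costs:
        by_drg.setdefault(c["drg"], []).append(c["cost"])
    return {drg: (sum(v) / len(v)) / overall_avg for drg, v in by_drg.items()}

# Hypothetical MEDPAR-style records: one record per inpatient stay.
stays = [
    {"drg": "A", "hospital": "H1", "charge": 40_000},
    {"drg": "A", "hospital": "H2", "charge": 60_000},
    {"drg": "B", "hospital": "H1", "charge": 20_000},
    {"drg": "B", "hospital": "H2", "charge": 30_000},
]
print(charge_based_weights(stays))
print(cost_based_weights(stays, {"H1": 0.5, "H2": 0.4}))
```

Either way, a weight above 1.0 marks a DRG whose stays are costlier than the average stay, so the relative payment scales accordingly.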
Charge data collected at the inpatient-stay level allow CMS to appropriately measure relative costliness across the DRGs. The DRG classifications and payment weights for any given fiscal year are based on data from the MEDPAR file for inpatient services provided 2 fiscal years prior, and therefore, do not reflect the cost of the most recently used technologies. For example, during the summer of 2006, when CMS was finalizing the DRGs for fiscal year 2007, the most recent data pertained to inpatient services provided through the end of fiscal year 2005, and did not reflect the cost of technologies first adopted by hospitals in fiscal year 2006. The time lag in the data that are used to set DRG classifications and weights is primarily due to two factors in combination: the time it takes to annually finalize the DRGs, and the time it takes for CMS to process each inpatient claim into a MEDPAR record. First, Medicare law requires that DRG classifications and weights be revised annually and published in the Federal Register on or before the August 1 before each fiscal year. Fiscal years begin October 1 and end the following September 30. In order to obtain public input, CMS generally publishes its proposed DRGs for the coming fiscal year in the Federal Register each April and accepts comments for 60 days before publishing the final DRGs by August. The second factor that affects the incorporation of the cost of new technologies into the MEDPAR file involves the time it takes for CMS to process each inpatient claim into a MEDPAR record. Before a record for an inpatient stay can be added to the MEDPAR file, the hospital must submit the claim, a private contractor must process and pay the claim, and CMS must create a MEDPAR record using information on the claim. It takes about 6 months from the time of the inpatient stay to the time the MEDPAR record for that inpatient stay is created. 
In addition, the MEDPAR record may not be added to the MEDPAR file until as much as 3 months later, since the MEDPAR file is updated quarterly—in December, March, June, and September. This means that MEDPAR records are not available to CMS until 6 to 9 months after the inpatient stay has occurred. (See fig. 1.) Because DRG payments for a given fiscal year are based on claims for inpatient services provided 2 fiscal years prior, Medicare can provide hospitals with add-on payments, in addition to the DRG-based payments, for inpatient stays involving certain new technologies. CMS designates technologies for add-on payments if they meet specified criteria for being new, costly, and a substantial clinical improvement over existing technologies. CMS considers a technology new if no more than 2 to 3 years have passed between the date when the technology was first introduced on the market, as identified by CMS, and the payment year. At the end of this period, CMS assumes the costs for the technology to be fully reflected in the most recent MEDPAR file and supplemental add-on payments are no longer necessary. CMS considers a new technology costly if the average amount charged by hospitals for all inpatient stays involving the technology exceeds a charge threshold or a predetermined amount. CMS considers a new technology a substantial clinical improvement over existing technologies if the technology has one or more unique clinical advantages—for example, the technology diagnoses a medical condition in a patient population where that condition was previously undetectable. Every year, CMS accepts applications from technology manufacturers, hospitals, and other stakeholders, in which they present evidence that certain technologies meet the criteria for add-on payments in the coming fiscal year. When CMS publishes its final DRG classifications and weights, it summarizes each application, and explains why the particular technology was approved or rejected for add-on payments. 
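The cost criterion described above can be sketched as a simple check, under the simplifying assumption that it reduces to comparing the average charge across all inpatient stays involving the technology against the CMS-set charge threshold. The charges and threshold below are invented for illustration.

```python
# Illustrative sketch of the "costly" test for new technology add-on
# payments: the average charge for inpatient stays involving the
# technology must exceed a CMS-set charge threshold. The figures are
# hypothetical, not actual CMS thresholds.

def meets_cost_criterion(stay_charges, charge_threshold):
    """Return True if the average charge across all inpatient stays
    involving the technology exceeds the threshold."""
    avg_charge = sum(stay_charges) / len(stay_charges)
    return avg_charge > charge_threshold

# Charges per stay, e.g. from external data submitted by an applicant
# and validated against the MEDPAR file.
charges = [52_000, 61_000, 48_500]
print(meets_cost_criterion(charges, charge_threshold=50_000))  # True here
```

In practice this check is only one gate: a technology must also satisfy the newness and substantial clinical improvement criteria before add-on payments are approved.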
For fiscal year 2007, CMS approved one new application and continued add-on payments for two technologies approved for fiscal year 2006. As a result, hospitals receive an add-on payment, in addition to a DRG payment, when they submit a claim to Medicare that includes the code for a procedure involving one of those three technologies. The amount of the add-on payment is determined on a claim-by-claim basis; the hospital receives up to half the estimated cost of the technology, depending on the amount by which the total cost of the inpatient stay is estimated by CMS to exceed the DRG-based payment. CMS has used external data for two purposes: to inform DRG reclassification and to evaluate new technology add-on payment applications. To inform DRG reclassification, CMS accepts the submission of external data that are intended to demonstrate that inpatient stays involving a new technology are costlier on average than the other inpatient stays in the same DRG. CMS uses data from the MEDPAR file to validate the external data submitted. Generally, CMS will not make a reclassification decision for a DRG involving a new technology if the technology is so new that it does not appear in the MEDPAR file. To evaluate new technology add-on payment applications, CMS has generally used external data in conjunction with data from the MEDPAR file to evaluate whether a new technology meets one of three eligibility criteria, specifically, the criterion related to cost. CMS officials told us they have used external data to inform the DRG reclassification process. External data are submitted by stakeholders as part of a request to reclassify—from one DRG to another—certain procedure codes involving particular new technologies. Although CMS will accept the submission of external data, it has used data from the MEDPAR file to validate the external data submitted. 
Specifically, when external data are submitted for a proposed DRG reclassification for a procedure or new technology, CMS’s policy is to find the same or similar evidence in the MEDPAR file. CMS encourages stakeholders to submit their external data for DRG reclassification purposes by the December before the issuance of the proposed rule the following April. Although there is no formal application process to request a DRG reclassification, CMS explained its policy for accepting external data submissions in its July 30, 1999, notice of final rulemaking. It stated that external data submissions must be sufficiently detailed—include applicable hospital and beneficiary identifiers, procedure and diagnosis codes, admission and discharge dates, and total charges for each inpatient stay involving the codes—so that CMS can validate whether the same, or similar, inpatient stays appear in the MEDPAR file. CMS also requires that the external data submitted comprise a complete set, or representative sample, of cases involving the technology. CMS will not reclassify a procedure code from one DRG to another based on the external data submission alone. As a result, CMS generally will not make a DRG reclassification involving a technology that is so new it does not yet appear in the MEDPAR file. CMS has used external data to evaluate applications for new technology add-on payments to better recognize the cost of technologies that are clinically beneficial yet would not be fully reflected in the MEDPAR file. CMS designates technologies for add-on payments if they meet specified criteria for being new, costly, and a substantial clinical improvement over existing technologies. CMS’s use of external data is limited to its evaluation of the cost criterion. CMS has generally used external data and data from the MEDPAR file to evaluate whether a new technology that is being considered for an add-on payment meets the criterion for being considered costly. 
As of fiscal year 2007, according to our review of CMS regulations and our interviews with CMS officials, CMS has received few applications for add-on payments—a total of 25, which is an average of about 5 per year since fiscal year 2002. All but two applications were submitted by device and drug manufacturers. When CMS receives an application for a new technology add-on payment, it first evaluates whether the technology meets the criterion of being new before it evaluates the technology under the cost and clinical improvement criteria. The majority of new technology add-on payment applications have been rejected because the technology failed to meet the newness criterion. For these applications, CMS did not have to review any information related to the cost and clinical improvement criteria, including external data related to the cost criterion. Of the 25 applications received, CMS evaluated 14 under the cost criterion. Of these 14 technologies, CMS approved 7 for new technology add-on payments. When CMS evaluates new technologies under the cost criterion, it uses external data in conjunction with data from the MEDPAR file to determine whether the technology meets the cost criterion. Table 1 illustrates three hypothetical scenarios in which CMS, during fiscal year 2007, could use external data in conjunction with data from the fiscal year 2006 MEDPAR file in determining if a new technology is eligible for add-on payments for fiscal year 2008 under the cost criterion. Data collected and used by other government agencies have limitations for CMS’s use in setting DRG payments. This is because, when setting DRG payments, CMS generally needs data that are representative of the Medicare population, timely, and complete in that the data include the total charge or other measure of costliness for all services provided during an inpatient stay, including new technologies. 
The data we identified from BLS, VA, DOD, and AHRQ were either not representative of the Medicare population, were no timelier than data from the MEDPAR file, or were not complete. BLS collects monthly selling prices for samples of products from three industries that may have data relevant to CMS because these data include price information for new technologies: medical instruments, pharmaceuticals, and biological products. These data, collected from manufacturers, are used to publish the Producer Price Index (PPI), which tracks the inflation of prices by producers of goods and services at the national level. Because BLS cannot obtain pricing for every medical instrument and pharmaceutical and biological product sold, it employs a sampling methodology to track prices. Using probability statistics, BLS selects a sample of products whose price changes over time will be representative of the price changes characteristic of the medical instrument and pharmaceutical and biological product industries. Generally, BLS selects a new sample of products per industry every 7-8 years. The monthly selling prices collected include prices for transactions between manufacturers and hospitals, wholesalers, group purchasing organizations, or other customers. BLS data have a number of limitations that would affect CMS’s use in setting DRG payments. Because the selling prices reflect transactions between manufacturers and a variety of purchasers such as group purchasing organizations as well as hospitals, not all of these prices are directly relevant for setting DRG payments. To set payments, CMS needs data that reflect hospitals serving Medicare beneficiaries. In addition, since BLS relies on a sample of products from each industry, and the sample is generally updated on average every 7-8 years, it is unlikely that BLS will have price data for a new technology that CMS does not already have, or cannot obtain from a manufacturer. 
Finally, the BLS data lack information needed by CMS on the costliness of inpatient stays involving the technology relative to other inpatient stays; instead, they only include price data for the technology alone. Two types of VA data, price data from the federal supply schedule (FSS) and data on inpatient stays at VA hospitals, also have limitations that would affect CMS’s use in setting DRG payments. VA collects data from drug and device manufacturers on the prices manufacturers charge their Most-Favored Customers (MFCs). These data are used to negotiate prices on the FSS, which is a schedule of prices for products used by federal agencies. Prices awarded on the FSS are equal to or better than the prices manufacturers charge their MFCs. Because all federal agencies and programs may access FSS price information on the Internet, CMS already has access to these prices. Similar to BLS data, FSS data are not complete for CMS’s purposes because they lack information on the costliness of inpatient stays involving the technology relative to other inpatient stays. VA also collects data on inpatient stays at its medical centers. These data are complete for CMS’s purposes in that they include all services provided during inpatient stays and their associated costs, including the cost of any new technologies. However, there are still limitations for CMS’s use of these data in setting DRG payments. First, the costs of providing care at VA medical centers may not be representative of the costs of providing care at hospitals that provide care to Medicare beneficiaries. VA is a provider of services and, as such, has the authority to purchase new technologies at discounted rates through various federal purchasing options, such as the FSS. Medicare, on the other hand, is a payer—not a provider—of services and does not purchase drugs and devices for hospitals. Therefore, Medicare does not negotiate discounts on behalf of hospitals providing services to Medicare beneficiaries. 
Furthermore, VA inpatient stay data are no timelier than MEDPAR data for determining payments to hospitals. For example, VA’s allocation of funding to its medical centers for fiscal year 2007 is based on data spanning fiscal years 2003 through 2005. In contrast, Medicare used fiscal year 2005 data to develop fiscal year 2007 DRG payments. DOD data also have limitations for CMS’s use in setting DRG payments. DOD health care delivery consists of two integrated systems: the direct care system delivered by DOD hospitals, known as Military Treatment Facilities (MTF), and the civilian system. The latter is coordinated by the TRICARE Management Activity (TMA), which contracts with managed care organizations to deliver care, including inpatient services. Data from the DOD direct care system would not meet CMS’s criterion for completeness for two reasons. First, DOD collects overall cost data at the facility level rather than the inpatient-stay level. CMS needs charge or cost data at the inpatient-stay level to set DRG payments. Second, while DOD uses cost and pricing data from a variety of sources when purchasing medical products, such as drugs and devices for its MTFs, these data alone are not appropriate for CMS’s use in setting DRG payments because CMS needs information on the costliness of inpatient stays involving the technology relative to other inpatient stays. Data from the DOD civilian system also have limitations for CMS’s use in setting DRG payments. TMA pays for inpatient stays using a DRG-based payment system that is modeled on the Medicare inpatient prospective payment system (IPPS). Although TMA’s data would be complete for CMS’s purposes in that the data include total charges for all services provided during an inpatient stay, they would not meet CMS’s criterion for representativeness. According to DOD, its population tends to be younger and healthier and, therefore, not comparable to the Medicare population. 
AHRQ collects claims data from nearly all nongovernmental acute care hospitals in 38 states, and these data represent approximately 90 percent of inpatient stays in the United States. AHRQ partners with state organizations, which collect claims data directly from hospitals; these data are then submitted to AHRQ. According to AHRQ, these data, which are available to researchers through the Healthcare Cost and Utilization Project (HCUP) claims database, are representative of the Medicare population overall. The data from the HCUP database are also complete in that they include charge, diagnosis, and procedure information from Medicare as well as private payers. Although data from the HCUP database would meet CMS’s criteria of being representative of the Medicare population and complete, these data are less timely than data from the MEDPAR file. AHRQ data lag by 15 to 18 months; for example, if CMS were to use data from the HCUP database to set payments for fiscal year 2007, the latest available data from AHRQ would include inpatient services for calendar year 2004, while the latest available data from the MEDPAR file would include inpatient services from fiscal year 2005. Data from the MEDPAR file remain the primary data source for setting DRG payments because they include all charges from paid inpatient claims for inpatient services provided to all Medicare beneficiaries across all hospitals paid under the IPPS. CMS needs these data to determine the payment for each DRG relative to other DRGs. In instances where data from the MEDPAR file have lacked charge information for certain stays involving new technologies, CMS has used external data to inform the DRG reclassification process and to evaluate new technology add-on payment applications. To set DRG payments, CMS needs data that meet the criteria of being representative, timely, and complete. 
Although BLS, VA, DOD, and AHRQ collect data for their own purposes that could potentially be useful to CMS, these data are limited in their utility for setting DRG payments because they do not always meet CMS’s criteria. In commenting on a draft of this report, CMS stated that it agreed with our findings and reiterated its commitment to using external data when appropriate. (See app. I.) DOD said it had no comments on the draft of this report. (See app. II.) We received comments from VA via e-mail; the department agreed with the facts as they pertain to VA. We also sent a draft of this report to the Department of Labor (DOL), which did not provide comments. Representatives from the American Hospital Association (AHA), the Association of American Medical Colleges (AAMC), and the Biotechnology Industry Organization (BIO) provided oral comments on a draft of this report. They said they agreed with our findings related to the use of external data by CMS. With regard to our finding that data from other government agencies have limitations for CMS’s use in setting DRG payments, both AAMC and BIO said we should have discussed CMS’s use of data from sources other than the federal government. As we discussed in the draft report, CMS has used external data from sources other than the federal government, including manufacturer data, to inform DRG reclassification and evaluate new technology add-on applications. AAMC said it was concerned that we only examined how CMS used the external data and did not conduct an evaluation of CMS’s policy for using external data. However, as discussed in the draft report, an examination of CMS’s policy for accepting external data was not within the scope of the report. In addition, CMS, AAMC, and AHA offered technical comments on the draft of this report, which we incorporated as appropriate. We are sending a copy of this report to the Administrator of CMS and interested congressional committees. We will also provide copies to others on request. 
The report is available online at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the contact above, Maria Martino, Assistant Director; Melanie Anne Egorin; Yorick F. Uzes; and Craig Winslow made key contributions to this report.
|
Under Medicare, hospitals generally receive fixed payments for inpatient stays based on diagnosis-related groups (DRG), a system that classifies stays by patient diagnoses and procedures. The Centers for Medicare & Medicaid Services (CMS) annually uses its own data to reclassify DRGs. CMS also makes add-on payments for stays involving new technologies that meet three eligibility criteria. Stakeholders may submit data that are external to CMS as part of a DRG reclassification request or an add-on payment application. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 required GAO to examine whether CMS could improve its use of external data, including data collected by other government agencies, for DRG payments. As discussed with the committees of jurisdiction, GAO examined (1) to what extent CMS has used external data in determining payments for inpatient stays involving new technologies, and (2) to what extent external data from other government agencies can be used by CMS in determining DRG payments for inpatient stays involving new technologies. GAO interviewed officials from CMS and industry stakeholders. GAO interviewed officials from the Bureau of Labor Statistics (BLS), the Department of Veterans Affairs (VA), the Department of Defense (DOD), and the Agency for Healthcare Research and Quality (AHRQ) because these agencies may have data useful to CMS. GAO also reviewed regulations and other CMS materials. CMS has used external data for two purposes: to inform DRG reclassification and to evaluate new technology add-on payment applications. To inform DRG reclassification, CMS accepts the submission of external data that are intended to demonstrate that inpatient stays involving a new technology are costlier on average than the other inpatient stays in the same DRG. CMS uses its data from the Medicare Provider Analysis and Review (MEDPAR) file to validate the external data submitted. 
Specifically, when external data are submitted for a proposed DRG reclassification for a procedure or new technology, CMS's policy is to find the same or similar evidence in the MEDPAR file. Generally, CMS will not make a reclassification decision for a DRG involving a new technology if the technology is so new that it does not appear in the MEDPAR file. To evaluate new technology add-on payment applications, CMS has generally used external data in conjunction with data from the MEDPAR file to evaluate whether a new technology meets one of the three eligibility criteria, specifically the criterion related to cost. Data from other government agencies have limitations for CMS's use in setting DRG payments for inpatient stays involving new technologies. This is because when setting DRG payments, CMS generally needs data that are representative of the Medicare population, timely, and complete in that the data include the total charge or other measure of costliness for all services provided during an inpatient stay, including new technologies. The data GAO identified from other government agencies were either not representative of the Medicare population, not timelier than data from the MEDPAR file, or not complete. Data from the MEDPAR file remain the primary data source for setting DRG payments because they include all charges from paid inpatient claims for inpatient services provided to all Medicare beneficiaries across all hospitals paid under the inpatient prospective payment system (IPPS). In instances where data from the MEDPAR file have lacked charge information for certain stays involving new technologies, CMS has used external data to inform the DRG reclassification process and to evaluate new technology add-on payment applications. To set DRG payments, CMS needs data that meet criteria of being representative, timely, and complete. 
Although BLS, VA, DOD, and AHRQ collect data for their own purposes that could potentially be useful to CMS, these data are limited in their utility to set DRG payments because they do not always meet CMS's criteria. In commenting on a draft of this report, CMS stated that it agreed with GAO's findings.
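The comparison at the heart of a DRG reclassification request, showing that stays involving a new technology are costlier on average than other stays in the same DRG, can be sketched in a few lines. The DRG code and charge figures below are invented purely for illustration; CMS's actual validation against the MEDPAR file is far more involved than this sketch.

```python
# Hypothetical illustration of the cost comparison underlying a DRG
# reclassification request: are inpatient stays that involve a new
# technology costlier, on average, than other stays in the same DRG?
# All DRG codes and charge figures below are invented for this sketch.

from statistics import mean

# (drg, uses_new_technology, total_charges_for_stay)
stays = [
    ("DRG-001", True, 48_000),
    ("DRG-001", True, 52_000),
    ("DRG-001", False, 31_000),
    ("DRG-001", False, 29_000),
    ("DRG-001", False, 35_000),
]

with_tech = [charge for _, uses_tech, charge in stays if uses_tech]
without_tech = [charge for _, uses_tech, charge in stays if not uses_tech]

avg_with = mean(with_tech)        # average charge for stays with the technology
avg_without = mean(without_tech)  # average charge for the other stays

# A submission would argue that stays with the technology are costlier
# on average; CMS would then look for the same pattern in MEDPAR data.
print(f"avg with technology:    ${avg_with:,.0f}")
print(f"avg without technology: ${avg_without:,.0f}")
print(f"ratio: {avg_with / avg_without:.2f}")
```

With these invented figures, the stays involving the technology average well above the others, the kind of pattern an external submission is meant to demonstrate.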
|
Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series—managed by NOAA—and the Defense Meteorological Satellite Program (DMSP)—managed by the Air Force. These satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products—including both terrestrial and space weather. These satellite data are also the predominant input to numerical weather prediction models, which are a primary tool for forecasting weather 3 or more days in advance—including forecasting the path and intensity of hurricanes. The weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies such as climate monitoring. With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program, NPOESS, is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2026. To manage this program, DOD, NOAA, and NASA formed the tri-agency Integrated Program Office, located within NOAA. 
Within the program office, each agency has the lead on certain activities: NOAA has overall program management responsibility for the converged system and for satellite operations; DOD has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the costs of funding NPOESS, while NASA funds specific technology projects and studies. The NPOESS program office is overseen by an Executive Committee, which is made up of the Administrators of NOAA and NASA and the Under Secretary of the Air Force. NPOESS is a major system acquisition that was originally estimated to cost about $6.5 billion over the 24-year life of the program from its inception in 1995 through 2018. The program is to provide satellite development, satellite launch and operation, and ground-based satellite data processing. These deliverables are grouped into four main categories: (1) the space segment, which includes the satellites and sensors; (2) the integrated data processing segment, which is the system for transforming raw data into environmental data records (EDR) and is to be located at four data processing centers; (3) the command, control, and communications segment, which includes the equipment and services needed to support satellite operations; and (4) the launch segment, which includes launch vehicle services. When the NPOESS engineering, manufacturing, and development contract was awarded in August 2002, the cost estimate was adjusted to $7 billion. Acquisition plans called for the procurement and launch of six satellites over the life of the program, as well as the integration of 13 instruments—consisting of 10 environmental sensors and 3 subsystems. Together, the sensors were to receive and transmit data on atmospheric, cloud cover, environmental, climatic, oceanographic, and solar-geophysical observations. 
The subsystems were to support non-environmental search and rescue efforts, sensor survivability, and environmental data collection activities. The program office considered 4 of the sensors to be critical because they provide data for key weather products; these sensors are in bold in table 1, which describes each of the expected NPOESS instruments. In addition, a demonstration satellite, called the NPOESS Preparatory Project (NPP), was planned to be launched several years before the first NPOESS satellite in order to reduce the risk associated with launching new sensor technologies and to ensure continuity of climate data with NASA’s Earth Observing System satellites. NPP was to host three of the four critical NPOESS sensors, as well as one other noncritical sensor and to provide the program office and the processing centers an early opportunity to work with the sensors, ground control, and data processing systems. When the NPOESS development contract was awarded, the schedule for launching the satellites was driven by a requirement that the satellites be available to back up the final POES and DMSP satellites should anything go wrong during the planned launches of these satellites. Early program milestones included (1) launching NPP by May 2006, (2) having the first NPOESS satellite available to back up the final POES satellite launch in March 2008, and (3) having the second NPOESS satellite available to back up the final DMSP satellite launch in October 2009. If the NPOESS satellites were not needed to back up the final predecessor satellites, their anticipated launch dates would have been April 2009 and June 2011, respectively. Over several years, we reported that NPOESS had experienced continued cost increases, schedule delays, and serious technical problems. By November 2005, we estimated that the cost of the program had grown from $7 billion to over $10 billion. 
In addition, the program was experiencing major technical problems with the Visible/infrared imager radiometer suite (VIIRS) sensor and expected to delay the launch date of the first satellite by almost 2 years. These issues ultimately required difficult decisions to be made about the program’s direction and capabilities. The Nunn-McCurdy law requires DOD to take specific actions when cost growth on a major defense acquisition program exceeds certain thresholds. The law requires the Secretary of Defense to notify Congress when a major defense acquisition is expected to overrun its current baseline by 15 percent or more and to certify the current program to Congress when it is expected to overrun its baseline by 25 percent or more. In November 2005, NPOESS exceeded the 25 percent threshold, and DOD was required to certify the program. Certifying a program entails providing a determination that (1) the program is essential to national security, (2) there are no alternatives to the program that will provide equal or greater military capability at less cost, (3) the new estimates of the program’s cost are reasonable, and (4) the management structure for the program is adequate to manage and control costs. DOD established tri-agency teams—made up of DOD, NOAA, and NASA experts—to work on each of the four elements of the certification process. In June 2006, DOD (with the agreement of both of its partner agencies) certified a restructured NPOESS program, estimated to cost $12.5 billion through 2026. This decision approved a cost increase of $4 billion over the prior approved baseline cost and delayed the launch of NPP and the first 2 satellites by roughly 3 to 5 years. The new program also entailed reducing the number of satellites to be produced and launched from 6 to 4, and reducing the number of instruments on the satellites from 13 to 9—consisting of 7 environmental sensors and 2 subsystems. 
It also entailed using NPOESS satellites in the early morning and afternoon orbits and relying on European satellites for midmorning orbit data. Table 2 summarizes the major program changes made under the Nunn-McCurdy certification decision. The Nunn-McCurdy certification decision established new milestones for the delivery of key program elements, including launching NPP by January 2010, launching the first NPOESS satellite by January 2013, and launching the second NPOESS satellite by January 2016. These revised milestones deviated from prior plans to have the first NPOESS satellite available to back up the final POES satellite should anything go wrong during that launch. Delaying the launch of the first NPOESS satellite meant that if the final POES satellite failed on launch, satellite data users would need to rely on the existing constellation of environmental satellites until NPP data became available—almost 2 years later. Although NPP was not intended to be an operational asset, NASA agreed to move NPP to a different orbit so that its data would be available in the event of a premature failure of the final POES satellite. If the health of the existing constellation of satellites diminishes—or if NPP data are not available, timely, and reliable—there could be a gap in environmental satellite data. In order to reduce program complexity, the Nunn-McCurdy certification decision decreased the number of NPOESS sensors from 13 to 9 and reduced the functionality of 4 sensors. Specifically, of the 13 original sensors, 5 sensors remain unchanged (but 2 are on a reduced number of satellites), 3 were replaced with older or less capable sensors, 1 was modified to provide less functionality, and 4 were canceled. The certification decision also made allowances for the reintegration of the canceled sensors. 
Specifically, the program was directed to build each NPOESS spacecraft with enough room and power to accommodate the sensors that were removed from the program and to fund the integration and testing of any sensors that are later restored. Agency sponsors external to the program would be responsible for justifying and funding the sensor’s development, while the NPOESS Executive Committee would have the final decision on whether to include the sensor on a specific satellite. Table 3 identifies the changes to the NPOESS instruments. The changes in NPOESS sensors affected the number and quality of the resulting weather and environmental products, called environmental data records (EDR). In selecting sensors for the restructured program during the Nunn-McCurdy process, decision makers placed the highest priority on continuing current operational weather capabilities and a lower priority on obtaining selected environmental and climate measuring capabilities. As a result, the revised NPOESS system has significantly less capability for providing global climate measures than was originally planned. Specifically, the number of EDRs was decreased from 55 to 39, of which 6 are of a reduced quality. The 39 EDRs that remain include cloud base height, land surface temperature, precipitation type and rate, and sea surface winds. The 16 EDRs that were removed include cloud particle size and distribution, sea surface height, net solar radiation at the top of the atmosphere, and products to depict the electric fields in the space environment. The 6 EDRs that are of a reduced quality include ozone profile, soil moisture, and multiple products depicting energy in the space environment. The program office has completed major activities associated with restructuring NPOESS, but key supporting activities remain— including obtaining approval of key acquisition documents—and delays in completing these activities could affect the program’s funding and schedule. 
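The Nunn-McCurdy notification and certification triggers described earlier reduce to a comparison of estimated cost growth against the 15 and 25 percent thresholds. The sketch below is illustrative only: the dollar figures are rounded from the report, and the function name is our own.

```python
# Sketch of the Nunn-McCurdy breach logic described in this report:
# DOD must notify Congress at a 15 percent overrun of the current
# baseline and certify the program at a 25 percent overrun. The
# function name is hypothetical; the thresholds are from the report.

def nunn_mccurdy_action(baseline_billions: float, estimate_billions: float) -> str:
    """Return the action triggered by the estimated cost growth."""
    growth = (estimate_billions - baseline_billions) / baseline_billions
    if growth >= 0.25:
        return "certify"
    if growth >= 0.15:
        return "notify"
    return "none"

# NPOESS in November 2005: a roughly $7 billion baseline against an
# estimate of over $10 billion, i.e., cost growth of more than 40
# percent, well past the 25 percent certification threshold.
print(nunn_mccurdy_action(7.0, 10.0))
```

With the report's rounded figures, the function returns the certification outcome that in fact occurred in June 2006.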
Restructuring a major acquisition program like NPOESS is a process that involves reassessing and redefining the program’s deliverables, costs, and schedules, and renegotiating the contract. The restructuring process also involves revising important acquisition documents, such as the tri-agency memorandum of agreement, the acquisition strategy, the system engineering plan, the integrated master schedule (which defines what needs to happen by when), and the acquisition program baseline. In April 2007, we reported that the key acquisition documents were more than 6 months past their original September 2006 due date, and we recommended that the appropriate executives immediately finalize them. This recommendation has not yet been addressed, and agency officials subsequently extended the due dates of the documents to September 2007. During the past year, the program redefined its deliverables, costs, and schedules and renegotiated the NPOESS contract. To do so, the program developed a new program plan and conducted an integrated baseline review of the entire program, which validated that the new deliverables, costs, and schedules were feasible. It also completed key acquisition documents, including the system engineering plan and the integrated master schedule. The program and the prime contractor signed a modified contract in July 2007. However, key activities remain to be completed, including obtaining executive approval of key acquisition documents. Specifically, even though agency officials were expected to approve key acquisition documents by September 2007, the appropriate executives have not yet signed off on documents including the tri-agency memorandum of agreement or the acquisition strategy report. They have also not signed off on the acquisition program baseline, the fee management plan, the test and evaluation master plan, and the two-orbit program plan (a plan for how to use European satellite data with NPOESS). 
Program officials stated that the program has been able to renegotiate the contract and to proceed in developing sensors and systems without these documents being signed because the documents have widespread acceptance within the three agencies. They reported that the delays are largely due to the complexity of obtaining approval from three agencies. For example, program officials reported that an organization within DOD suggested minor changes to the tri-agency memorandum of agreement after months of coordination and after it had already been signed by both the Secretary of Commerce and the Administrator of NASA. Further, after this issue was resolved, a senior official at DOD requested another change to the document. The program office has now made the recommended changes and is re-initiating the coordination process. More recently, in April 2008, DOD moved the due dates for all of the acquisition documents other than the memorandum of agreement and fee management plan from September 2007 to August 31, 2008. (See appendix I for the history of the due dates and status of each document). In addition, even though DOD has had a role in delaying these documents, the Department has stated it would not release fiscal year 2009 funds to the program if these acquisition documents are not completed by the new due date. Without executive approval of key acquisition documents, the program lacks the underlying commitment necessary to effectively manage a tri-agency program. In addition, given DOD’s newest instructions, any further delays in completing these acquisition documents could affect the program’s funding and schedule. Over the last year, the NPOESS program has made progress by completing planned development and testing activities on its ground and space segments, but key milestones for delivering the VIIRS sensor and launching NPP have been delayed by about 8 months. 
Moving forward, risks remain in completing the testing of key sensors and integrating them on the NPP spacecraft, in resolving interagency disagreements on the appropriate level of system security, and in revising estimated costs for satellite operations and support. The program office is aware of these risks and is working to mitigate them, but continued problems could affect the program’s overall schedule and cost. Given the tight time frames for completing key sensors, integrating them on the NPP spacecraft, and getting the ground-based data processing system developed, tested, and deployed, it is important for the NPOESS Integrated Program Office, the Program Executive Office, and the Executive Committee to continue to provide close oversight of milestones and risks. Development of the ground segment—which includes the interface data processing system, the ground stations that are to receive satellite data, and the ground-based command, control, and communications system—is under way and on track. For example, the Interface Data Processing System has been installed at one of the two locations that are to receive NPP data, and the command, control, and communications system passed acceptance testing for use with NPP. However, important work in developing the algorithms that translate satellite data into weather products within the integrated data processing segment remains to be completed. Table 4 describes each of the components of the ground segment and identifies the program-provided risk level and status of each. Over the past year, the program made progress on the development of the space segment, which includes the sensors and the spacecraft. Five sensors are of critical importance because they are to be launched on the NPP satellite. 
Initiating work on another sensor, the Microwave imager sounder, is also important because this new sensor—which is to replace the canceled Conical-scanned microwave imager/sounder—will need to be developed in time for the second NPOESS satellite launch. Among other activities, the program has successfully completed vibration testing of the flight unit of the Cross-track infrared sounder (CrIS), a major pre-environmental testing review for the VIIRS instrument, integration and risk reduction testing of the flight unit of the Ozone mapper/profiler suite, and thermal testing of the NPP spacecraft with three sensors on board. In addition, the program made decisions on how to proceed with the Microwave imager sounder and recently awarded a contract to a government laboratory for its development. However, the program experienced problems on VIIRS, including poor workmanship on selected subcomponents and delays in completing key tests. These issues delayed VIIRS delivery to the NPP contractor by 8 months. This late delivery will in turn delay the satellite’s launch from late September 2009 to early June 2010. This delay shortens the time available for incorporating lessons learned from NPP, while it is in orbit, into future NPOESS missions and could lead to gaps in the continuity of climate and weather data if predecessor satellites fail prematurely. Also, the CrIS sensor experienced a cost overrun and schedule delays as the contractor worked to recover from a structural failure, and it is currently several weeks behind schedule because thermal vacuum testing is taking longer than planned. The status and risk level of each of the components of the space segment is described in table 5. Moving forward, the program continues to face risks. Over the next 2 years, it will need to complete the development of the key sensors, test them, integrate and test them on the NPP spacecraft, and test these systems with the ground-based data processing systems. 
In addition, the program faces two other issues that could affect its overall schedule and cost. One is that there continues to be disagreement between NOAA and DOD on the appropriate level of system security. To date, NPOESS has been designed and developed to meet DOD’s standards for a mission essential system, but NOAA officials believe that the system should be built to meet more stringent standards. Implementing more stringent standards could cause rework and retesting, and potentially affect the cost and schedule of the system. Another issue is that program life cycle costs could increase once a better estimate of the cost of operations and support is known. The $12.5 billion estimated life cycle cost for NPOESS includes a rough estimate of $1 billion for operations and support. The NPOESS program office is working closely with the contractor and subcontractors to resolve these program risks. To address sensor risks, the program office and officials from NASA’s Goddard Space Flight Center commissioned an independent review team to assess the thoroughness and adequacy of practices being used in the assembly, integration, and testing of the VIIRS and CrIS instruments in preparation for the NPP spacecraft. The team found that the contractors for both sensors had sound test programs in place, but noted risks with VIIRS’s schedule and with CrIS’s reliability and performance. The program office adjusted the VIIRS testing schedule and is monitoring the CrIS testing results. In addition, the program office recently instituted biweekly senior-level management meetings to review progress on VIIRS’s development, and program officials noted that both the prime contractor and the program executive office will have senior officials onsite at the contractor’s facility to provide extensive, day-to-day oversight of management activities to assist in resolving issues. 
To address the risk posed by changing security requirements late in the system’s development, program officials commissioned a study to determine the effect of more stringent standards on the system. This study was completed in March 2008, but has not yet been released. To address the risk of cost growth due to poor estimates of operations and support costs, DOD’s cost analysis group is currently refining this estimate. Program officials estimated that the program costs could grow by about $1 billion, and expect to finalize revised operations and support costs in July 2008. The program office is aware of program risks and is working to mitigate them, but these issues could affect the program’s overall schedule and cost. Given the tight time frames for completing key sensors, integrating them on the NPP spacecraft, and getting the ground-based data processing system developed, tested, and deployed, it is important for the NPOESS program office, the Program Executive Office, and the Executive Committee to continue to provide close oversight of milestones and risks. When the NPOESS restructuring agreement removed four climate and space environment sensors from the program and degraded four others, it led NASA, NOAA, and DOD to reassess their priorities and options for obtaining climate and space environment data. Since the June 2006 restructuring decision, the three agencies have taken preliminary steps to restore the capabilities of selected climate and space weather sensors that were degraded or removed from the NPOESS program by prioritizing the sensors, assessing options for restoring them, and making decisions to restore selected sensors in order to mitigate near-term data gaps. However, the agencies have not yet developed plans to mitigate the loss of these sensors on a long-term basis. Best practices in strategic planning suggest that agencies develop and implement long-term plans to guide their short-term activities. 
Until such plans are developed, the agencies may lose their windows of opportunity for selecting cost-effective options or they may resort to an ad hoc approach to restoring these sensors. Lacking plans almost 2 years after key sensors were removed from the NPOESS program, the agencies face increased risk of gaps in the continuity of climate and space environment data. While NPOESS was originally envisioned to provide only weather observations, this mission was later expanded to include long-term continuity for key climate data. Maintaining the continuity of climate and space data over decades is important to identify long-term environmental cycles (such as the 11-year solar cycle and multiyear ocean cycles including the El Niño effect) and their impacts, and to detect trends in climate change and global warming. The Nunn-McCurdy restructuring decision removed four sensors and degraded the functionality of four other sensors that were to provide these data. DOD, NASA, and NOAA are now responsible for determining what to restore, how to restore it, and the means for doing so. This responsibility includes justifying the additional funding needed to develop these sensors within their respective agencies' investment decision processes. Best practices of leading organizations call for defining a strategic plan to formalize priorities and plans for meeting mission goals. Such a plan would include the agency's long-term goals for climate and space weather measurements, the short-term activities needed to attain these goals, and the milestones and resources needed to support the planned activities. Since the June 2006 restructuring, NASA, NOAA, and DOD have taken preliminary steps to restore sensor capabilities by determining priorities for restoring sensor capabilities, assessing options for obtaining sensor data over time, and making decisions to restore selected sensors. 
Specifically, in August 2006, the NPOESS Senior User Advisory Group—a group representing NASA, NOAA, and DOD system users—assessed the impact of the canceled or degraded sensors and identified priorities for restoring them. In January 2007, a NOAA and NASA working group on climate sensors prioritized which of the sensors were most important to restore for climate purposes and proposed possible solutions and mitigation efforts. Two other groups—the National Research Council and a NOAA-DOD working group—have also issued reports describing the impact of the loss of climate and space environmental sensors, respectively. Table 6 summarizes the results of these studies. In addition to prioritizing the sensors, NASA, NOAA, and DOD identified a variety of options for obtaining key sensor data over the next two decades and continue to seek other options. The agencies identified options including adding sensors back to a later NPOESS satellite, adding sensors to another planned satellite, and developing a new satellite to include several of the sensors. Examples of options for several sensors are provided in figure 1. In addition, in December 2007, NOAA released a request for information to determine whether commercial providers could include selected environmental sensors on their satellites. In addition to prioritizing sensors and identifying options, over the last year, NASA, NOAA, and DOD have taken steps to restore three sensors on a near-term basis. Specifically, in April 2007, the NPOESS Executive Committee decided to restore the limb component of the Ozone mapper/profiler suite to the NPP satellite; in January 2008, to add the Clouds and the earth's radiant energy sensor to NPP; and in May 2008 to add the Total solar irradiance sensor to the first NPOESS satellite. These decisions are expected to provide continuity for these sensors through approximately 2015. Table 7 shows the latest planned configuration of NPOESS satellites. 
NASA officials noted that they also took steps to mitigate a potential gap in total solar irradiance data by proposing to fund an additional 4 years of the SORCE mission (from 2008 to 2012). While NASA, NOAA, and DOD have taken preliminary steps to address the climate and space sensors that were removed from the NPOESS program almost 2 years ago, they do not yet have plans for restoring climate and space environment data on a long-term basis. Specifically, there are as yet no firm plans for obtaining most of these data after 2015. The Office of Science and Technology Policy, an organization within the Executive Office of the President, is currently working with NASA, NOAA, and DOD to sort through the costs and benefits of the various options and to develop plans. However, this effort has been under way for almost 2 years and officials could not estimate when such plans would be completed. Delays in developing a comprehensive strategy for ensuring climate and space data continuity may result in the loss of selected options. For example, NASA and NOAA estimated that they would need to make a decision on whether to build another satellite to obtain ocean altimeter data in 2008. Also, the NPOESS program office estimated that if any sensors are to be restored to an NPOESS satellite, it would need a decision about 6 years in advance of the planned satellite launch. Specifically, for a sensor to be included on the second NPOESS satellite, the sponsoring agency would need to commit to do so by January 2010. Without a timely decision on a plan for restoring satellite data on a long-term basis, NASA, NOAA, and DOD risk losing their windows of opportunity on selected options and restoring sensors in an ad hoc manner. Ultimately, the agencies risk a break in the continuity of climate and space environment data. 
As national and international concerns about climate change and global warming grow, these data are more important than ever to try to understand long-term climate trends and impacts. Because of the importance of effectively managing the NPOESS program to ensure that there are no gaps in the continuity of critical weather, environmental, and climate observations, in our accompanying report we made recommendations to the Secretaries of Commerce and Defense and to the Administrator of NASA to establish plans on whether and how to restore the climate and space sensors removed from the NPOESS program by June 2009, in cases where the sensors are warranted and justified. In their comments on the report, all three agencies concurred with our recommendations. In addition, both the Department of Commerce and NASA reiterated that they are working with their partner agencies to finalize plans for restoring sensors. We also reemphasized a recommendation made in our prior report that the appropriate NASA, NOAA, and DOD executives immediately finalize key acquisition documents. All three agencies also concurred with this recommendation. Further, Commerce noted that DOD and NASA executives need to weigh in to resolve issues at, or immediately below, their levels in order to ensure prompt completion of the key acquisition documents. NASA noted that difficulties in gaining consensus across all three NPOESS agencies have delayed the signature of key acquisition documents, and reported that they are committed to moving these documents through the signature cycle once all of the issues and concerns are resolved. In summary, over the past year, program officials have completed major activities associated with restructuring the NPOESS program and have made progress in developing and testing sensors, ground systems, and the NPP spacecraft. However, multiple risks remain. 
Agency executives have still not signed off on key acquisition documents that were originally to be completed in September 2006, and now DOD is threatening to withhold funding if the documents are not completed by August 2008—even though DOD has contributed to the delays in completing these documents. Also, one critical sensor has experienced technical problems and schedule delays that have led program officials to delay the NPP launch date by about 8 months. Any delay in the NPP launch date shortens the time available for incorporating lessons learned from NPP onto future NPOESS missions and could also lead to gaps in critical climate and weather data. In addition, risks to the program remain in resolving interagency disagreements on the appropriate level of system security and in revising estimated costs for satellite operations and support. The program office is aware of these risks and is working to mitigate them, but continued problems could affect the program’s overall schedule and cost. When selected climate and space weather sensors were removed from the NPOESS program during its restructuring, NASA, NOAA, and DOD became responsible for determining what environmental data to restore and how to restore them. This responsibility includes justifying the additional funding needed to develop these sensors within their respective agency’s investment decision processes. In the 2 years since the restructuring, the agencies have identified their priorities and assessed their options for restoring sensor capabilities. In addition, the agencies made decisions to restore two sensors to the NPP satellite and one to the first NPOESS satellite in order to mitigate near-term data gaps. However, the agencies lack plans for restoring sensor capabilities on a long-term basis. Without a timely decision on a long-term plan for restoring satellite data, the agencies risk a break in the continuity of climate and space environment data. 
With the increased concern about climate change and global warming, these data are more important than ever to try to understand long-term climate trends and impacts. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the committee may have at this time. If you have any questions on matters discussed in this testimony, please contact me at (202) 512-9286 or by e-mail at [email protected]. Other key contributors to this testimony include Colleen Phillips (Assistant Director), Kate Agatone, and Kathleen S. Lovett. Table 1 identifies the key NPOESS acquisition documents as well as their original and revised due dates. Original due dates were specified in the June 2006 restructuring decision memo. The revised due dates were specified in an addendum to that memo, dated June 2007, and then revised again in another addendum, dated April 2008. Documents that are in bold are overdue.
|
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) is a tri-agency acquisition--managed by the Department of Commerce's National Oceanic and Atmospheric Administration (NOAA), the Department of Defense (DOD), and the National Aeronautics and Space Administration (NASA)--which has experienced escalating costs, schedule delays, and technical difficulties. These factors led to a June 2006 decision to restructure the program, thereby decreasing its complexity, increasing its estimated cost to $12.5 billion, and delaying the first two satellites by 3 to 5 years. GAO was asked to summarize a report being released today that evaluates progress in restructuring the acquisition, assesses the status of key program components and risks, and assesses the agencies' plans for obtaining the data originally planned to be collected by NPOESS sensors, but eliminated by the restructuring. The NPOESS program office has completed most of the major activities associated with restructuring the acquisition, but key activities remain to be completed. In the past year, the program redefined the program's deliverables, costs, and schedules, and renegotiated the NPOESS contract. However, agency executives have not yet finalized selected acquisition documents. Without executive approval, the program lacks the underlying commitment needed to effectively manage a tri-agency program. In addition, given that DOD has stated it would not release fiscal year 2009 funds to the NPOESS program if key acquisition documents are not completed by August 2008, delays in completing these documents could affect the program's funding and schedule. In the past year, the NPOESS program has made progress in completing development and testing activities associated with the spacecraft, sensors, and ground systems. However, key milestones have been delayed and multiple risks remain. 
Specifically, poor workmanship and testing delays caused an 8-month slip in the delivery of a complex imaging sensor called the Visible/infrared imager radiometer suite. This late delivery caused a corresponding 8-month delay in the expected launch date of the NPOESS Preparatory Project demonstration satellite, moving it from late September 2009 to early June 2010. Moving forward, risks remain in completing the testing of key sensors and integrating them on the spacecraft, resolving interagency disagreements about the appropriate level of system security, and revising outdated operations and support cost estimates--which program officials say could increase the lifecycle cost by about $1 billion. The program office is aware of these risks and is working to mitigate them, but these issues could affect the program's overall schedule and cost. When the NPOESS restructuring agreement removed four climate and space environment sensors from the program and degraded four others, it led NASA, NOAA, and DOD to reassess their priorities and options for obtaining climate and space environment data. Since the June 2006 restructuring decision, the three agencies have taken preliminary steps to restore the capabilities of selected climate and space weather sensors that were removed from the NPOESS program by prioritizing the sensors, assessing options for restoring them, and making decisions to mitigate near-term data continuity needs by restoring two sensors to the demonstration satellite and one sensor to the first NPOESS satellite. However, the agencies have not yet developed plans on whether and how to replace sensors on a long-term basis as no plans have been made for sensors or satellites after the first satellite of the program. Until such a plan is developed, the agencies may lose their windows of opportunity for selecting cost-effective options or they may resort to an ad hoc approach to restoring these sensors. 
Almost 2 years have passed since key sensors were removed from the NPOESS program; further delays in establishing a plan could result in gaps in the continuity of climate and space data.
|
Information systems are critical to the health, economy, and security of the nation. To support these systems, the federal government plans to invest more than $89 billion in IT in fiscal year 2017. However, prior IT expenditures too often have produced failed projects—that is, projects with multimillion dollar cost overruns, schedule delays measured in years, and questionable mission-related achievements. These failed projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT investments. Based on these issues, in 2015, we designated the management of IT acquisitions and operations across the federal government as high risk. DHS has been challenged in improving the management of its IT projects. We have reported on these challenges since shortly after the department was created in 2002. In particular, we have reported on DHS's need to improve its executive oversight of IT investments and its use of key program management practices. In 2003, we also designated the transformation of DHS as high risk because it had to transform 22 agencies—several with major management challenges—into one department. The department subsequently made important progress in implementing its range of missions and in strengthening and integrating its management functions (e.g., acquisition, financial, and IT). However, in 2015 we reported that, among other things, additional work was needed for DHS to continue to improve its IT management. We have made numerous recommendations to help the department address these challenges. Over the last three decades, Congress has enacted several laws to assist the federal government in managing IT investments. 
For example, the Paperwork Reduction Act of 1995 required OMB to develop and oversee policies, principles, standards, and guidelines for federal agency IT functions. It also required individual agencies to establish processes for maximizing the value and managing the risk of major information system initiatives. The following year, in 1996, Congress enacted the Clinger-Cohen Act to strengthen those requirements by, among other things, mandating the appointment of agency CIOs. Under these two laws, CIO responsibilities for IT management include implementing and enforcing applicable government-wide and agency IT management principles, standards, and guidelines; assuming responsibility and accountability for IT investments; and monitoring the performance of IT programs and advising the agency head on whether to continue, modify, or terminate such programs. More recently, in December 2014, Congress passed IT reform legislation (commonly referred to as the Federal Information Technology Acquisition Reform Act or FITARA). This law holds promise for improving agencies' acquisitions of IT and enabling Congress to monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes provisions related to seven areas of management—referred to as FITARA sections. Five of these sections are applicable to DHS as a covered agency; six are applicable to OMB in its executive branch budget and policy role; and six are applicable to the General Services Administration, both as a covered agency and in its government-wide acquisition role: Agency CIO authority enhancements. Agency CIOs are required to (1) approve the IT budget requests of their respective agencies, (2) certify that IT investments are adequately implementing OMB's incremental development guidance, (3) review and approve contracts for IT prior to award, and (4) approve the appointment of other agency employees with the title or functions of component CIO. 
Enhanced transparency and improved risk management. OMB and agencies are to make publicly available detailed information on federal IT investments, and agency CIOs are to categorize their investments by risk. In addition, in the case of major investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment’s program manager conduct a review aimed at identifying and addressing the causes of the risk. Portfolio review. Agencies are to annually review their IT investment portfolios in order to, among other things, increase efficiency and effectiveness, and identify potential waste and duplication. Federal data center consolidation initiative. Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing the data centers (to include planned cost savings), and quarterly updates on progress made. Expansion of training and use of IT acquisition cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres. Maximizing the benefit of the federal strategic sourcing initiative. OMB is to issue regulations requiring that federal agencies compare their purchases of services and supplies to what is offered under the federal strategic sourcing initiative. Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. Most of these FITARA sections relate to our high-risk topic on the government-wide management of IT acquisitions and operations. With regard to this topic, for example, we focus on the need for CIO authority enhancements, portfolio reviews, and federal data center consolidation. 
In June 2015, OMB released guidance that describes how agencies are to implement FITARA. Among other things, this guidance outlined topic areas related to agency CIOs' roles and responsibilities—referred to as OMB's common baseline sections. For example, the CIO is responsible for engaging with program managers, reviewing and approving the IT budget request, and developing the IT workforce. Table 1 identifies OMB's 17 common baseline sections and associated topics. OMB also developed an assessment template for agencies to use to assess their current practices against the common baseline sections—referred to as a self-assessment. Based on OMB's guidance, agencies are expected to use the template to document areas where they are not in conformance with the baseline sections. The guidance also directed the agencies to develop action plans describing the changes they needed to make in order to conform to the baseline sections. The guidance further directed agencies to conduct an annual review and to update the self-assessment, with the first update to be completed by the end of April 2016. In response to the guidance, in November 2015, DHS submitted to OMB a self-assessment of its conformance with the common baseline. As a result of the assessment, DHS identified 130 action plans that it intended to implement to ensure that the department would meet all baseline responsibilities. According to the assessment, the department originally planned to implement all of the action plans by the end of May 2016. However, the department updated its assessment in April 2016 and revised the number of action plans to 131. It also deferred the implementation of certain action plans and revised the final time frame by which it expected to implement all of the plans to the second quarter of fiscal year 2018. As of April 2016, the department reported to OMB that it had fully implemented 109 of its 131 action plans. 
Appendix I lists the department’s 131 action plans and the respective OMB common baseline sections with which they are associated. DHS acquires IT and other capabilities that are intended to improve its ability to execute its mission to prevent and deter terrorist attacks, and protect against and respond to threats and hazards to the nation. In accordance with OMB guidance, the department classifies its IT investments as major and non-major investments. DHS’s capital planning guidance states that the department’s major investments are those that are expected to cost $50 million or more over their life cycles, while non-major investments are those expected to cost less than $50 million over their life cycles. According to data that DHS reported to OMB’s IT Dashboard, the department had 92 major IT investments in fiscal year 2016 and planned to spend about $5.1 billion on them during that year. DHS’s Under Secretary for Management is designated as the department’s Chief Acquisition Officer and, as such, is responsible for managing the implementation of department-wide acquisition policies. To help manage and oversee the department’s investments, DHS’s Office of Program Accountability and Risk Management is responsible for the department’s overall acquisition governance process and is to report directly to the Under Secretary for Management. Specifically, this office has the responsibility to develop and update program management policies and practices, facilitate and assist in the review of major programs, provide guidance for workforce planning activities, provide support to program managers, and collect program performance data. Further, per the department’s policy, DHS’s CIO, who also reports to the Under Secretary for Management, is responsible for setting departmental IT policies, processes, and standards. 
This official also is to ensure that IT acquisitions comply with the department's IT management processes, technical requirements, and approved enterprise architecture, among other things. Within the OCIO, EBMO has been given primary responsibility for administering the CIO's responsibilities and, as such, for ensuring that the department's IT investments align with its missions and objectives. EBMO is also responsible for leading the implementation of DHS's FITARA action plans. Figure 1 shows the key department-level organizations with IT acquisition management responsibilities at DHS. To help manage the department's IT acquisitions, DHS implemented a governance process—referred to as the IT Acquisition Review process—which is managed by EBMO. This governance process is intended to ensure IT acquisitions align with DHS's missions and policies. As part of this process, the CIO is responsible for reviewing, prior to award, contracts and agreements that have planned values of $2.5 million or more, among other criteria. In addition, DHS's components that have a CIO are to review, for their respective component, contracts and agreements with planned values of less than $2.5 million, among other criteria. DHS developed plans, including 131 action plans, that addressed the five sections of FITARA that were applicable to the department. Further, as of December 2016, DHS fully implemented 28 of the 31 action plans we selected for review. However, we identified 3 action plans that the department has not fully implemented because specific actions called for in these plans had not been undertaken. Ensuring that its action plans are fully implemented will better position DHS to effectively manage the department's IT acquisitions, consistent with FITARA. 
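The dollar-threshold routing rule in the IT Acquisition Review process described above can be expressed as a simple decision function. The sketch below is purely illustrative: the function name, constant, and return strings are assumptions made for demonstration, not part of any DHS system; only the $2.5 million threshold and the two reviewing offices come from the report.

```python
# Illustrative sketch (not a DHS system) of the IT Acquisition Review
# routing rule described in the report: contracts and agreements with
# planned values of $2.5 million or more are reviewed by the department
# CIO's process, while smaller ones are reviewed by the component CIO.

IT_ACQUISITION_REVIEW_THRESHOLD = 2_500_000  # dollars

def route_review(planned_value: int) -> str:
    """Return which office reviews an IT contract or agreement."""
    if planned_value >= IT_ACQUISITION_REVIEW_THRESHOLD:
        return "department CIO (IT Acquisition Review process)"
    return "component CIO"
```

For example, a $3 million contract would route to the department CIO's review, while a $500,000 contract would route to the relevant component CIO.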
DHS developed 131 action plans that, collectively, addressed three of the five applicable FITARA sections: (1) agency CIO authority enhancements, (2) portfolio reviews, and (3) development and deployment of an IT acquisition cadre. For example, related to the agency CIO authority enhancements section—which requires DHS to, among other things, approve the department's IT budget requests—the department developed action plans for identifying and reviewing relevant policies that impact the processes, roles, and responsibilities within DHS's budget phases; documenting and modeling the current processes; identifying gaps and opportunities in those current processes; and documenting and implementing updated policies to ensure the DHS CIO is involved in the department's IT budgeting decisions and the management of IT programs. In addition, DHS developed action plans that relate to the portfolio review section of FITARA. This section requires the department to annually review its portfolios of IT investments in order to, among other things, identify potential duplication in similar investments within the portfolios. DHS's action plans to address this section included, among other things, identifying gaps in the department's current processes where OMB's common baseline requirements were not satisfied; updating relevant policies and guidance to collect the necessary information related to executing the IT budget; and ensuring policy updates are approved by relevant parties. The department also created action plans to address the section of FITARA related to IT acquisition cadres. Specifically, this section requires the department to consider developing and implementing a cross-functional group trained in IT program management and acquisition. 
For example, DHS created action plans for identifying training opportunities that will enhance staff development at multiple levels, developing a workforce planning process for assessing the department's current technology skills, identifying existing employee skillsets related to acquisition and IT, and aligning the department's existing course inventory to acquisition certifications and specializations in IT. Beyond the 131 FITARA action plans, the department developed a separate plan that addressed the FITARA section that requires DHS to consolidate its data centers. Specifically, DHS developed a strategic plan that describes how it intends to implement OMB's data center consolidation guidance. In addition, the department previously developed a plan that is consistent with the remaining section of FITARA that is applicable to DHS—enhanced transparency and improved risk management. This section requires the department to make publicly available detailed information on its federal IT investments, and the CIO to categorize the department's investments by risk. Related to the requirements in this section, in 2013, DHS issued a plan which stated that IT programs' risks were to be assessed on a regular basis and that the assessments would serve as the basis for the ratings to be regularly published on OMB's IT Dashboard. (This plan is discussed in more detail later in the report.) As a result of the department developing these plans to implement FITARA, it should be better positioned to implement the act. Further, doing so has more effectively positioned the department to take steps to elevate the role of its CIO and improve the oversight of its IT acquisition and management. Of the 31 selected action plans that we reviewed, 28 of them (about 90 percent) have been fully implemented as of December 2016 (that is, the evidence provided by DHS fulfilled all aspects of the action plan's description), as the department reported. 
However, 3 action plans (about 10 percent) have not been fully implemented, even though the department reported them as fully implemented. In these instances, the evidence of the actions taken by the department fulfilled some, but not all, aspects of the action plan’s description. Table 2 provides the implementation status of the 31 selected action plans and the OMB common baseline sections associated with each of these action plans. For the 28 selected action plans that the department fully implemented, DHS officials had, for example, updated multiple policies related to the department’s planning, programming, and budgeting phases; ensured that the updated policies were approved by relevant parties; and submitted the updated policies to OMB. In addition, the department documented and implemented updated processes for the planning, programming, and budgeting phases to ensure that the CIO has, among other things, visibility into IT resource plans and decisions. Further, the department revised relevant documentation and processes to reflect the CIO’s responsibility to approve components’ requests for reprogramming or transferring IT resources. However, the remaining 3 selected action plans were not yet fully implemented due to two factors: (1) the steps taken did not address all planned actions and (2) DHS updated its policies with conflicting guidance. Specifically, DHS’s steps to implement action plans 19 and 20 addressed part, but not all, of these plans. Related to action 19—to leverage the updated DHS TechStat process to provide support to failing or troubled programs—OCIO officials were in the process of updating the department’s TechStat policy to comply with FITARA, but had not completed the update. As of December 2016, the officials stated that they could not provide a date for when the policy would be finalized. 
Until the CIO, who is responsible for establishing departmental IT policies, finalizes the TechStat policy, the department will be limited in its ability to ensure that DHS is meeting FITARA's IT acquisition reform goals, as well as consistently providing support to failing or troubled programs. With regard to action 20—to ensure the IT Program/Project Manager Center of Excellence reviews IT performance metrics and strategies—DHS developed IT performance metrics. However, as of December 2016, EBMO officials stated that the Center of Excellence had not begun using these metrics across all programs to identify poorly performing programs. These officials told us that they expected the Center of Excellence to begin using these metrics across all programs to identify those needing assistance in the second quarter of fiscal year 2017. Use of these metrics by the center will be vital to its ability to proactively identify poorly performing programs and help them to improve their performance. With regard to action 24, which required that the DHS CIO certify investments' incremental development activities, the department updated its multiple systems engineering life-cycle policies and guidance documents with conflicting information regarding who was to certify these development activities. While one of the policies was updated to specify that the DHS CIO was the certifier, another policy and a guidance document were updated to specify that the component CIO was the certifier. Officials from EBMO and the Office of Program Accountability and Risk Management stated that these documents were not written at the same time and, as a result, reflected conflicting policies and guidance that needed further clarification. However, the officials did not state when they intended to make the clarifications and updates to the policies and guidance. 
Until the Under Secretary for Management, who is responsible for managing the implementation of department-wide acquisition policies, updates DHS’s relevant policies and guidance in a consistent manner to identify that the DHS CIO is to certify investments’ incremental development activities, the department is at risk of excluding the CIO from important investment oversight activities. DHS currently faces a number of important challenges in implementing several selected FITARA provisions. These provisions relate to (1) the CIO’s approval of IT contracts and agreements before award, (2) the CIO’s evaluation of each major IT investment according to risk, and (3) the development and deployment of an IT acquisition cadre. While the department has taken steps aimed at addressing these challenges, more work remains. Moreover, until the department takes actions that fully address these challenges, the goal of FITARA to elevate the role of the department CIO may not be fully realized. FITARA prohibits a covered agency (such as DHS) from entering into a contract or agreement for IT or IT services (associated with major and non-major investments), unless the contract or agreement has been reviewed and approved by the agency CIO. FITARA allows the CIO to delegate these review and approval duties if a contract or agreement is to support a non-major IT investment. In such cases, the delegated official must report directly to the agency CIO. Accordingly, in order to properly distinguish the appropriate approving official, per FITARA, it is necessary for an agency to determine whether each IT contract and agreement is associated with a major or non-major investment. Alternatively, FITARA states that an agency may use its governance processes to approve any contract or agreement (associated with major investments), if the agency CIO is a full participant in the governance processes. 
Further, when governance processes are used for review of contracts or agreements associated with non-major IT investments, the CIO or an individual who reports directly to the agency CIO must be a full participant in the governance processes. While DHS used its governance process (e.g., the CIO's IT Acquisition Review process, discussed earlier) to approve contracts and interagency agreements associated with major and non-major investments, the DHS CIO did not directly review or approve any of the contracts or interagency agreements that we examined. Furthermore, the CIO or an appropriate delegate was not always a full participant in the department's use of its governance process to approve the contracts and interagency agreements that we reviewed, as required by FITARA. Specifically:

- Of the 48 contracts and 8 interagency agreements in our sample that department officials associated with major investments (i.e., those requiring additional management attention because of, among other things, their significance to the department's mission or high costs, as defined by OMB), the DHS CIO neither directly reviewed, nor participated in the governance process to review, any of those contracts or agreements, as required by FITARA. Instead, all of the contracts and interagency agreements were reviewed by either the Executive Director or Deputy Executive Director of EBMO, or a component official, which was not in compliance with FITARA.
- While an appropriate delegate who reported directly to the department CIO participated in the review of 5 of the 21 selected contracts that DHS officials associated with non-major investments, the department CIO or an appropriate delegate did not participate in the review of the remaining 16 contracts (about 76 percent). In addition, neither the DHS CIO nor an appropriate delegate participated in the review of any of the 5 interagency agreements in our sample that were associated with non-major investments. 
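FITARA's approval rule, as described above, can be reduced to a small decision function. The sketch below is illustrative only (the function, parameter names, and example values are assumptions, not DHS systems); it deliberately simplifies the governance-participation alternative to a single flag.

```python
def review_is_compliant(investment_type: str,
                        reviewer_is_cio: bool,
                        reviewer_reports_to_cio: bool = False,
                        cio_full_participant_in_governance: bool = False) -> bool:
    """Return True if a pre-award review satisfies FITARA's approval provision."""
    if investment_type == "major":
        # For major investments, the CIO must review directly or be a full
        # participant in the governance process; delegation to someone who
        # merely reports to the CIO is not permitted.
        return reviewer_is_cio or cio_full_participant_in_governance
    if investment_type == "non-major":
        # For non-major investments, the CIO may delegate to an official
        # who reports directly to the CIO.
        return reviewer_is_cio or reviewer_reports_to_cio
    raise ValueError(f"unknown investment type: {investment_type!r}")

# Mirrors the findings: an EBMO Executive Director reviewing a
# major-investment contract alone does not satisfy the provision.
print(review_is_compliant("major", reviewer_is_cio=False))                      # False
# A direct report to the CIO reviewing a non-major contract does.
print(review_is_compliant("non-major", False, reviewer_reports_to_cio=True))   # True
```

Under this framing, the reviews by deputy assistant commissioners or management analysts described below fail the non-major branch because those officials did not report directly to the DHS CIO.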
Instead, these contracts and interagency agreements were reviewed and approved, as part of the governance process, by someone who did not report directly to the DHS CIO, such as a deputy assistant commissioner or a management analyst. Such review and approval was not consistent with FITARA. Table 3 summarizes the number of selected contracts and interagency agreements that were and were not reviewed by the appropriate official prior to award, as required by FITARA. Further, the department CIO did not prioritize the reviews of contracts associated with major IT investments, even for those with known performance problems. For example, three of the contracts in our sample were associated with two major DHS IT investments with past or existing performance issues: Customs and Border Protection's Automated Commercial Environment investment and United States Citizenship and Immigration Services' Transformation investment. We have previously reported on significant performance problems with these investments. However, the DHS CIO did not directly review and approve the contracts for these troubled investments, as required by FITARA for contracts associated with major investments. Instead, the Customs and Border Protection CIO and the Executive Director of EBMO reviewed the contracts for these investments, respectively. According to OCIO officials, the department CIO delegated the approval of contracts and agreements in a way that was inconsistent with FITARA because DHS had a large volume of contracts, which made it challenging for the department CIO and those who reported to the CIO to review every contract and agreement. Specifically, data provided by DHS showed that, in fiscal year 2016, the department awarded approximately 5,100 contracts for IT or IT services. According to DHS officials, as a workaround for this resource constraint, the department CIO delegated the review and approval of contracts and agreements to EBMO or component officials. 
OCIO officials recognized that the department needs to make improvements to better meet the intent of FITARA’s contract and agreement approval section and they have begun taking steps to do so. For example, in May 2016, the department updated its department-wide acquisition procedures to require greater participation in the acquisition planning process by the DHS CIO and component CIOs. Specifically, the updated procedures specify that the DHS CIO is required to review and sign the acquisition plans—which are developed early in the procurement planning process and provide top-level plans for the overall acquisition approach—associated with major IT acquisitions that have estimated life-cycle costs of greater than $50 million or service acquisitions with an annual expenditure of $100 million or more. Additionally, the updated procedures specify that the component CIOs are to review and sign the acquisition plans for all acquisitions involving sensitive information. Further, in October 2016, OCIO updated its associated IT Acquisition Review governance process to implement these new procedures. Nevertheless, while these updates to the department-wide acquisition procedures and governance process represent improvements by allowing the CIO and component CIOs insight into early procurement planning, the CIO’s visibility into contracts is limited because these top-level acquisition plans do not include important details (e.g., the full scope of the work to be performed) that are contained in specific contracts. Additionally, the department’s governance process requires contracts or agreements that are associated with major investments and that have total estimated procurement values of at least $2.5 million to be submitted to the DHS OCIO for review. However, these processes still do not require contracts and agreements that are associated with major investments and are under this threshold to be submitted for CIO review, which is inconsistent with FITARA. 
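The gap between DHS's governance trigger and FITARA's requirement comes down to a dollar threshold. The sketch below illustrates it; the $2.5 million figure is from the report, but the function names and sample contract are assumptions for illustration.

```python
# DHS's governance process submits a contract to the OCIO only if it is
# associated with a major investment AND meets the value threshold.
DHS_OCIO_REVIEW_THRESHOLD = 2_500_000  # total estimated procurement value, USD

def dhs_requires_ocio_review(is_major: bool, value: int) -> bool:
    """DHS process as described in the report: threshold-gated review."""
    return is_major and value >= DHS_OCIO_REVIEW_THRESHOLD

def fitara_requires_cio_review(is_major: bool) -> bool:
    """FITARA: every major-investment contract needs CIO review,
    regardless of dollar value."""
    return is_major

# The gap: a hypothetical $1.8 million contract on a major investment
# escapes DHS's submission rule but still requires CIO review under FITARA.
contract = {"is_major": True, "value": 1_800_000}
print(dhs_requires_ocio_review(contract["is_major"], contract["value"]))  # False
print(fitara_requires_cio_review(contract["is_major"]))                   # True
```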
In response to our concerns, in April 2017, OCIO officials stated that they had begun to analyze how they could best increase the CIO's and appropriate delegates' reviews of contracts and agreements, while considering the department's staffing constraints. The officials also stated that, once this analysis is complete, they plan to update their governance process accordingly; however, they did not know when these actions would be completed. Until the governance process is updated in a way that increases the CIO's and appropriate delegates' reviews of contracts and agreements associated with major and non-major investments, the DHS CIO will continue to have limited visibility into the department's planned IT expenditures. Additionally, the CIO may lack critical data to make investment decisions and may not be able to use the increased authority that FITARA's contract and agreement approval provision is intended to provide. Compounding this issue, FITARA does not allow agency CIOs to delegate the review and approval of contracts and agreements associated with major investments, yet DHS officials were unable to map many of the contracts and interagency agreements in our sample to a major or non-major IT investment; as such, they could not ensure that these contracts and agreements were reviewed by the appropriate officials. Specifically, officials from DHS headquarters, Customs and Border Protection, and the U.S. Coast Guard were unable to map 23 of the 92 contracts (about 25 percent) and 11 of the 24 interagency agreements (about 46 percent) in our sample to a major or non-major IT investment. The officials cited various reasons why they could not map these contracts and interagency agreements to a major or non-major IT investment. 
Specifically, OCIO officials stated that only contracts and agreements that go through the department’s headquarters-level contract approval process (i.e., defined by DHS as those valued at $2.5 million or over and are associated with major investments) are required to identify the associated investments. These officials stated that, at the headquarters level, the department does not ask about the investments associated with contracts and agreements that do not go through this headquarters-level contract approval process. While Customs and Border Protection officials were able to identify the IT investments associated with the majority of their contracts and interagency agreements in our sample, these officials stated that certain contracts were not associated with planned IT investments. Rather, according to the officials, these contracts were to address emerging needs (e.g., a need for new laptops) that Customs and Border Protection offices had identified that were not originally planned as part of an investment. U.S. Coast Guard officials stated that their process for accounting for all IT costs does not include a mapping of every contract or agreement to a major or non-major IT investment. These officials said they were working with DHS headquarters to improve their process for tracking contracts and agreements associated with IT investments, but did not specify a time frame for completing this effort. Until the Under Secretary for Management updates DHS headquarters’, Customs and Border Protection’s, and U.S. Coast Guard’s processes to track, for all contracts and agreements, the IT investment with which each is associated (as applicable), the department will be challenged in its ability to ensure that the contracts and agreements that are associated with these investments receive the appropriate level of oversight. 
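The tracking gap described above amounts to an incomplete contract-to-investment mapping: without a record tying each contract to an investment, the correct approving official cannot be determined. A minimal sketch of auditing such a mapping follows (contract and investment identifiers are invented for illustration; in GAO's actual sample, about 25 percent of contracts were unmapped).

```python
from typing import Optional

# Assumed data shape: contract id -> associated IT investment id,
# with None meaning no investment could be identified (e.g., an
# unplanned purchase to meet an emerging need).
contract_to_investment: dict[str, Optional[str]] = {
    "HSHQ-001": "INV-100",
    "HSHQ-002": None,      # e.g., laptops bought outside a planned investment
    "HSHQ-003": "INV-205",
    "HSHQ-004": None,
}

# Flag every contract whose approving-official requirement cannot be
# determined because it is not mapped to an investment.
unmapped = sorted(c for c, inv in contract_to_investment.items() if inv is None)
rate = len(unmapped) / len(contract_to_investment)

print(unmapped)        # ['HSHQ-002', 'HSHQ-004']
print(f"{rate:.0%}")   # 50%
```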
FITARA requires each agency CIO to categorize its major IT investments according to risk, in accordance with guidance issued by the Director of OMB. In this regard, OMB issued guidance in June 2015 that directed agency CIOs to evaluate and categorize (i.e., rate) the risk of each major IT investment. In addition, OMB’s guidance directs agencies to report their CIO risk ratings on OMB’s public website known as the IT Dashboard. Prior to October 2016, DHS’s OCIO, on behalf of the CIO, was conducting such evaluations on the department’s major IT investments in accordance with OMB’s six criteria. The office was also regularly updating the associated CIO risk ratings on the IT Dashboard, as required by FITARA and OMB. However, as of October 2016, the CIO was no longer directly responsible for the full evaluations or the associated risk ratings that are publicly reported on the IT Dashboard for approximately one-third of the department’s major IT investments. This was due to DHS’s Under Secretary for Management issuing a new policy in October 2016 that assigned responsibility for collecting the appropriate acquisition program data for evaluating the health of all level one and level two major acquisition programs (both IT and non-IT) that are on the department’s Master Acquisition Oversight List to the Office of Program Accountability and Risk Management. According to EBMO officials, as of December 2016, these level one and level two investments that the Office of Program Accountability and Risk Management was to facilitate the evaluation of included 30 of DHS’s 93 major IT investments. DHS’s policy further states that the department CIO is to report the ratings that are facilitated by the Office of Program Accountability and Risk Management on these 30 IT investments to OMB’s IT Dashboard. 
The officials also stated that OCIO is to continue to have responsibility for the evaluations of the 63 other IT investments not on that oversight list, and for reporting the associated risk ratings of these investments to the IT Dashboard. According to the Office of Program Accountability and Risk Management’s evaluation template, 39 factors, each with an associated weight, are to be considered in conducting the evaluations, and each factor is to be assessed by different organizations and officials within the department. These organizations and officials include, among others, the Offices of Program Accountability and Risk Management, the Chief Procurement Officer, the Chief Information Officer, Systems Engineering, and the Chief Financial Officer; as well as the Director of Operational Test and Evaluation, and the Joint Requirements Council. After all of the offices prepare their parts of the assessment, the Office of Program Accountability and Risk Management is to calculate a final evaluation rating based on the 39 factors and their weights. For its part, the CIO is responsible for assessing the 30 IT investments against 10 of the 39 factors, which accounts for about 18 percent of the total assessment score. Thus, over 80 percent of the evaluation and final assessment score for the investments included in the evaluation facilitated by the Office of Program Accountability and Risk Management does not involve the key IT management executive—the CIO. Moreover, DHS’s CIO was previously responsible for evaluating and reporting the associated risk ratings of the department’s 30 major IT investments on the Master Acquisition Oversight List against the criteria that OMB’s 2015 guidance stated CIOs may use to evaluate and report the risk of their programs. However, as shown in table 4, under the new process facilitated by the Office of Program Accountability and Risk Management, the CIO is only responsible for assessing these investments against one of OMB’s criteria. 
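To make the weighting arithmetic concrete, the toy sketch below computes a weighted rating and one office's share of it. The factor names, weights, and scores are invented; the report states only that there are 39 weighted factors and that the CIO's 10 factors account for about 18 percent of the total score.

```python
def overall_rating(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-factor scores (here on a 1-5 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[f] * weights[f] for f in weights) / total_weight

def share_of_score(weights: dict[str, float], owned: set[str]) -> float:
    """Fraction of the final rating attributable to one office's factors."""
    return sum(w for f, w in weights.items() if f in owned) / sum(weights.values())

# Five invented factors standing in for the 39; two are assessed by the CIO.
weights = {"cost": 0.30, "schedule": 0.30, "cio_risk": 0.10,
           "cio_arch": 0.08, "test_eval": 0.22}
scores  = {"cost": 4, "schedule": 3, "cio_risk": 2,
           "cio_arch": 5, "test_eval": 4}
cio_factors = {"cio_risk", "cio_arch"}

print(round(overall_rating(scores, weights), 2))       # 3.58
print(round(share_of_score(weights, cio_factors), 2))  # 0.18
```

The point of the sketch is that even when the CIO rates its own factors poorly (the 2 above), the final rating is dominated by the other offices' inputs, which is the concern the report raises.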
Further, while the Office of Program Accountability and Risk Management is responsible for facilitating the development of the risk ratings that are reported to the IT Dashboard for these 30 IT investments, as of December 2016, according to DHS officials, OCIO was also conducting a separate evaluation on these investments. Specifically, OCIO officials stated that they have continued to conduct their own evaluations in order to meet OCIO’s other investment oversight responsibilities. As such, the Under Secretary for Management’s October 2016 assignment of responsibility for facilitating the assessment of these investments to the Office of Program Accountability and Risk Management is not only in conflict with FITARA, but also in conflict with guidance the Acting Deputy Under Secretary for Management issued in April 2015 in response to our prior recommendation to the department. Specifically, in March 2015, we reported that there were overlapping responsibilities and duplicative efforts between the Office of Program Accountability and Risk Management and the OCIO in the oversight and management of IT investments on the Master Acquisition Oversight List. We recommended in our 2015 report that DHS develop written guidance to clarify the roles and responsibilities of the Office of Program Accountability and Risk Management and OCIO for conducting oversight of major acquisition programs. In response to our recommendation, in April 2015, the Acting Deputy Under Secretary for Management issued guidance that clarified that the CIO is responsible for performing the program assessments for the IT investments on the Master Acquisition Oversight List, which then are to be reported on the IT Dashboard. 
Accordingly, the Under Secretary for Management’s recent change suggests that the issue of overlapping responsibilities and duplicative efforts between the Office of Program Accountability and Risk Management and the OCIO in the oversight and management of certain IT investments that we raised 2 years ago has not yet been adequately addressed within the department. Thus, rather than elevating the CIO’s role per the goal of FITARA, the recent change in DHS’s evaluation of these IT investments is achieving the opposite effect by reducing the CIO’s role and creating a barrier for this official to appropriately report investment risk ratings to the Dashboard. According to EBMO officials, the department’s goal is to use one evaluation process that covers all major IT investments in order to ensure consistency across all evaluations reported on the Dashboard. However, as of December 2016, DHS officials did not know when the department would begin using only one evaluation process for its major IT investments, or who would be responsible for those reviews under that single process. Until the Under Secretary for Management updates and implements the process that the department uses for assessing the risks of major IT investments to ensure that the ratings reported fully reflect the CIO’s assessment of each major IT investment, Congress’ and the public’s insight into the assessment of each major investment’s risk and performance will be limited. FITARA requires agencies to update their acquisition human capital plans to address how the agencies are meeting their human capital requirements. In particular, the act requires agencies to consider, among other things, establishing cross-functional groups trained in IT program management and IT acquisition—referred to as IT acquisition cadres. 
In July 2011 (prior to the enactment of FITARA), OMB’s Office of Federal Procurement Policy issued guidance that identified key knowledge areas essential for such a cadre, including, among other things, IT strategic planning, acquisition planning, information security requirements, risk management, requirements definition, and contract management. We have also previously issued a human capital guide that stresses the importance of federal agencies ensuring that their employees have the skills needed to perform effectively and achieve agency goals. Our guidance states that, among other things, federal agencies need to determine what skills and competencies are necessary in order to meet current and future challenges, assess any gaps in current skills and competencies, and address those gaps. Although DHS has taken certain actions toward implementing an IT acquisition cadre and developing an acquisition human capital plan, the department has experienced challenges in fully implementing this FITARA provision. Specifically, DHS has not defined its IT acquisition cadre. While DHS updated its acquisition human capital plan in April 2016 to address its use of the procedures required by FITARA, the department faces challenges in strengthening its IT acquisition cadre because it has not yet identified the specific positions or personnel that are to be included in the cadre. To its credit, the department identified the number of acquisition personnel that it has in multiple functional areas, such as its project/program managers, contracting officers, and system engineering staff. However, it has not determined how many of those staff are knowledgeable in IT investment management and whether they should be considered a part of the IT acquisition cadre. The department also reported in its April 2016 acquisition human capital plan that directors and project/program managers within OCIO are required to maintain appropriate certifications to oversee IT acquisitions. 
However, it has not determined whether this group of workforce professionals has the specialized skills and knowledge needed in all of the areas outlined in OMB’s Office of Federal Procurement Policy’s guidance. EBMO officials told us that they hope to define the entire IT acquisition cadre through a survey and/or skills assessment during fiscal year 2017; however, specific plans for doing so had not been established. Until the CIO establishes time frames and implements a plan for (1) identifying the specific staff or positions currently within its IT acquisition cadre; and (2) assessing whether these staff and positions address all of the specialized skills needed, as outlined in the Office of Federal Procurement Policy’s cadre guidance, the department risks not having the critical skills needed to effectively acquire IT services. In addition, the department will continue to be challenged in its ability to meet FITARA’s intent of making timely progress toward developing and strengthening its IT acquisition cadre. DHS lacks clarity on the acquisition skills needed to support its new IT delivery model. DHS’s IT Strategic Plan for fiscal years 2015 through 2018 calls for a paradigm shift in the department’s IT delivery model—from acquiring IT assets to acquiring services, and acting as a service broker (e.g., an intermediary between the purchaser of a service and the seller of that service). According to OCIO officials, this shift will require a significant change in the skillsets of DHS’s employees. However, the department has faced challenges in implementing this new IT delivery model because it has not identified its future skillset needs or determined the gaps, if any, between its employees’ current skillsets and its future needs. 
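At its core, the skills gap analysis the department has yet to perform is a comparison of needed versus current competencies. A minimal sketch using set difference follows; the skill names are illustrative, drawn loosely from the OFPP knowledge areas cited earlier, and are not DHS's actual inventory.

```python
# Competencies the new service-broker delivery model would require
# (hypothetical names).
needed = {
    "it_strategic_planning", "acquisition_planning",
    "information_security", "risk_management",
    "requirements_definition", "contract_management",
    "service_brokering",  # new under the service-broker model
}

# Competencies assumed present in the current workforce (hypothetical).
current = {"acquisition_planning", "contract_management",
           "requirements_definition"}

# The gap is simply what is needed but not yet held.
gaps = sorted(needed - current)
print(gaps)
# ['information_security', 'it_strategic_planning', 'risk_management', 'service_brokering']
```

A real analysis would of course operate on per-position or per-person skill records rather than two flat sets, but the set-difference framing is the underlying operation.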
DHS awarded a workforce management contract in July 2016 to, among other things, assist with the implementation of the new IT delivery model at headquarters, including defining future IT skill sets needed and conducting a skills gap analysis. However, while EBMO officials stated in December 2016 that they would conduct these activities by the end of fiscal year 2017, the department did not have a specific plan for when it would identify its future IT skillset needs, or analyze and address the skills gaps resulting from the new delivery model. Until the CIO establishes time frames and implements a plan for (1) identifying future IT skillset needs to support DHS’s new delivery model, (2) conducting a skills gap analysis, and (3) resolving any skills gaps identified, the department will continue to be challenged in its ability to ensure that it has the skillsets necessary to perform the new responsibilities associated with the shift. In response to FITARA, DHS has taken several key steps toward improving the department-level CIO’s role in IT acquisitions, including updating the department’s acquisition governance process and associated guidance to require greater participation by the CIO. However, additional actions are needed by the CIO. Specifically, related to the department’s incomplete implementation of its action plan to use the updated DHS TechStat process to provide support to failing or troubled programs, until the CIO finalizes the department’s TechStat policy, DHS will be limited in its ability to help such programs. In addition, the DHS CIO’s lack of review of certain contracts and agreements puts the department at risk of awarding duplicative or unnecessary contracts and agreements. 
As such, until the CIO updates the department’s IT Acquisition Review governance process to increase the number of contracts and agreements (associated with both major and non-major investments) that are reviewed by the CIO and appropriate delegates, the CIO will continue to have limited visibility into the department’s planned IT expenditures. Further, the department’s lack of knowledge about the specific staff or positions in its IT acquisition cadre; the skillsets it currently has; and the skills it needs to implement its new IT delivery model, reduces OCIO’s ability to ensure that it has all of the skillsets required. Without the CIO establishing time frames and implementing a plan for (1) identifying the specific staff or positions currently within its IT acquisition cadre; and (2) assessing whether these staff and positions address all of the specialized skills needed, as outlined in the Office of Federal Procurement Policy’s cadre guidance, the department risks not having the critical skills needed to effectively acquire IT services. Moreover, without the CIO establishing time frames and implementing a plan for (1) identifying future IT skillset needs to support DHS’s new delivery model, (2) conducting a skills gap analysis, and (3) resolving any skills gaps identified, the department will continue to be challenged in its ability to ensure that it has the skillsets necessary to perform the new responsibilities associated with the shift. DHS’s Under Secretary for Management has also taken actions aimed at implementing FITARA by updating the department’s acquisition policies and guidance documents. 
However, until the Under Secretary for Management makes additional updates to these acquisition policies and guidance documents to be consistent in identifying that the DHS CIO is to certify investments’ incremental development activities (as required by one of the department’s FITARA action plans), the CIO is at risk of not being included in important investment oversight activities. In addition, the contracts and interagency agreements for which DHS officials could not determine whether they were associated with a major investment is concerning. Until the Under Secretary for Management updates DHS headquarters’, Customs and Border Protection’s, and the U.S. Coast Guard’s processes to track, for all contracts and agreements, the IT investment with which each is associated (as applicable), the Under Secretary has limited assurances that these contracts and agreements will be reviewed by the appropriate officials. Lastly, the Under Secretary for Management’s recent policy change that limited the CIO’s input into risk ratings for certain major IT investments has devalued the CIO’s role. Until the Under Secretary updates and implements the process that the department uses for assessing the risks of major IT investments to ensure that the ratings reported to the IT Dashboard fully reflect the CIO’s assessment of each major IT investment, Congress’ and the public’s insight into the assessment of each major investment’s risk and performance will be limited. To ensure that DHS effectively implements FITARA, we are making seven recommendations to the Secretary of Homeland Security. 
Specifically, we are recommending that the Secretary of Homeland Security direct the Under Secretary for Management to direct the Chief Information Officer to take the following actions:

- finalize the department's TechStat policy;
- update the department's IT Acquisition Review governance process to increase the number of contracts and agreements (associated with both major and non-major investments) that are reviewed by the CIO and appropriate delegates;
- establish time frames and implement a plan for (1) identifying the specific staff or positions currently within the department's IT acquisition cadre and (2) assessing whether these staff and positions address all of the specialized skills and knowledge needed, as outlined in OMB's Office of Federal Procurement Policy's guidance for developing an IT acquisition cadre; and
- establish time frames and implement a plan for (1) identifying the department's future IT skillset needs as a result of DHS's new delivery model, (2) conducting a skills gap analysis, and (3) resolving any skills gaps identified.

Further, we are recommending that the Secretary of Homeland Security direct the Under Secretary for Management to take the following actions:

- update the department's acquisition policies and guidance to be consistent in identifying that the DHS CIO is to certify investments' incremental development activities;
- update DHS headquarters', Customs and Border Protection's, and U.S. Coast Guard's processes to track, for all contracts and agreements, the IT investment with which each is associated (as applicable); and
- update and implement the process DHS uses for assessing the risks of major IT investments to ensure that the CIO rating reported to the Dashboard fully reflects the CIO's assessment of each major IT investment.

DHS provided written comments on a draft of this report, which are reprinted in appendix II. 
In its comments, the department concurred with all seven of our recommendations and provided estimated completion dates for implementing each of them. For example, the department stated that, by June 30, 2017, its headquarters OCIO intends to develop a department-level plan for identifying the staff included in DHS's IT acquisition cadre. Further, it said the DHS OCIO plans to require the components to develop associated component-level plans for identifying their IT acquisition cadres. In response to our recommendation that the Under Secretary for Management update DHS headquarters' processes to track, for all contracts and agreements, the IT investment with which each is associated (as applicable), the department described recent actions that it had taken to implement this recommendation. Specifically, it stated that OCIO had updated the tool used as part of the IT Acquisition Review governance process to require that the contract number be provided for all acquisitions reviewed by headquarters OCIO. The department further noted that the tool also links each acquisition to the associated funding investment. The department reported that these updates were completed on January 31, 2017. We will follow up with the department to obtain documentation demonstrating that the tool tracks this information. In response to oral comments that were also provided by DHS officials on a draft of this report, we clarified one of our recommendations. The department concurred with this clarified recommendation in its written comments. In addition, we received technical comments from DHS headquarters and component officials, which we have incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. 
Should you or your staffs have any questions on information discussed in this report, please contact Carol Harris at (202) 512-4456 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The table below lists DHS’s 131 Federal Information Technology Acquisition Reform Act (FITARA) action plans and the respective Office of Management and Budget (OMB) common baseline sections with which they are associated. Additionally, the table lists DHS’s planned implementation dates for each of the department’s FITARA action plans, as of April 2016, and identifies the 31 action plans that were included in GAO’s review. In addition to the contacts named above, the following staff also made key contributions to this report: Shannin O’Neill (Assistant Director); Emily Kuhn (Analyst-in-Charge); Mathew Bader; Ronalynn (Lynn) Espedido; Rebecca Eyler; Javier Irizarry; and Corey Rodriguez.
In 2014, Congress enacted IT reform legislation, referred to as FITARA, which includes provisions related to seven areas of IT acquisition management. In 2015, OMB released FITARA implementation guidance that outlined agency CIO responsibilities and required agencies to develop action plans for implementing the guidance. This report examines, among other things, the extent to which DHS has implemented selected action plans and the key challenges that DHS has faced in implementing selected FITARA provisions. To do so, GAO analyzed DHS's efforts to implement a sample of 31 of 109 action plans that DHS had reported as complete and that described later-stage implementation steps. To determine challenges, GAO analyzed and compared DHS documentation, including a random sample of IT-related contracts and agreements, to selected FITARA provisions to identify gaps between what was required by FITARA and what DHS had implemented. These provisions required, among other things, significant coordination between DHS headquarters and five components. The Department of Homeland Security (DHS) has fully implemented 28 of the 31 selected Federal Information Technology (IT) Acquisition Reform Act (FITARA) action plans; however, as of December 2016, DHS had not fulfilled all aspects of 3 action plans. For example, one action plan is to use an updated process for reviewing troubled programs to provide support to such programs; however, DHS has not finalized its policy for this process. Until DHS ensures that these 3 plans are implemented, it will lack assurance that it is fulfilling FITARA's goals. DHS faces challenges in implementing certain FITARA provisions: Chief Information Officer (CIO) approval of contracts and agreements. FITARA requires, among other things, the agency CIO to review and approve IT contracts and agreements associated with major investments (e.g., high cost) prior to award.
However, the CIO did not participate in the approval of any of the 48 contracts in GAO's sample associated with major investments. While DHS has made improvements to its review process, until the Office of the CIO determines how to increase its review of contracts and agreements, the CIO will continue to have limited visibility into planned IT expenditures. CIO evaluation of risk. DHS's Office of the CIO was conducting risk evaluations of major IT investments and updating the ratings on the Office of Management and Budget's (OMB) public website known as the IT Dashboard, as required by FITARA. However, in October 2016, DHS changed its process for evaluating 30 of DHS's 93 major IT investments and, as a result, the CIO is no longer primarily responsible for the evaluations or associated risk ratings that are publicly reported for these investments. Instead, multiple DHS organizations and officials are to evaluate these investments and the CIO's assessment only accounts for about 18 percent of the total score. Further, while under the old process, DHS's CIO was responsible for assessing these 30 investments against criteria that OMB guidance stated CIOs may use, under the new process, the CIO is only to assess these investments against one of OMB's criteria (see table below). This process change challenges the CIO's ability to publicly report risk ratings. Until DHS addresses these challenges, the goal of FITARA to elevate the role of the department CIO in acquisition management will not be fully realized. GAO is making 7 recommendations to DHS to ensure that it fully and effectively implements FITARA. Among other things, GAO recommends that DHS fully implement the action plans and address challenges related to CIO contract approval and evaluation of risk. DHS concurred with all 7 recommendations and provided estimated completion dates for implementing each of them.
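Under the new process described above, the CIO's input is one weighted component among several. A minimal sketch of how such a composite rating might work, assuming hypothetical evaluators, weights (with the CIO at 18 percent, as the report notes), and thresholds; none of these values comes from DHS documentation:

```python
# Hypothetical composite risk rating, loosely modeled on the process the
# report describes: multiple DHS organizations score an investment, and
# the CIO's assessment carries only about 18 percent of the total weight.
# All evaluator names, weights, and thresholds are illustrative.

def composite_risk_rating(scores: dict[str, float], weights: dict[str, float]) -> str:
    """Combine per-evaluator scores (1 = low risk .. 5 = high risk)
    into a single published rating using a weighted average."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    total = sum(scores[name] * weights[name] for name in weights)
    if total >= 4.0:
        return "high risk"
    if total >= 2.5:
        return "medium risk"
    return "low risk"

weights = {"cio": 0.18, "program_office": 0.42, "component": 0.40}  # hypothetical
scores = {"cio": 5.0, "program_office": 2.0, "component": 2.0}

# Even a worst-case CIO score barely moves the composite:
# 5*0.18 + 2*0.42 + 2*0.40 = 2.54
print(composite_risk_rating(scores, weights))
```

The example illustrates the dilution the report raises: with most of the weight held by other organizations, the CIO's worst-case score of 5 still yields only a medium composite rating.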
In keeping with the FRA, IRS established a Records Management Program. The program provides for the application of management practices in the creation, maintenance, retrieval, preservation, and disposition of records. Although the mission of the Records Management Program is to document, protect, and efficiently manage all IRS records until final disposition, actual day-to-day management of case files is left to each IRS program. According to an IRS official, program managers establish policies and procedures for the management of paper and electronic case files and have overall responsibility for case file management in their program. In addition, each program develops its own policies and procedures, which are documented in IRS’s Internal Revenue Manual (IRM). Case files can be paper, automated, or a combination of both. For example, when IRS examines a tax return, the documents completed by the examiner can be automated (electronically maintained within their system) or paper (placed in physical case files). Case files may include income tax returns, audits, investigations, or claims. According to W&I and SB/SE officials, in fiscal year 2006 they closed over 13 million and 9 million cases, respectively. When cases are closed, IRS units perform managerial and quality reviews to ensure cases were handled correctly and the case files are complete and meet their quality standards. For example, in fiscal year 2006, the SB/SE field collection quality review group found that over 96 percent of the case files they reviewed were complete. Paper case files can be stored in the files area of Submission Processing Center (SPC) campuses or field offices, which are spread throughout the country. These case files are then transferred to Federal Records Centers (FRC) by filing coordinators located at the SPC or by information resource coordinators in the field. Once the specified retention period has expired at the FRC, the case files are destroyed.
The locations of Submission Processing Centers, Area Records Managers, and Federal Records Centers are illustrated in figure 1. According to an IRS official, the agency is in the process of automating some of its paper case files. IRS is developing scanning capability to convert documents from paper to electronic. However, this official stated that IRS is initially developing this scanning capability for its campuses and not its field offices. Furthermore, the agency does not have a plan or time frame for when all of its case files will be automated. As a result, IRS will be relying on paper case files for some years. Case files should be readily available for examination. However, IRS does not know whether it locates all of the paper case files it requests. In the absence of data on IRS’s success in retrieving paper case files, several sources of anecdotal information give some indication of the potential extent of problems in locating case files. Missing case files can result in lost revenue, create unnecessary taxpayer burden, make cases unavailable for other units such as quality review groups or advisory groups, and hinder congressional oversight. Paper case files are crucial for cases scheduled for Tax Court or District Court. Without complete case files, IRS can lose those cases. For example, advisory staff informed us of several instances where IRS lost revenue in District Court cases because a case file could not be located. In one case, IRS obtained approval to file a nominee lien against a taxpayer because the taxpayer had transferred an asset to another person. IRS pursued collection action, but the taxpayer challenged IRS in District Court asserting that IRS needed to prove that the taxpayer had use and control of the asset that was transferred. IRS was unable to locate the case file for the trial. As a result, IRS lost about $45,000. In another case, a taxpayer commingled individual assets with corporate assets. 
IRS levied the funds in the bank account of an individual who controlled the corporation to pay the tax debt owed by the corporation. IRS collected the money owed, but the taxpayer filed a claim that IRS needed to prove the money in the individual account belonged to the corporation. IRS had this information in the case file, but could not locate it. Therefore, IRS had to return the levied money, which amounted to about $40,000. IRS has acknowledged its historic difficulties in locating and retrieving case files. In September 2006, the Tax Court proposed a requirement that IRS file answers in all small tax cases ($50,000 or less). An answer includes admissions, qualifications, and denials by IRS of each material fact alleged in the Tax Court petition filed by the taxpayer. In preparation of the answer, IRS generally examines and relies on the information in the case file. When IRS responded to this Tax Court proposal, IRS stated that due to difficulties in locating and retrieving case files, the requirement to answer all small tax case petitions may lead to a number of motions to extend time in which to answer. Appeals informed us that they are having difficulty obtaining case files that are docketed for Tax Court, and they are working with the Business Units to try to determine the cause. For example, W&I recently made a commitment to send docketed case files to Appeals within 25 days. To assess W&I’s performance, W&I requested Appeals to provide them with a report so they could track their progress. Appeals developed a report which showed that out of about 900 docketed cases from W&I campuses, Appeals had been waiting over 25 days for 420, or 46 percent, of the case files. If Appeals cannot obtain a case file prior to the scheduled court date, IRS could lose the case and any revenue owed by the taxpayer. When IRS staff request paper case files, several attempts may be made to locate them.
For example, if a request is input into the Integrated Data Retrieval System (IDRS) and the case file is not located, another request may be input into IDRS. IRS may also perform special searches when normal searches are unsuccessful. When the paper case files still cannot be located, IRS may attempt to re-create them. This creates an additional cost and burden to IRS, which must use its scarce resources to perform special searches and re-create case files. To re-create case files, IRS obtains what information it can from its own systems. If IRS needs additional information from the taxpayer, IRS may contact the taxpayer and request this information. This process can result in unnecessary taxpayer burden. Some paper case files should be sent from field offices to campus file areas within a specified number of days after the case is closed. However, campus staff said they have had difficulty receiving the case files in the time specified. If case files are not sent to storage areas promptly, these case files may not be available for other units, such as quality review groups or advisory groups, in performing their tasks. In Collection Field Offices, some types of case files should be sent to the Centralized Case Processing (CCP) Unit as soon as they are closed so that quality review samples can be readily available for selection. CCP has 21 days to obtain the closed case files for review. In a report provided by quality review staff, for the first quarter of fiscal year 2007, CCP was able to obtain only 45 percent of the cases requested for review within the 21-day period. Since Collection realizes that CCP will not be able to obtain all of the case files requested, the number of case files requested is double the number needed to allow for cases that cannot be located. According to Collection officials in one field office, there are several reasons case files are not sent to CCP in a timely manner.
For example, for Revenue Officers, sending a case file to CCP is not a top priority, especially when their other priorities include active aged cases. Aged cases take priority because the older the case is, the less likely IRS is to recover the amounts owed. In addition, some Revenue Officers are reluctant to let their cases leave their office. We were also informed by these officials that the number of support staff who assist in sending the case files to CCP has been reduced, and in one instance case files were found in a secretary’s desk drawer. In two recent GAO audits, we asked IRS to pull random samples of paper case files, but IRS was unable to locate all of the case files requested. The portion of case files IRS could not locate ranged from about 10 to 14 percent. For example, in one audit, we requested examination case files from NRP, but IRS was unable to locate about 10 percent of the case files requested. NRP staff were notified that 61 of the case files we requested had been sent to them, but NRP officials said they never received these case files. In a second audit, we requested Collection Due Process (CDP) case files where the collection action had been appealed. IRS was unable to locate about 14 percent of these case files. According to IRS staff that assisted us with these requests, they had made several requests to obtain these case files, but IRS was still unable to locate them. TIGTA has also requested random samples of paper case files in some of its reviews where IRS was unable to provide all of the case files requested. In one TIGTA review, TIGTA was to determine whether IRS had an adequate system to ensure tax records could be located and received timely. According to TIGTA, these records included the following types of case files: individual and business examination cases, Earned Income Tax Credit adjustments, and Trust Fund Recovery Penalty (TFRP) cases. IRS was unable to provide about 19 percent of the case files requested by TIGTA. 
In the review, TIGTA recommended that IRS:

Ensure the quality and timeliness of the IRS Records Management Program by developing a method to track specific requests for tax records that will assign accountability, respond to customer problems, and provide management information for the program. According to IRS, it is in the process of modifying IDRS by adding a tracking feature, which will allow IRS to better track requests and provide improved service to requesters. IRS anticipates these modifications will be completed by January 2008.

Update procedures to include instructions for requesting workpapers for TFRP assessments, including guidance for determining to which local field office a request should be sent. According to IRS, it has finalized procedures that address the retrieval of workpapers for TFRP assessments. In addition, IRS has developed online training for Information Resource Coordinators (IRC) and interactive records management process guides that are available to all IRS employees.

Ensure the quality and timeliness of the IRS Records Management Program by developing a method to track specific requests for workpapers for TFRP assessments that will assign accountability, respond to customer problems, and provide management information for the program. According to IRS, it modified the Automated Trust Fund Recovery System to provide a standardized procedure for locating, retrieving, and controlling TFRP workpapers.

In another TIGTA review, TIGTA was to determine whether IRS complied with the legal guidelines and procedures for the filing of a notice of lien or a notice of intent to levy and the right of the taxpayer to appeal. However, TIGTA could not determine whether IRS complied with legal guidelines and required procedures because about 8 percent of the case files requested could not be located.
IRS has not developed the data needed to measure the performance of its case file processes, such as whether all of the paper case files it requests are located or received timely. Developing such data can assist IRS in determining how well it is complying with FRA and internal control requirements, where the process may be breaking down, and what process improvements to make. IRS staff request paper case files from IRS campuses and field offices and from FRCs. According to an IRS official, the manner in which case files are requested varies in each program. A filing coordinator told us that when a paper case file such as an examination case file with a Document Locator Number (DLN) has been closed and sent to the files area of a campus for storage and is later requested, the requester enters information about the case file such as the DLN into IDRS. This generates a form which the files area receives and uses to locate the case file. The coordinator also said that when the files area receives the form, files staff will either search for the case file in the files area or request it from the FRC if it has been sent for long-term storage. In contrast, we were told by an Area Records Manager (ARM), when a paper case file such as a collection case file that does not contain a DLN has been closed and sent directly to the FRC is later requested, a form is sent to the ARM which includes the FRC accession number, box number, and location of the case file which is needed by the FRC to locate the record. According to AWSS and W&I officials, regardless of the method used to request the case file, IRS does not track data on whether all of the case files it requests are located or how long it takes to retrieve them. Furthermore, these officials added that IRS does not track the reasons case files cannot be located. 
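The request-level data the report finds missing could, in principle, be captured by logging each retrieval attempt with its outcome and a reason code. The sketch below is purely illustrative; the record fields, reason codes, sample values, and helper functions are assumptions, not a description of IDRS or any actual IRS system.

```python
# Illustrative sketch of request-level tracking of the kind the report
# says IRS lacks: log each case file request with its outcome and, when
# the file cannot be located, a reason code. Field names, reason codes,
# and sample values are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class FileRequest:
    dln: str          # Document Locator Number of the requested file
    located: bool
    reason: str = ""  # reason code when the file was not located

def retrieval_rate(requests: list[FileRequest]) -> float:
    """Share of requests for which the case file was located."""
    return sum(r.located for r in requests) / len(requests)

def failure_reasons(requests: list[FileRequest]) -> Counter:
    """Tally of reason codes for unlocated files, showing where the
    process may be breaking down."""
    return Counter(r.reason for r in requests if not r.located)

log = [
    FileRequest("DLN-0001", True),
    FileRequest("DLN-0002", False, "never received by files area"),
    FileRequest("DLN-0003", False, "request information mismatch"),
    FileRequest("DLN-0004", True),
]
print(f"located {retrieval_rate(log):.0%} of requested files")
print(failure_reasons(log).most_common())
```

A tally like `failure_reasons` would supply exactly the management information the report describes as missing: whether files were never received, were misfiled, or were requested with mismatched information.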
When case files are requested by IRS staff, errors can occur in the process of requesting, locating, and sending case files as suggested by IRS officials in figure 2. Although campuses use forms that identify some reasons case files cannot be located, campuses do not track these reasons according to an IRS official. IRS has hired contractors to take over the responsibility of the files areas. Contractors are performing files activities at two campuses and will take over responsibility for the remaining five campuses between August and October 2007. The contractors are required to adhere to a Performance Work Statement (PWS) which lists the contractor’s duties, as well as IRM policies and procedures that are specified in the PWS. However, neither the PWS nor the IRM require that staff track the reasons paper case files cannot be located. Therefore, the contractors are not performing these tasks. In the three previously mentioned GAO audits, we asked IRS staff who requested our case files why the case files could not be located. IRS staff said they requested the case files from many campuses. However, there was no one in charge of the requests to track the progress of our file requests and provide specific reasons why the case files could not be located. For example, the ARMs act as liaisons with the FRCs and IRS Business Units and can assist staff in tracking and locating case files from the FRC. However, some staff who requested our case files did not request assistance from their ARM either because the ARM did not handle the case files sent to the FRC and left that up to the campus coordinators, or the staff who requested the case files said they were not aware they had ARMs. In addition, campus coordinators can assist in tracking and locating case files. However, one campus coordinator told us that many IRS staff may not know who their campus coordinators are. 
As mentioned previously, IRS staff told us that no one was in charge of our case file requests to provide information on the reasons the case files could not be located. This lack of clear responsibility differs from the way IRS manages document requests during our annual audit of IRS’s financial statements. According to an IRS official, IRS uses many designated coordinators and holds meetings to ensure that documents requested for the financial statement audit are received. If IRS had established clearer responsibility for coordinating and tracking our case file requests, such as it did to manage document requests during our financial statement audit, it might have located more of the case files we requested. IRS staff told us that when case files cannot be located, either the files area or FRC should annotate the request form to show the reason the case could not be located. In our Appeals audit, an Appeals official said the person who had requested the case files had retired, so they could not provide a reason why the case files could not be located. For the other two audits, IRS did not always provide the reasons the case files could not be located. In some of our cases IRS staff said:

No form was provided to identify why the case file could not be located.

A form was provided but it was not annotated as to why the case file could not be located.

An IDRS Transaction Record was received, which does not include reasons why files are not found.

According to a files coordinator, when an examination of a tax return is performed and an assessment is made, an IDRS Transaction Record is generated, which is a printout of the transaction. The files area should receive the transaction record, associate it with the case file, and then file them together in the files area according to their DLN. However, if the case file is not received, the Transaction Record is placed in the file by itself with no indication that the case file was not received.
Therefore, when the files area later tries to fill a request for that case file, the files area does not know why the case file is not there. The case file may not be there because it was not received or was misfiled. If documentation were placed with the Transaction Record indicating the case file was not received, the files area could provide that information to the requester. TIGTA experienced similar problems during one of its reviews mentioned previously. TIGTA requested a statistical sample of tax records which included about 1,000 case files, of which about 190 were not provided. In 64 percent of the cases that were not provided, IRS did not provide a form to identify why the case file could not be located. In 5 percent of the cases, IRS provided an incomplete response. In 26 percent of the cases, the case files were not found because the information in the request did not match the information in the files area of the campus or at the FRC. In 5 percent of the cases, IRS provided the wrong case file. IRS could use performance information to assess whether its agencywide case file management meets FRA and internal control standards requiring an economic and efficient management of records and that files be readily available for examination. Managers can use performance information to identify problems in existing programs, to try to identify the causes of problems, and/or to develop corrective actions. An important part of establishing performance measures is to identify which programs are to be measured (e.g., determining which IRS programs create a significant number of paper case files since some programs primarily create automated case files) and which aspects of program performance are the most important to measure (e.g., tracking the reasons paper case files cannot be located). Data collected for performance measures should be complete, accurate, and consistent enough to document performance and support decision making. 
The offices that have responsibility for making programs work should be responsible for developing performance measures. A clear connection between performance measures and program offices helps to both reinforce accountability and ensure that, in their day-to-day activities, managers keep in mind the outcomes their organization is striving to achieve. According to an AWSS official, AWSS has overall responsibility for the Records Management Program, but its focus is on record retention and coordination with FRCs, while case file management is left to the program managers. AWSS has recently instituted some performance measures, such as cycle time and the number of records that could not be located at the FRC. However, AWSS is unable to separately track paper case file results. According to the same official, program managers have day-to-day responsibility for case file management within their programs. However, program managers such as those in W&I have said they have not developed performance information to measure how well the programs are managing their case files. When we asked program managers who has overall responsibility for case file management across IRS’s compliance programs, they said they did not know. Without overall responsibility for case file management being clearly defined, IRS may not be able to develop performance information across all of its programs to determine how well paper case files are managed to achieve performance targets and whether its case file management processes are in accordance with FRA and internal control standards. IRS has options available that may improve the management of case files. However, IRS does not have data to determine which of the improvements are the most cost-effective for IRS. Improving case file management may result in additional costs to IRS once the agency determines which actions to take.
Improvements may also result in cost savings by reducing the amount of resources used to locate case files when multiple requests are made or case files need to be re-created. Improvements to case file management could include additional training and guidance. For example, IRS could provide training to IRS staff on providing the correct DLN when requesting paper case files. In addition, IRS could provide additional guidance to files coordinators on managing case files as contractors take over responsibility for the campus file areas. Another option for IRS could be to expand one of its closed case file tracking systems to track more case files and more complete information about their location. IRS managers have said that the agency does not have one centralized system to track paper case files. Instead, IRS uses many systems in its enforcement process to track open case files. However, these systems generally do not contain information on the location of case files at the FRC once they are sent there. Instead, this information is maintained manually throughout IRS. This can make it difficult to quickly locate files. For example, according to an IRS official, when Revenue Agents need a case file from outside their office, they have to determine the state and office the case was worked in and the year it was closed. Once they have identified this information, they have to go through that office to locate the FRC paperwork including the accession number, box number, and location. This information is needed to request the case file from the FRC. IRS does not know whether staff, such as Revenue Agents, are having difficulty obtaining paper case files when they use this process to locate them. To automate the location of paper case files sent to the FRC, SB/SE Collection officials said they enter FRC information along with other case information into its Junior system. 
Junior has been designed as a case processing, closed files inventory tracking system for cases that are being sent to the FRC. While IRS officials told us this system has improved IRS’s ability to locate paper case files, they provided no data to support this conclusion. In addition, the Junior system is only used by SB/SE Collection for specific types of cases (taxpayer delinquency investigations, taxpayer delinquency accounts, and installment agreements). However, an AWSS official suggested that use of Junior by other programs and operating divisions could improve locating closed paper case files. Using other technologies is another option IRS could consider to assist it in capturing real-time information on the location of paper case files. For example, field and headquarters officials from the Department of Veterans Affairs (VA) provided the following description of how it uses barcoding to track its benefit folders through its Control of Veterans Records System (COVERS). The Veterans Benefits Administration tracks its benefit folders by placing a barcode on each folder. Employees in the benefit office use a barcode scanner and printer. As the folders are moved, they are scanned and the location is automatically changed in VA’s record tracking system. VA’s tracking system includes information on the claim number, veteran’s name, and the current and previous locations of the case. The system not only tracks the location of benefit folders; it also assists in the management of these folders throughout the claims processing cycle. For example, reports can be run, such as aging reports which show how long a VA staff member has had a case. The VA officials also described the following benefits realized in barcoding benefit folders although they have no data to support this conclusion. 
Benefits of barcoding include the ability to scan the location of a benefit folder into their tracking system instead of entering it manually, which saves time and provides improved folder tracking accuracy. COVERS has also improved the files management activity in two additional ways: first, folder retirement was transformed from the previous manual IBM card-based process to the current automated process; second, the sequence checking of files in cabinets has been expedited while improving accuracy. An IRS Appeals official told us that Business Units receive listings of cases docketed for Tax Court and it is their responsibility to review the listings and send any docketed cases directly to Appeals. If the Business Units do not send the case files to Appeals and send them to storage areas instead (e.g., Files Area), this can result in long delays in receiving case files and potential Tax Court losses. According to Appeals officials, Appeals and W&I are tracking the number of days Appeals is waiting for docketed cases. This is a good first step in tracking whether case files are received timely. However, an IRS official said that IRS overall does not track the timeliness of case file receipt. Therefore, IRS lacks information that would be useful in determining whether implementing a barcoding system to capture real-time information on the location of case files would be beneficial. IRS does not have an effective process to ensure that paper case files can be located and made readily available for examination. Further, IRS has acknowledged its historic difficulties in locating and retrieving case files. IRS has lost revenue when it could not locate cases for District Court. IRS also could not provide a significant percentage of the cases requested in GAO and TIGTA audits. Failure to locate case files can create unnecessary taxpayer burden and make case files unavailable for other units in performing their tasks. 
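The COVERS-style tracking VA describes can be sketched in a few lines: each barcode scan appends a timestamped location to a folder's history, and an aging report falls out of the same data. This is a loose illustration of the concept, not VA's actual system; all class names, fields, and sample values are hypothetical.

```python
# Loose sketch of a COVERS-style barcode tracking concept: each scan of
# a folder's barcode records a new timestamped location, and the same
# data supports an aging report (how long each folder has sat at its
# current location). Hypothetical throughout; not VA's actual system.
from datetime import date

class FolderTracker:
    def __init__(self):
        # barcode -> list of (scan date, location), oldest first
        self.history: dict[str, list[tuple[date, str]]] = {}

    def scan(self, barcode: str, location: str, when: date) -> None:
        """Record a barcode scan, moving the folder to a new location."""
        self.history.setdefault(barcode, []).append((when, location))

    def current_location(self, barcode: str) -> str:
        return self.history[barcode][-1][1]

    def aging_report(self, as_of: date) -> dict[str, int]:
        """Days each folder has been at its current location."""
        return {bc: (as_of - scans[-1][0]).days
                for bc, scans in self.history.items()}

tracker = FolderTracker()
tracker.scan("F-0001", "files area", date(2007, 3, 1))
tracker.scan("F-0001", "quality review", date(2007, 3, 20))
tracker.scan("F-0002", "files area", date(2007, 2, 10))

print(tracker.current_location("F-0001"))  # quality review
print(tracker.aging_report(date(2007, 4, 1)))
```

Because scanning replaces manual data entry, the location history stays current as folders move, which is the accuracy and timeliness benefit the VA officials described.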
IRS has not developed performance measures or the data needed to measure the performance of its case file processes, such as whether all of the paper case files it requests are located or received timely. Instituting and monitoring performance measures across all of IRS’s compliance programs could assist the agency in (1) determining how well IRS is complying with FRA and internal control requirements, (2) identifying problems and the causes in existing programs, and/or (3) developing corrective actions. To improve IRS’s paper case file processes, the agency may incur additional costs once the agency determines which improvements it will make. However, the agency may also realize cost savings by reducing the amount of resources used to locate case files when multiple requests are made or case files need to be re-created. Developing performance measures may take IRS some time. However, IRS has some opportunities to improve case file management that can be performed more expeditiously, such as improving the coordination of large samples of case files, providing information to staff on who to contact when case files cannot be located, and ensuring case files are sent to storage areas as soon as they are closed. We are making recommendations to the Acting Commissioner of Internal Revenue to ensure that paper case files are readily available for examination. Specifically, we recommend that the Acting Commissioner:

Ensure that case files are managed in accordance with FRA and internal control standards, including tracking the number of paper case files that cannot be located or are received untimely and the reasons why, and developing performance measures to monitor the effectiveness of the paper case file process to assist in determining which options for improving paper case file management are the most beneficial for IRS.

Ensure that case file performance is monitored across IRS’s compliance programs by clearly defining responsibility for doing so.
- Establish clearer responsibility for coordinating large samples of case files so that specific reasons can be provided when case files cannot be located.
- Reiterate to staff who the campus coordinators and Area Records Managers are for requesting paper case files.
- Ensure that paper case files are sent to storage areas as soon as the cases are closed; include with the IDRS Transaction Record documentation indicating whether a case file was sent to storage; and track the number and location of paper case files that have not been sent to storage.

The Acting Commissioner of Internal Revenue provided written comments on a draft of this report in a September 18, 2007, letter, which is reprinted in appendix II. The Acting Commissioner agreed that staff often need paper case files to fulfill mission-critical requirements and that the failure to locate specific case files can affect tax administration and customer service. The Acting Commissioner also agreed that IRS needs to review its Records Management Program. However, instead of commenting on our specific recommendations, the Acting Commissioner stated that IRS will form a cross-functional working group to review the Records Management Program and develop corrective actions, taking into account our recommendations. The Acting Commissioner also described some systemic enhancements undertaken by W&I to facilitate requests for tax returns and associated records. IRS's planned review and development of corrective actions is responsive to our recommendations, and we look forward to its consideration of our recommendations as well as to the benefits to taxpayers and IRS of a more effective program.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to appropriate congressional committees and the Acting Commissioner of Internal Revenue.
We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9110 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

The objectives of this assignment were to review the Internal Revenue Service's (IRS) case file storage, tracking, and documentation process to determine whether IRS has (1) an effective process to ensure that paper case files can be located timely and (2) sufficient data to assess the performance of its paper case file processes. To make determinations in both areas, we interviewed over 50 program executives, managers, and staff from the Office of Agency-Wide Shared Services, the Small Business/Self-Employed Division, the Wage and Investment Division, the Office of Appeals, the Automated Collection System (ACS), and the National Research Program (NRP) to gain an understanding of the case file storage, tracking, and documentation process. We also visited one campus and one field office to gain an understanding of the processes they use to store, track, and document case files. We observed and inquired into the internal controls used by IRS in these processes. We did not interview officials from the Large and Mid-Size Business Division or the Tax-Exempt and Government Entities Division because only a very small percentage of the Appeals cases docketed for Tax Court, for which paper case files are especially important, originate from these divisions. We also reviewed IRS's policies and procedures.
To determine whether IRS has an effective process to ensure that paper case files can be located timely, we compared IRS's process to the requirements of the Federal Records Act and our standards for internal control in the federal government. We interviewed officials from Appeals, ACS, and NRP to identify the reasons IRS was unable to locate all of the case files we requested. We also spoke to Treasury Inspector General for Tax Administration (TIGTA) officials and reviewed prior TIGTA reports to determine whether IRS was able to provide all case files requested by TIGTA. Further, we interviewed officials from other organizations, including the Department of Education, the Department of Veterans Affairs, the Social Security Administration, and the California Franchise Tax Board, to identify key practices in managing case files. We conducted our work in Washington, D.C., and at one campus and one field office from August 2006 through July 2007 in accordance with generally accepted government auditing standards.

In addition to the contact person named above, Jonda Van Pelt, Assistant Director; Carl Barden; Wilfred Holloway; Laurie King; Donna Miller; Cheryl Peterson; and Sam Scrutchins made key contributions to this report.
Proper paper case file management is a significant issue for the Internal Revenue Service (IRS) because its staff investigate and close millions of case files every year. In addition, IRS employees depend heavily on case files when pursuing enforcement actions. GAO was asked to review IRS's case file storage, tracking, and documentation processes to determine whether IRS has (1) an effective process to ensure that paper case files can be located timely and (2) sufficient data to assess the performance of its paper case file processes. To review these processes, GAO interviewed staff who request case files and case file managers.

IRS does not have an effective process to ensure that paper case files can be located within the requesters' time frames. Missing case files can result in lost revenue, create unnecessary taxpayer burden, and make case files unavailable to other units such as quality review groups or advisory groups. IRS has acknowledged its historic difficulties in locating and retrieving case files. When IRS cannot locate paper case files, it may attempt to re-create them by requesting information from taxpayers, which can result in unnecessary taxpayer burden. Difficulties in locating case files can also hinder congressional oversight: when GAO requested case files in two prior audits, IRS could not locate all of the case files requested.

IRS does not have sufficient data to assess the performance of its paper case file management processes. Having such data would enable IRS to assess whether its case management processes comply with the Federal Records Act (FRA) and internal control standards. IRS does not track whether all of the case files it requests are located or received timely, or the reasons why case files cannot be located. If IRS developed such data, officials could use it to identify problems in existing programs, to identify their causes, and to develop corrective actions.
Records management officials have recently instituted some performance measures, but these measures do not specifically address paper case files. IRS program managers also have not developed performance measures or data to determine how well paper case files are managed against performance targets. Because program managers do not know who has overall responsibility for case file management, performance information cannot be developed across IRS's compliance programs. GAO identified some potential improvements that IRS can consider, but IRS will need to determine which improvements are the most cost effective.
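The performance data this report says IRS lacks, such as the share of requested case files that are located, the share received within a timeliness target, and the reasons files go missing, could be tabulated along the following lines. This is an illustrative sketch only: the request records, the 30-day target, and the reason categories are all hypothetical, not IRS data or an IRS system.

```python
from collections import Counter

# Hypothetical request records: (located?, days_to_receive or None, reason_if_missing)
requests = [
    (True, 12, None),
    (True, 45, None),
    (False, None, "not sent to storage"),
    (True, 20, None),
    (False, None, "charged out, no forwarding record"),
]

TIMELINESS_DAYS = 30  # illustrative service-level target, not an actual IRS standard

# Share of requested files that were located at all
located = [r for r in requests if r[0]]
pct_located = 100 * len(located) / len(requests)

# Of those located, share received within the timeliness target
pct_timely = 100 * sum(1 for r in located if r[1] <= TIMELINESS_DAYS) / len(located)

# Tally of reasons files could not be located, for corrective action
missing_reasons = Counter(r[2] for r in requests if not r[0])

print(f"Located: {pct_located:.0f}%")          # Located: 60%
print(f"Received timely: {pct_timely:.0f}%")   # Received timely: 67%
print(missing_reasons.most_common())
```

Even this simple tabulation would support the three uses the report identifies: measuring compliance, identifying problems and their causes (the reason tally), and targeting corrective actions.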